Luo D.,Key Laboratory of Machine Perception |
Luo D.,Peking University |
Ding Y.,Key Laboratory of Machine Perception |
Ding Y.,Peking University |
And 5 more authors.
2014 IEEE International Conference on Mechatronics and Automation, IEEE ICMA 2014 | Year: 2014
Stand-up motion is among the most essential behaviors for humanoid robots. To achieve stable stand-up behavior, traditional key-frame-based motion planning methods are time-consuming and depend heavily on expert knowledge. On the other hand, classic trial-and-error learning methods are inefficient due to the high degrees of freedom (DOFs) of humanoid robots and the difficulty of designing appropriate reward functions. In this paper, a multi-stage learning approach is proposed to address these issues. At the first stage, under a trajectory-based motion control model, key motion frames sampled from human motion capture data (HMCD) are used for model initialization, through which the solution space is pruned. At the second stage, the design of experiments (DOE) technique is introduced for fast, active searching in the pruned solution space. At the last stage, a refining process that adopts a stochastic gradient learning strategy is performed to achieve the final behavior. Under this three-stage learning framework, along with a simple heuristic reward function, the stand-up behavior of a kid-size humanoid robot is learned successfully and efficiently. © 2014 IEEE.
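A minimal Python sketch of the three-stage search described in this abstract. The trajectory model, the initialization, and the reward function below are all hypothetical toy stand-ins (the paper's reward is a heuristic over the real robot), but the staging — initialize, coarse DOE-style grid search, then stochastic local refinement — follows the abstract.

```python
import random

def reward(params):
    # Hypothetical stand-in for the heuristic stand-up reward
    # (higher is better); peaks at params = (0.3, 0.7).
    return -((params[0] - 0.3) ** 2 + (params[1] - 0.7) ** 2)

# Stage 1: initialization from key motion frames
# (here: a fixed hypothetical guess that prunes the search space).
init = (0.5, 0.5)

# Stage 2: design-of-experiments style coarse grid around the initialization.
candidates = [(init[0] + dx, init[1] + dy)
              for dx in (-0.2, 0.0, 0.2)
              for dy in (-0.2, 0.0, 0.2)]
best = max(candidates, key=reward)

# Stage 3: stochastic gradient-free local refinement of the grid winner.
random.seed(0)
step = 0.05
for _ in range(200):
    probe = (best[0] + random.uniform(-step, step),
             best[1] + random.uniform(-step, step))
    if reward(probe) > reward(best):
        best = probe

print(best)
```

The grid stage narrows the region quickly; the refinement stage only ever accepts strictly better probes, so the final parameters are at least as good as the DOE winner.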
Zhang J.,Key Laboratory of Embedded System and Service Computing |
Zhang J.,Tongji University |
Tan Y.,Key Laboratory of Machine Perception |
Tan Y.,Peking University |
And 8 more authors.
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | Year: 2010
The particle swarm optimizer (PSO) is a stochastic global optimization technique based on a social-interaction metaphor. Because of the complexity, dynamics, and randomness involved in PSO, it is hard to analyze theoretically the mechanism on which PSO depends. Statistical results have shown that the probability distribution of PSO is a truncated triangle, with uniform probability across the middle that decreases on the sides; this "truncated triangle" is also called the "Maya pyramid" by Kennedy. However, very little is known about the sampling distribution of PSO itself. In this paper, we analyze the "Maya pyramid" theoretically, without any assumption, and derive its computational formula, which is in fact a hybrid uniform distribution that looks like a trapezoid and is consistent with the statistical results. Based on the derived density function of the hybrid uniform distribution, the search strategy of PSO is defined and quantified to characterize its mechanism. To show the significance of these definitions, based on the derived hybrid uniform distribution, the defined search strategies of the classical linearly decreasing inertia-weight PSO and the canonical constricted PSO suggested by Clerc are compared and elaborated. Copyright © 2010 The Institute of Electronics, Information and Communication Engineers.
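The "hybrid uniform" shape is easy to see empirically: the attraction term in classic PSO combines two independent uniform draws (one toward the personal best, one toward the global best), and the sum of two independent uniforms on [0, a] and [0, b] has a trapezoidal density — linear ramps on the sides, flat across the middle. A small Monte Carlo sketch (the spans a and b are arbitrary illustrative values, not taken from the paper):

```python
import random

random.seed(42)

a, b = 1.0, 2.0  # hypothetical spans of the two uniform components
samples = [random.uniform(0, a) + random.uniform(0, b)
           for _ in range(200_000)]

# Histogram the samples over the support [0, a + b].
bins = 30
width = (a + b) / bins
counts = [0] * bins
for s in samples:
    counts[min(int(s / width), bins - 1)] += 1

# Between a and b the density is flat (the trapezoid's top);
# outside that range it ramps linearly down to zero.
mid = [c for i, c in enumerate(counts) if a <= (i + 0.5) * width <= b]
flatness = max(mid) / min(mid)
print(round(flatness, 2))
```

With enough samples the middle bins are nearly equal (`flatness` close to 1), while the outermost bins are much sparser — the trapezoid described in the abstract.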
Wang Z.,Peking University |
Yuan X.,Key Laboratory of Machine Perception |
Yuan X.,Peking University
2014 International Conference on Big Data and Smart Computing, BIGCOMP 2014 | Year: 2014
In this paper, we propose using timelines for 2D trajectory comparison. Trajectories rendered directly on a map do not show temporal information well and are cluttered and unaligned, which makes them difficult to compare. We convert trajectories into timelines, which show time naturally and are more compact and easier to align. In addition to simply showing how an attribute varies over time, we further propose several novel timelines that show spatio-temporal features. We provide use cases to demonstrate the benefit of our method. © 2014 IEEE.
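The core conversion can be sketched in a few lines: re-plot a 2D trajectory as a timeline of a derived attribute (speed is used here purely as an example; the sample points are hypothetical), so that time is explicit and multiple trajectories become easy to align on a shared time axis.

```python
import math

# Hypothetical (x, y, t) samples of one trajectory.
trajectory = [(0.0, 0.0, 0.0), (3.0, 4.0, 1.0), (6.0, 8.0, 3.0)]

def to_speed_timeline(points):
    """Turn (x, y, t) samples into (t, speed) pairs for a timeline view."""
    timeline = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        timeline.append((t1, dist / (t1 - t0)))
    return timeline

print(to_speed_timeline(trajectory))
```

Two such timelines can then be compared side by side or overlaid, which is difficult with raw map renderings of the same trajectories.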
Yin N.,Peking University |
Yin N.,Key Laboratory of Machine Perception |
Wang S.,Peking University |
Wang S.,Key Laboratory of Machine Perception |
And 4 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014
In many applications, time series appear highly periodic but never exactly repeat themselves. Such series are called pseudo periodic time series. Predicting pseudo periodic time series is an important and nontrivial problem: since the period interval is neither fixed nor predictable, errors accumulate when traditional periodic methods are employed. Meanwhile, many time series contain a vast number of abnormal variations, which can neither be simply filtered out nor predicted from neighboring points. Given that no specific method is yet available for pseudo periodic time series, this paper proposes a segment-wise method for predicting pseudo periodic time series with abnormal variations. In this method, time series are segmented by the variation patterns of each period, and only the segment corresponding to the target time series is chosen for prediction, which reduces the number of input variables. At the same time, choosing values highly correlated with the points to be predicted enhances prediction precision. Experimental results on data sets from China Mobile and on biomedical signals both demonstrate the effectiveness of the segment-wise method in improving the prediction accuracy of pseudo periodic time series. © Springer International Publishing Switzerland 2014.
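A toy sketch of the segment-wise idea: once each pseudo-period has been split into segments, a target segment is predicted only from the corresponding segment of past periods, not from the whole series. The data, the segmentation, and the averaging predictor below are all simplified illustrations, not the paper's actual model.

```python
# Each inner list is one detected pseudo-period, already split into
# four segments at (hypothetical) period boundaries.
periods = [
    [1.0, 4.0, 2.0, 1.0],
    [1.2, 4.4, 2.1, 0.9],
    [0.9, 3.8, 2.2, 1.1],
]

def predict_segment(history, seg_index):
    """Predict the next period's value at seg_index using only the
    corresponding segment of past periods, so the input variables are
    reduced to the one segment correlated with the target."""
    values = [p[seg_index] for p in history]
    return sum(values) / len(values)

pred = [predict_segment(periods, i) for i in range(4)]
print([round(v, 2) for v in pred])
```

Restricting the input to the matching segment is what keeps the variable count small even when periods drift in length, since alignment is per-segment rather than per-sample.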
Zhang H.-Y.,Key Laboratory of Machine Perception |
Zhang H.-Y.,Peking University |
Wang L.-W.,Key Laboratory of Machine Perception |
Wang L.-W.,Peking University |
And 2 more authors.
Ruan Jian Xue Bao/Journal of Software | Year: 2013
Probabilistic graphical models are powerful tools for compactly representing complex probability distributions, efficiently computing (approximate) marginal and conditional distributions, and conveniently learning parameters and hyperparameters in probabilistic models. As a result, they have been widely used in applications that require some sort of automated probabilistic reasoning, such as computer vision and natural language processing, as a formal approach to deal with uncertainty. This paper surveys the basic concepts and key results of representation, inference and learning in probabilistic graphical models, and demonstrates their uses in two important probabilistic models. It also reviews some recent advances in speeding up classic approximate inference algorithms, followed by a discussion of promising research directions. ©Copyright 2013, Institute of Software, the Chinese Academy of Sciences.
Liu Z.-Q.,Peking University |
Liu Z.-Q.,Key Laboratory of Machine Perception |
Li H.-Y.,Peking University |
Li H.-Y.,Key Laboratory of Machine Perception |
And 4 more authors.
Jisuanji Xuebao/Chinese Journal of Computers | Year: 2010
Nowadays, the process-driven method for information system construction is more and more widely used. In the process-driven method, the process model has a significant influence on the data model. Unfortunately, existing data-model anomaly detection methods address only anomalies of the data model itself and do not take the process model into account; similarly, process-model verification methods lack consideration of the data model. This paper analyzes these anomalies and classifies them into three basic classes. It also proposes a model called the Data-Process Graph (DP-graph) to build up linkage between the data model and the process model. Based on the DP-graph, the DPGT method is proposed to detect anomalies of the data model with respect to the business process model. Experimental results show that the method achieves a high anomaly detection rate.
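One kind of cross-model anomaly can be illustrated with a tiny sketch: link process activities to the data entities they read and write, then flag any data-model element no activity ever touches. The entity and activity names are hypothetical, and this is only one simple anomaly class, not the paper's full DP-graph or DPGT method.

```python
# Hypothetical data model (entities) and process model (activities with
# their read/write sets) linked in DP-graph style.
data_entities = {"Order", "Invoice", "AuditLog"}
activities = {
    "CreateOrder": {"writes": {"Order"}, "reads": set()},
    "Bill": {"writes": {"Invoice"}, "reads": {"Order"}},
}

def untouched_entities(entities, acts):
    """Return data entities never read or written by any activity —
    a data-model anomaly that is invisible when the data model is
    checked in isolation from the process model."""
    touched = set()
    for spec in acts.values():
        touched |= spec["reads"] | spec["writes"]
    return entities - touched

print(sorted(untouched_entities(data_entities, activities)))
```

Here "AuditLog" is flagged: it is well-formed as a data-model element, so a data-model-only check would pass it, yet no process activity ever uses it.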
Zhang J.,Tongji University |
Ni L.,Tongji University |
Ni L.,Shandong University of Science and Technology |
Xie C.,Tongji University |
And 3 more authors.
IEICE Transactions on Information and Systems | Year: 2011
This paper presents an adaptive magnification transformation based particle swarm optimizer (AMT-PSO) that provides an adaptive search strategy for each particle along the search process. Magnification transformation is a simple but very powerful mechanism, inspired by using a convex lens to see things more clearly. The essence of this transformation is to set a magnifier around an area of interest, so that the area can be inspected more carefully and precisely. An evolutionary factor, which utilizes information about the population distribution in the particle swarm, is used as an index to adaptively tune the magnification scale factor for each particle in each dimension. Furthermore, a perturbation-based elitist learning strategy is utilized to help the swarm's best particle escape local optima and explore potentially better regions. The AMT-PSO is evaluated on 15 unimodal and multimodal benchmark functions, and the effects of the adaptive magnification transformation mechanism and the elitist learning strategy are studied. Results show that the adaptive magnification transformation mechanism makes the main contribution to AMT-PSO's convergence speed and solution accuracy on four categories of benchmark test functions. © 2011 The Institute of Electronics, Information and Communication Engineers.
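The magnification idea can be sketched as scaling each particle's sampling neighborhood by a factor tied to how spread out the swarm is. Everything below — the spread measure standing in for the paper's evolutionary factor, the scale mapping, and the update rule — is a simplified illustration in one dimension, not the actual AMT-PSO formulas.

```python
import random

random.seed(1)

def evolutionary_factor(positions):
    """Normalized swarm spread in one dimension (0 = fully collapsed).
    The divisor 10.0 is a hypothetical search-range width."""
    lo, hi = min(positions), max(positions)
    return (hi - lo) / 10.0

def magnified_step(x, best, positions):
    # Small spread -> strong magnification -> fine steps near the best;
    # large spread -> weaker magnification -> coarser exploration.
    f = evolutionary_factor(positions)
    scale = 0.1 + 0.9 * f
    return x + scale * random.uniform(-1, 1) * (best - x)

swarm = [0.0, 2.0, 4.0]
best = 4.0
new_swarm = [magnified_step(x, best, swarm) for x in swarm]
print(new_swarm)
```

Because the scale factor bounds the move toward the best position, every new position stays within a shrinking "magnified" window around each particle as the swarm converges.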
Luo Y.,Key Laboratory of Machine Perception |
Liu T.,Intelligent Systems Technology, Inc. |
Tao D.,Intelligent Systems Technology, Inc. |
Xu C.,Key Laboratory of Machine Perception
IEEE Transactions on Image Processing | Year: 2014
Distance metric learning (DML) is a critical factor in image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting, exploiting the large quantity of side information from certain related, but different, source tasks to help target metric learning (with only a little side information). State-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source and target tasks are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of base metrics, which are computed from the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric, forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method over existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric, so far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method. © 1992-2012 IEEE.
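The variable-count advantage can be sketched numerically: represent the target metric as a weighted combination of fixed base metrics and learn only the combination coefficients. The two 2×2 base metrics, the target stand-in for the "integration" of source metrics, and the plain unregularized gradient descent below are all toy simplifications (DTDML itself learns a sparse combination with side-information constraints).

```python
# Hypothetical base metrics, e.g. derived from two source tasks.
bases = [
    [[2.0, 0.0], [0.0, 1.0]],   # base metric B1
    [[1.0, 0.0], [0.0, 3.0]],   # base metric B2
]
# Stand-in for the integration of the source metrics.
target = [[1.5, 0.0], [0.0, 2.0]]

def combine(beta):
    """Target metric as the beta-weighted sum of the base metrics."""
    return [[sum(b * B[i][j] for b, B in zip(beta, bases))
             for j in range(2)] for i in range(2)]

def loss(beta):
    M = combine(beta)
    return sum((M[i][j] - target[i][j]) ** 2
               for i in range(2) for j in range(2))

# Only two coefficients are learned, instead of all entries of the metric.
beta, lr = [0.0, 0.0], 0.05
for _ in range(500):
    M = combine(beta)
    grad = [sum(2 * (M[i][j] - target[i][j]) * B[i][j]
                for i in range(2) for j in range(2)) for B in bases]
    beta = [b - lr * g for b, g in zip(beta, grad)]

print([round(b, 3) for b in beta])
```

With d-dimensional data and k bases, this replaces the d(d+1)/2 free entries of a full metric with k coefficients, which is why limited side information suffices and the optimization is fast.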