Wang F.-Y., CAS Institute of Automation
IEEE Transactions on Intelligent Transportation Systems | Year: 2010

Parallel control and management have been proposed as a new mechanism for conducting operations of complex systems, especially those that involve complexity issues of both engineering and social dimensions, such as transportation systems. This paper presents an overview of the background, concepts, basic methods, major issues, and current applications of Parallel Transportation Management Systems (PtMS). In essence, parallel control and management is a data-driven approach to modeling, analysis, and decision-making that considers both engineering and social complexity in its processes. The developments and applications described here clearly indicate that PtMS is effective for networked complex traffic systems and is closely related to emerging technologies in cloud computing, social computing, and cyber-physical-social systems. A description of PtMS system architectures, processes, and components, including OTSt, DynaCAS, aDAPTS, iTOP, and TransWorld, is presented and discussed. Finally, experiments and examples of real-world applications are illustrated and analyzed. © 2006 IEEE.
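
The parallel-execution idea behind this abstract (keep an artificial, data-driven model of the traffic system running alongside the real one, use it for computational experiments, and feed the best decision back) can be illustrated with a small toy loop. The sketch below is an illustrative simplification, not part of PtMS or any of its components; the class, functions, and delay model are hypothetical placeholders.

```python
# Toy sketch of a parallel control/management loop: an artificial (simulated)
# traffic model is kept in sync with observed data, candidate signal plans are
# evaluated on it (computational experiments), and the best plan is "applied"
# to the real system. All names and the delay model are hypothetical.
import random

class ArtificialTrafficSystem:
    """Toy artificial system: simulated delay is a simple function of the
    green-time split; calibrate() nudges the model toward observations."""
    def __init__(self, demand=1.0):
        self.demand = demand

    def calibrate(self, observed_demand):
        # keep the artificial system consistent with fresh field data
        self.demand = 0.8 * self.demand + 0.2 * observed_demand

    def run(self, green_split, horizon=300):
        # computational experiment: simulated total delay for one plan
        return self.demand * horizon * (green_split - 0.6) ** 2

def parallel_management_step(sim, candidate_splits):
    # pick the plan with the lowest simulated delay
    return min(candidate_splits, key=sim.run)

if __name__ == "__main__":
    sim = ArtificialTrafficSystem()
    for step in range(5):
        observed = 1.0 + random.uniform(-0.1, 0.1)   # stand-in for loop-detector data
        sim.calibrate(observed)
        plan = parallel_management_step(sim, [0.4, 0.5, 0.6, 0.7])
        print(f"step {step}: apply green split {plan:.1f} to the real system")
```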


Wang Z., CAS Institute of Automation
Optics Express | Year: 2015

The calibration of computer vision systems that contain a camera and a projector usually relies on markers in well-designed patterns to compute the system parameters. Undesirably, noise and radial distortion are universally present, which degrades the calibration accuracy and consequently the measurement accuracy of the related techniques. In this paper, a method is proposed to remove the noise and radial distortion by registering the captured pattern with an ideal pattern. After the optimal modeled pattern is obtained by registration, the degrees of freedom of the calibration markers as a whole are reduced to one, and both the noise and the radial distortion are removed successfully. The accuracy improvement in a structured-light scanning system is over one order of magnitude in the sense of mean square errors. Most importantly, the proposed method can be readily adopted by computer vision techniques that use projectors or cameras. © 2015 Optical Society of America.
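
As a rough illustration of the registration idea (fit a small set of parameters that map an ideal planar pattern onto the detected markers, then replace the noisy, distorted detections with the re-projected ideal points), here is a minimal sketch. It assumes a homography plus a single radial-distortion coefficient and an approximate alignment between the two coordinate frames; it is not the paper's exact model, and the function names are hypothetical.

```python
# Minimal sketch: register an ideal marker grid to noisy, distorted detections
# with a homography and one radial-distortion term, then use the re-projected
# ideal points as "idealized" markers. Not the paper's exact formulation.
import numpy as np
from scipy.optimize import least_squares

def project(params, ideal_xy):
    """Map ideal grid points through a homography, then apply radial distortion."""
    h = np.append(params[:8], 1.0).reshape(3, 3)    # homography (h33 fixed to 1)
    k1 = params[8]                                   # first radial-distortion coefficient
    cx, cy = params[9], params[10]                   # distortion center
    pts = np.c_[ideal_xy, np.ones(len(ideal_xy))] @ h.T
    pts = pts[:, :2] / pts[:, 2:3]
    r2 = (pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2
    factor = 1.0 + k1 * r2
    return np.c_[cx + (pts[:, 0] - cx) * factor,
                 cy + (pts[:, 1] - cy) * factor]

def register(ideal_xy, detected_xy):
    """Fit the registration parameters and return denoised marker positions.
    Assumes the ideal grid is expressed roughly in the detected frame, so an
    identity homography is an acceptable starting point."""
    x0 = np.r_[np.eye(3).ravel()[:8], 0.0, detected_xy.mean(axis=0)]
    res = least_squares(lambda p: (project(p, ideal_xy) - detected_xy).ravel(), x0)
    return project(res.x, ideal_xy)   # distortion-consistent, noise-free markers
```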


Wang F.-Y., CAS Institute of Automation
IEEE Intelligent Systems | Year: 2010

Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of Cyber-Physical-Social Systems (CPSS), raising compelling technological, economic, and social implications. This inaugural department begins by addressing the philosophical and scientific foundation of CPSS. Any study of CPSS must be conducted with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment. © 2010 IEEE.


Shen S., CAS Institute of Automation
IEEE Transactions on Image Processing | Year: 2013

In this paper, we propose a depth-map merging based multiple-view stereo method for large-scale scenes that takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate a depth map for each image with acceptable errors, followed by a depth-map refinement process that enforces consistency over neighboring views. Compared with state-of-the-art methods, the proposed method reconstructs quite accurate and dense point clouds with high computational efficiency. Moreover, the proposed method can easily be parallelized at the image level, i.e., each depth map is computed individually, which makes it suitable for large-scale scene reconstruction with high-resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets. © 1992-2012 IEEE.
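
The cross-view consistency idea behind such a refinement step (a per-pixel depth is trusted only if the 3D point it defines re-projects into a neighboring view with an agreeing depth) can be sketched as follows. This is a simplified check written for illustration, not the paper's algorithm; the intrinsics K, the relative pose (R, t), and the tolerance are assumed inputs.

```python
# Sketch of a two-view depth-map consistency filter: keep a reference depth
# only if its 3D point re-projects into the neighbor view with a depth that
# agrees within a relative tolerance. Illustrative only.
import numpy as np

def backproject(u, v, depth, K):
    """Pixel (u, v) with depth -> 3D point in the camera frame."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def consistent(u, v, depth_ref, K_ref, K_nb, R, t, depth_nb, tol=0.01):
    """R, t map reference-camera coordinates into the neighbor camera frame."""
    X_nb = R @ backproject(u, v, depth_ref, K_ref) + t
    if X_nb[2] <= 0:
        return False                                  # behind the neighbor camera
    proj = K_nb @ X_nb
    un, vn = int(round(proj[0] / proj[2])), int(round(proj[1] / proj[2]))
    h, w = depth_nb.shape
    if not (0 <= vn < h and 0 <= un < w):
        return False                                  # outside the neighbor image
    d_nb = depth_nb[vn, un]
    return d_nb > 0 and abs(d_nb - X_nb[2]) / X_nb[2] < tol

def filter_depth_map(depth_ref, K_ref, K_nb, R, t, depth_nb, tol=0.01):
    """Zero out reference depths that fail the cross-view consistency test."""
    out = np.zeros_like(depth_ref)
    h, w = depth_ref.shape
    for v in range(h):
        for u in range(w):
            d = depth_ref[v, u]
            if d > 0 and consistent(u, v, d, K_ref, K_nb, R, t, depth_nb, tol):
                out[v, u] = d
    return out
```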


Zhang Z., CAS Institute of Automation | Tao D., Intelligent Systems Technology, Inc.
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012

Slow Feature Analysis (SFA) extracts slowly varying features from a quickly varying input signal [1]. It has been successfully applied to modeling the visual receptive fields of cortical neurons. Sufficient experimental results in neuroscience suggest that the temporal slowness principle is a general learning principle in visual perception. In this paper, we introduce the SFA framework to the problem of human action recognition by incorporating discriminative information into SFA learning and considering the spatial relationship of body parts. In particular, we consider four kinds of SFA learning strategies, including the original unsupervised SFA (U-SFA), the supervised SFA (S-SFA), the discriminative SFA (D-SFA), and the spatial discriminative SFA (SD-SFA), to extract slow feature functions from a large number of training cuboids obtained by random sampling within motion boundaries. Afterward, to represent action sequences, the squared first-order temporal derivatives are accumulated over all transformed cuboids into one feature vector, which is termed the Accumulated Squared Derivative (ASD) feature. The ASD feature encodes the statistical distribution of slow features in an action sequence. Finally, a linear support vector machine (SVM) is trained to classify actions represented by ASD features. We conduct extensive experiments, including two sets of control experiments, two sets of large-scale experiments on the KTH and Weizmann databases, and two sets of experiments on the CASIA and UT-interaction databases, to demonstrate the effectiveness of SFA for human action recognition. Experimental results suggest that the SFA-based approach 1) is able to extract useful motion patterns and improves the recognition performance, 2) requires fewer intermediate processing steps but achieves comparable or even better performance, and 3) has good potential to recognize complex multiperson activities. © 2012 IEEE.
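
To make the pipeline concrete, here is a minimal sketch of linear, unsupervised SFA (U-SFA) and the ASD descriptor as described in the abstract: the slow-feature functions come from a generalized eigenvalue problem on the covariances of the centered signal and its temporal differences, and the descriptor accumulates squared first-order derivatives of the slow-feature responses over all cuboids of a sequence. This is a simplification (no nonlinear expansion, no discriminative or spatial terms), not the authors' exact implementation.

```python
# Sketch of linear unsupervised SFA plus the Accumulated Squared Derivative
# (ASD) descriptor. Assumes each cuboid is already encoded as a (T, d) time
# series of descriptors; the signal covariance is assumed well-conditioned.
import numpy as np
from scipy.linalg import eigh

def sfa_fit(X, n_components=10):
    """X: (T, d) training time series. Returns (mean, W) where the columns of
    W holding the slowest directions come first."""
    mean = X.mean(axis=0)
    Xc = X - mean
    Xdot = np.diff(Xc, axis=0)             # first-order temporal differences
    C = np.cov(Xc, rowvar=False)           # signal covariance
    Cdot = np.cov(Xdot, rowvar=False)      # derivative covariance
    # Minimize derivative variance subject to unit signal variance:
    # the smallest generalized eigenvalues of (Cdot, C) give the slowest features.
    _, W = eigh(Cdot, C)
    return mean, W[:, :n_components]

def asd_feature(cuboids, mean, W):
    """Accumulated Squared Derivative over all cuboids of one action sequence."""
    acc = np.zeros(W.shape[1])
    for X in cuboids:                      # each X: (T, d) cuboid time series
        S = (X - mean) @ W                 # slow-feature responses
        acc += (np.diff(S, axis=0) ** 2).sum(axis=0)
    return acc / max(len(cuboids), 1)
```

A linear SVM (for example, sklearn.svm.LinearSVC) would then be trained on the resulting ASD vectors, one per action sequence, to perform the final classification.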
