Zhou J.,National University of Defense Technology | Zhang X.-H.,Hunan Key Laboratory of Videometrics and Vision Navigation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, the known features are tracked to calculate the pose in the current and the next image. Second, unknown but good-to-track features are automatically detected in the current and the next image. Third, the unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D positions of those unknown features can be solved from the object's pose at the two moments and their 2D positions in the two images, except in two cases: first, when the camera and object have no relative motion and the camera parameters (focal length, principal point, etc.) do not change between the two moments; second, when there is no shared scene or no matched feature between the two images. Finally, because the previously unknown features are now known, pose estimation can continue in the following images despite the loss of the originally known features, by repeating the process above. The robustness of pose estimation with different feature-detection algorithms, namely Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU).
Compared with other pose estimation methods, this new method can estimate the pose between camera and object when part, or even all, of the known features are lost, and it has a quick response time thanks to GPU parallel computing. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generality, and it can also play an important role in autonomous navigation, positioning, and robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show that the method is reasonable and efficient. © 2011 SPIE.
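The core geometric step above, recovering the 3D position of an unknown feature from its matched 2D observations and the poses at the two moments, can be sketched with standard linear (DLT) triangulation. This is a generic illustration, not the authors' implementation; the camera matrices and points below are hypothetical.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices at the two moments
             (intrinsics times the pose recovered from the known features).
    x1, x2 : matched 2D image points of the same unknown feature
             in the two images.
    Returns the 3D point in the world frame.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical example: identity intrinsics, second camera shifted along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]

X_est = triangulate_point(P1, P2, x1, x2)
```

With noise-free matches the estimate reproduces the true point; in practice the matched features carry pixel noise, which is why robust detectors (KLT, SIFT, SURF) and many redundant features matter.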


Zhang H.,National University of Defense Technology | Zhang H.,Hunan Key Laboratory of Videometrics and Vision Navigation | Guo P.,National University of Defense Technology | Guo P.,Hunan Key Laboratory of Videometrics and Vision Navigation
International Conference on Signal Processing Proceedings, ICSP | Year: 2015

Attitude error compensation is difficult for a marine strapdown inertial navigation system (SINS) because of the marine vehicle's motion and rocking. This paper proposes a method that compensates the marine SINS attitude error based on vision measurement. The relation between the vision-derived level attitude and the sea-sky-line is deduced from the geometry of the vision system. The sea-sky-line is detected in images to measure the level attitude of the marine vision system, and by tracking a distant reference object in the images the change in vision azimuth can be calculated. Using the vision level-attitude and azimuth-change observations, an extended Kalman filter (EKF) is designed to estimate and compensate the SINS attitude error. Simulations of error compensation for a rocking SINS were performed; the results show that the proposed method works well and that, after compensation, the SINS attitude reaches high precision. The method gives a feasible way to compensate marine SINS attitude error during alignment or navigation. © 2014 IEEE.
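The filtering idea can be illustrated, in heavily simplified form, with a scalar linear Kalman filter that estimates a constant attitude-error bias from noisy vision observations. The paper's actual filter is an EKF over the full SINS error state; all values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the SINS attitude carries a constant bias; the vision
# system (sea-sky-line level attitude) observes the truth with noise, so the
# measurement z = SINS attitude - vision attitude observes the bias directly.
true_bias = 0.5          # degrees, attitude error to estimate
n_steps = 200
vision_noise = 0.1       # degrees, measurement noise std

# State: the bias itself (assumed constant), so F = 1 and H = 1.
x_hat, P = 0.0, 1.0      # initial estimate and covariance
Q, R = 1e-6, vision_noise**2

for _ in range(n_steps):
    # Predict: constant-bias model, only process noise inflates P.
    P += Q
    # Measure: noisy observation of the bias.
    z = true_bias + rng.normal(0.0, vision_noise)
    # Update: standard Kalman gain and correction.
    K = P / (P + R)
    x_hat += K * (z - x_hat)
    P *= (1.0 - K)
```

After a few hundred observations the estimate converges to the bias, which is the essence of using vision observations to pull the drifting SINS attitude back to high precision.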


Liu J.-B.,National University of Defense Technology | Liu J.-B.,Hunan Key Laboratory of Videometrics and Vision Navigation | Zhang X.-H.,National University of Defense Technology | Zhang X.-H.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 2 more authors.
Guangzi Xuebao/Acta Photonica Sinica | Year: 2016

Based on the relationship between points at infinity and the camera intrinsic parameters, a calibration method for camera intrinsic parameters and image distortion is proposed that decouples the calibration of intrinsic parameters, image distortion, and extrinsic parameters. First, the camera intrinsic parameters are solved from the projective equations of the image of the absolute conic. Second, each reprojected point is taken as an ideal image point and the image distortion coefficients are solved. Third, the homographies among the sequence images are taken as the parameters to be optimized, with the reprojection error as the cost function; the final outputs are the camera intrinsic parameters and the image distortion coefficients. Under identical configurations, the calibration precision of the proposed method and of Zhang's method were simulated and compared. Using 27 views of a planar object with different poses, the camera intrinsic parameters were calibrated with both the proposed method and Zhang's method. The results show that the calibration precision of the proposed method is improved by at least 1% compared with Zhang's method; in the experiment, the reprojection errors of Zhang's method and the proposed method are 0.064 pixel and 0.008 pixel, respectively. © 2016, Science Press. All rights reserved.
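The two quantities the optimization works with, the distortion coefficients and the reprojection-error cost, can be sketched with the common two-coefficient radial distortion model. This is a generic illustration; the intrinsics and coefficients below are hypothetical, not values from the paper.

```python
import numpy as np

def distort(xn, k1, k2):
    """Apply two-coefficient radial distortion to normalized
    image coordinates xn (N x 2)."""
    r2 = np.sum(xn**2, axis=1, keepdims=True)
    return xn * (1.0 + k1 * r2 + k2 * r2**2)

def reprojection_rmse(observed, reprojected):
    """RMS reprojection error in pixels: the cost function minimized
    in the final optimization stage."""
    return np.sqrt(np.mean(np.sum((observed - reprojected)**2, axis=1)))

# Hypothetical intrinsics and ideal (undistorted) normalized points.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
ideal_n = np.array([[0.10, 0.05], [-0.20, 0.10], [0.15, -0.12]])

def to_pixels(xn):
    """Map normalized coordinates through the intrinsics K."""
    return (K @ np.hstack([xn, np.ones((len(xn), 1))]).T).T[:, :2]

# "Observed" points carry radial distortion; the ideal reprojections do not.
observed = to_pixels(distort(ideal_n, -0.2, 0.05))
reproj = to_pixels(ideal_n)

err = reprojection_rmse(observed, reproj)
```

If the distortion coefficients were estimated correctly and applied to the reprojections, the residual would shrink toward zero, which is the sense in which the reported 0.008-pixel error is small.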


Zhang X.,National University of Defense Technology | Zhang X.,Hunan Key Laboratory of Videometrics and Vision Navigation | Zhang Z.,Nanyang Technological University | Li Y.,National University of Defense Technology | And 6 more authors.
Applied Optics | Year: 2012

We address the model-to-image registration problem with line features in the following two ways. (a) We present a robust solution that simultaneously recovers the camera pose and the three-dimensional-to-two-dimensional line correspondences. With weak pose priors, our approach progressively verifies the pose guesses with a Kalman filter using a subset of recursively found match hypotheses. Experiments show our method is robust to occlusions and clutter. (b) We propose a new line-feature-based pose estimation algorithm, which iteratively optimizes the objective function in the object space. Experiments show that the algorithm is strongly robust to noise and outliers and that it attains very accurate results efficiently. © 2012 Optical Society of America.
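A common object-space formulation for line features, which the cost in (b) resembles, measures how far the 3D model-line endpoints lie from each image line's interpretation plane (the plane through the camera center and the image line). The following is a generic sketch under that assumption, not the paper's exact objective.

```python
import numpy as np

def line_objectspace_cost(R, t, model_lines, image_normals):
    """Object-space cost for line-based pose estimation.

    Each image line, back-projected through the camera center, defines an
    interpretation plane with unit normal n in camera coordinates.  At the
    correct pose both 3D endpoints of the matching model line lie on that
    plane, so the cost sums squared point-to-plane distances.

    model_lines   : list of (p0, p1) 3D endpoints in the object frame.
    image_normals : list of unit normals n of the interpretation planes.
    """
    cost = 0.0
    for (p0, p1), n in zip(model_lines, image_normals):
        for p in (p0, p1):
            cost += float(n @ (R @ p + t)) ** 2
    return cost

# Hypothetical check: at the true pose the cost vanishes.
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
# The interpretation-plane normal is the cross product of the two
# camera-frame endpoint rays.
c0, c1 = R @ p0 + t, R @ p1 + t
n = np.cross(c0, c1)
n /= np.linalg.norm(n)

cost_true = line_objectspace_cost(R, t, [(p0, p1)], [n])
cost_off = line_objectspace_cost(R, t + np.array([0.0, 0.1, 0.0]),
                                 [(p0, p1)], [n])
```

An iterative optimizer decreases this cost over pose updates; its key property is that the error is measured in 3D object space rather than in the image.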


Xia Y.,National University of Defense Technology | Xia Y.,Hunan Key Laboratory of Videometrics and Vision Navigation | Xiangyi S.,National University of Defense Technology | Xiangyi S.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 3 more authors.
Optics Communications | Year: 2013

One-step phase shifting for electronic speckle pattern interferometry (ESPI) is an effective way to analyze dynamic phenomena. However, all present methods need a prior process to obtain the reference phase field, either by the traditional phase shifting method or by an image-sequence analysis method. This paper presents a direct phase-estimation method that uses only two interferograms, with no need for any prior information. By reasonably assuming that the background, the modulation, and the phase changes are constant in local areas, the whole phase-change field, wrapped modulo π and with a sign ambiguity, can be estimated. Then, by analyzing the phase-change values, the whole sign field can be determined. With the help of the sign field, the wrapping of the phase changes can be removed by phase unwrapping methods. The effectiveness of this method has been proved by real experimental tests. © 2012 Elsevier B.V. All rights reserved.
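The final unwrapping step can be illustrated in one dimension with NumPy's standard 2π unwrapping. Note that the paper's estimated field is wrapped modulo π with a sign ambiguity, which this generic sketch does not reproduce; it only shows the unwrapping concept once the ambiguity is resolved.

```python
import numpy as np

# Hypothetical 1-D example: a smooth phase-change ramp is observed only
# in wrapped form; np.unwrap restores the continuous field.
true_phase = np.linspace(0.0, 6 * np.pi, 400)   # continuous phase change
wrapped = np.angle(np.exp(1j * true_phase))     # wrapped into (-pi, pi]
recovered = np.unwrap(wrapped)                  # standard 2*pi unwrapping
```

Unwrapping succeeds here because adjacent samples differ by much less than π, the same locality assumption (slowly varying phase in local areas) that the paper relies on.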
