Hunan Key Laboratory of Videometrics and Vision Navigation

Changsha, China


Sun C.,National University of Defense Technology | Sun C.,Hunan Key Laboratory of Videometrics and Vision Navigation | Liu H.-B.,National University of Defense Technology | Liu H.-B.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 4 more authors.
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2017

Star image simulation is widely used on the ground to test star identification algorithms and the performance of star sensors. This paper puts forward a novel INS-data-based approach to ship-borne star map simulation. The simulation procedure consists of three steps. Firstly, the exact velocity and position of the ship in the Conventional Inertial System (CIS) are calculated from the INS data, and the ship attitude matrix is then obtained. Secondly, taking into account the azimuth and elevation angles of the star sensor, the accurate positions of the selected guide stars on the image plane of the star sensor are derived by constructing a pinhole model. Thirdly, the gray values of the simulated star image pixels are evaluated according to the 2D Gaussian distribution law. To simulate the star image precisely and realistically, image smear is also modeled. Based on the proposed star image simulation approach, the effect of image smear on star sensor recognition capability is analyzed at different exposure times. © 2017 SPIE.
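The third step can be sketched as spreading each star's flux over neighboring pixels with a 2D Gaussian point spread function. This is a minimal illustration; the function name, window radius and sigma below are assumptions, not the paper's values:

```python
import numpy as np

def render_star(image, cx, cy, flux, sigma=0.8, radius=3):
    """Spread a star's flux over nearby pixels with a 2D Gaussian PSF."""
    h, w = image.shape
    x0, x1 = max(0, int(cx) - radius), min(w, int(cx) + radius + 1)
    y0, y1 = max(0, int(cy) - radius), min(h, int(cy) + radius + 1)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    # normalize so the deposited gray values sum to approximately `flux`
    image[y0:y1, x0:x1] += flux * g / (2.0 * np.pi * sigma ** 2)
    return image

img = np.zeros((64, 64))
render_star(img, 31.5, 31.5, 1000.0)  # sub-pixel star centroid
```

Image smear could then be simulated by summing such spots along the centroid's trajectory over the exposure time.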


Xu Z.,National University of Defense Technology | Xu Z.,Hunan Key Laboratory of Videometrics and Vision Navigation | Shang Y.,National University of Defense Technology | Shang Y.,Hunan Key Laboratory of Videometrics and Vision Navigation | Ma X.,PLA Navy Submarine Academy
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2015

A satellite-rocket docking ring recognition method based on mathematical morphology is presented in this paper, exploiting the geometric and grayscale characteristics of the docking ring's typical structure. The docking ring considered here is a circle with a cross in the middle. Most spacecraft are transported into orbit by rockets and retain the component that connected them to the rocket. A tracing spacecraft must capture the target spacecraft before operating on it; since the docking ring is one of the typical parts of a spacecraft and can be recognized automatically, the spacecraft can be captured using information derived from the docking ring. Firstly, multi-step mathematical morphology processing with different structuring elements is applied to the image of the target spacecraft, followed by edge detection and line detection; finally, the typical docking ring structure is located in the image by relative geometry analysis. The images used in this paper were taken of a real satellite in the laboratory, and the docking ring can be recognized at various distances between the two spacecraft. The results of physical simulation experiments show that the method can accurately recognize the typical docking ring structure while the tracing spacecraft approaches the target spacecraft. © 2015 SPIE.
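The morphological pre-processing can be illustrated with a minimal binary opening (erosion followed by dilation), which suppresses bright specks smaller than the structuring element. This numpy-only sketch is generic and not the paper's exact multi-step pipeline:

```python
import numpy as np

def dilate(b, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(b, p, constant_values=False)
    out = np.zeros_like(b)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            out |= padded[p + dy : p + dy + b.shape[0],
                          p + dx : p + dx + b.shape[1]]
    return out

def erode(b, k=3):
    # erosion is dilation of the complement (image border treated as foreground)
    return ~dilate(~b, k)

def opening(b, k=3):
    """Erosion then dilation: removes bright specks smaller than the element."""
    return dilate(erode(b, k), k)

speckles = np.zeros((12, 12), dtype=bool)
speckles[1, 1] = True           # isolated noise pixel
speckles[4:9, 4:9] = True       # 5x5 candidate structure
cleaned = opening(speckles, 3)  # speck removed, block preserved
```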


Zhang X.,National University of Defense Technology | Zhang X.,Hunan Key Laboratory of Videometrics and Vision Navigation | Zhu Z.,National University of Defense Technology | Zhu Z.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 9 more authors.
Optics and Lasers in Engineering | Year: 2012

Typically, optical measurement systems either achieve high accuracy over a limited volume or cover a large volume with low accuracy. In this paper, we propose a universal way of integrating a camera with a theodolite to construct a theodolite-camera (TC) measurement system that can make measurements over a large volume with high accuracy. The TC inherits the advantages of high flexibility and precision from the theodolite and the camera, while avoiding the need for elaborate adjustments of either instrument. It provides a universal and flexible approach to the camera-on-theodolite system. We describe three types of TC, based separately on (i) a total station, (ii) a theodolite and (iii) a general rotation frame, and propose three corresponding calibration methods. Experiments have been conducted to verify the measuring accuracy of each of the three types of TC. © 2012 Elsevier Ltd.


Zhu Z.-K.,National University of Defense Technology | Zhu Z.-K.,Hunan Key Laboratory of Videometrics and Vision Navigation | Yuan Y.,National University of Defense Technology | Yuan Y.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 2 more authors.
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

A novel measuring system, the Theodolite-camera Videometrics System (TVS) based on a total station, is introduced in this paper, together with the concept of the theodolite-camera, the key component of TVS. A theodolite-camera generally consists of a non-metric camera and a rotation platform and can rotate both horizontally and vertically. TVS based on a total station is free of field control points, and the fields of view of its theodolite-cameras are not fixed, so TVS is well suited to targets with a wide moving range or a large structure. The theodolite-camera model is analyzed and presented in detail. The adopted calibration strategy is demonstrated to be accurate and feasible on both simulated and real data, and TVS is shown to be a valid, reliable and precise measuring system that lives up to expectations. © 2011 SPIE.


Zhang H.,National University of Defense Technology | Zhang H.,Hunan Key Laboratory of Videometrics and Vision Navigation | Guo P.,National University of Defense Technology | Guo P.,Hunan Key Laboratory of Videometrics and Vision Navigation
International Conference on Signal Processing Proceedings, ICSP | Year: 2014

Attitude error compensation is difficult for a marine strapdown inertial navigation system (SINS) because of the marine vehicle's motion and rocking. This paper proposes a method that compensates the marine SINS attitude error based on vision measurement. The relation between the vision-derived level attitude and the sea-sky line is deduced from the geometry of the vision system. The sea-sky line is detected in images to measure the level attitude of the marine vision system, and by tracking a distant reference object in the images the change in vision azimuth can be calculated. Using the vision level attitude and azimuth change as observations, an extended Kalman filter (EKF) is designed to estimate and compensate the SINS attitude error. Simulations of error compensation for a rocking SINS were performed; the results show that the proposed method works well and that, after compensation, the SINS attitude reaches high precision. The method gives a feasible way to compensate marine SINS attitude error during alignment or navigation. © 2014 IEEE.
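The fusion step can be sketched with a standard Kalman measurement update. The 3-state error model, matrices and observation values below are hypothetical simplifications of the paper's EKF, not its actual design:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: returns corrected state and covariance."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # state correction
    P = (np.eye(len(x)) - K @ H) @ P       # covariance correction
    return x, P

x = np.zeros(3)                  # [roll, pitch, yaw] error estimate (rad)
P = np.eye(3) * 1e-2             # initial uncertainty
H = np.eye(3)                    # vision observes the attitude error directly
R = np.eye(3) * 1e-4             # vision measurement noise
z = np.array([0.01, -0.02, 0.005])   # hypothetical vision-derived observation
x, P = kalman_update(x, P, z, H, R)
```

Because the vision noise R is much smaller than the prior P here, the corrected state moves close to the observation and the covariance shrinks accordingly.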


Shang Y.,National University of Defense Technology | Shang Y.,Hunan Key Laboratory of Videometrics and Vision Navigation | Liu J.,National University of Defense Technology | Liu J.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 3 more authors.
Optik | Year: 2014

A monocular method is presented for measuring the pose of a translation-only one-dimensional (1D) object containing at least two known characteristic points. After the pose measurement, the position of any other point on the 1D object can be located. No scene information is needed to apply the method. If the distance between the characteristic points is unknown, the measurement results differ from the object's real positions by a scale factor. The method can also be used to measure the pose and shape parameters of a parallelogram, prism or cylinder. © 2014 Published by Elsevier GmbH.
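The underlying geometry can be illustrated as follows: with two points separated by a known distance L along a known direction d, the two camera rays fix the points' depths through a small linear least-squares problem. This is an illustrative sketch under those assumptions, not the paper's full method, and all numeric values are hypothetical:

```python
import numpy as np

# Two known points on a translation-only 1D object, separated by distance L
# along a known direction d; their unit camera rays v1, v2 come from the image.
# Solve the depths s1, s2 from  s2*v2 - s1*v1 = L*d.
d = np.array([1.0, 0.0, 0.0])
L = 2.0
P1_true = np.array([0.3, 0.1, 5.0])          # hypothetical ground truth
P2_true = P1_true + L * d
v1 = P1_true / np.linalg.norm(P1_true)       # rays as they would be measured
v2 = P2_true / np.linalg.norm(P2_true)

A = np.column_stack([-v1, v2])               # 3x2 system in (s1, s2)
s = np.linalg.lstsq(A, L * d, rcond=None)[0]
P1_est, P2_est = s[0] * v1, s[1] * v2        # recovered 3D positions
```

With L unknown, setting L = 1 recovers the same geometry up to the scale factor mentioned in the abstract.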


Xia Y.,National University of Defense Technology | Xia Y.,Hunan Key Laboratory of Videometrics and Vision Navigation | Xiangyi S.,National University of Defense Technology | Xiangyi S.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 3 more authors.
Optics Communications | Year: 2013

One-step phase shifting is an effective way to analyze dynamic phenomena with electronic speckle pattern interferometry (ESPI). However, all existing methods require a prior process to obtain the reference phase field, either by the traditional phase shifting method or by image-sequence analysis. This paper presents a direct phase estimating method using only two interferograms, with no need for any prior information. By reasonably assuming that the background, the modulation and the phase change are constant in local areas, the whole phase change field, wrapped modulo π with a sign ambiguity, can be estimated. Then, by analyzing the phase change values, the whole sign field can be determined. With the help of the sign field, the modulo-π wrapping of the phase changes can be removed by phase unwrapping methods. The effectiveness of this method is demonstrated by real experimental tests. © 2012 Elsevier B.V. All rights reserved.
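Under the local-constancy assumption, with intensity I = a + b·cos(φ), the arccosine of the normalized intensities yields the phase change folded into [0, π] with a sign ambiguity. A synthetic sketch follows; here a and b are taken as known for simplicity, whereas the paper estimates them from local statistics:

```python
import numpy as np

# Hypothetical constants: background a, modulation b, true phase change delta.
a, b, delta = 100.0, 50.0, 0.3
phi1 = np.linspace(0.2, 2.5, 200)      # chosen so phi1 and phi1+delta stay in (0, pi)
I1 = a + b * np.cos(phi1)              # first interferogram
I2 = a + b * np.cos(phi1 + delta)      # second interferogram

c1 = np.clip((I1 - a) / b, -1.0, 1.0)  # normalize, guard against rounding
c2 = np.clip((I2 - a) / b, -1.0, 1.0)
# arccos folds each phase into [0, pi]; the difference carries a sign ambiguity
delta_est = np.abs(np.arccos(c2) - np.arccos(c1))
```

Resolving the sign field and unwrapping the modulo-π result, as the abstract describes, are the paper's contribution beyond this local estimate.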


Zhou J.,National University of Defense Technology | Zhang X.-H.,Hunan Key Laboratory of Videometrics and Vision Navigation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer from feature loss. This paper investigates pose estimation when some of the known features, or even all of them, become invisible. Firstly, known features are tracked to calculate the pose in the current and the next image. Secondly, some unknown but good features to track are automatically detected in the current and the next image. Thirdly, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features can be solved from the object's pose at the two moments and the features' 2D positions in the two images, except in two cases: first, when the camera and object have no relative motion and the camera parameters (such as focal length and principal point) do not change between the two moments; second, when there is no shared scene or no matched feature between the two images. Finally, because the formerly unknown features are now known, pose estimation can continue in the following images by repeating the process above, despite the loss of the original known features.
The robustness of pose estimation with different feature detection algorithms, namely Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU). Compared with other pose estimation methods, the new method can estimate the pose between camera and object even when part or all of the known features are lost, and it has a quick response time thanks to GPU parallel computing. The method can be widely used in vision-guidance techniques to strengthen their intelligence and generality, and it can also play an important role in autonomous navigation, positioning and robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly and meets real-time requirements; theoretical analysis and experiments show that the method is reasonable and efficient. © 2011 SPIE.
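The key step of recovering an unknown feature's 3D position from the object's pose at two moments and its 2D position in two images can be sketched with standard linear (DLT) triangulation. The projection matrices and point below are hypothetical, chosen only to make the sketch self-contained:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two projection matrices."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

K = np.eye(3)                     # normalized (unit-focal) camera for simplicity
# relative pose at the first moment: identity; at the second: unit baseline in x
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])            # hypothetical unknown feature
u1 = P1 @ np.append(X_true, 1.0); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X_true, 1.0); u2 = u2[:2] / u2[2]
X_est = triangulate(P1, P2, u1, u2)
```

The two degenerate cases in the abstract correspond exactly to when this system becomes rank-deficient: no relative motion (identical projection matrices) or no matched observation pair.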


Shang Y.,National University of Defense Technology | Shang Y.,Hunan Key Laboratory of Videometrics and Vision Navigation | Yu Q.,National University of Defense Technology | Yu Q.,Hunan Key Laboratory of Videometrics and Vision Navigation | And 6 more authors.
Optics and Lasers in Engineering | Year: 2014

A method for measuring the displacement and deformation of large structures with a series-parallel connection camera network is presented. Taking the dynamic monitoring of a large-scale crane during a lifting operation as an example, a series-parallel connection camera network is designed and the corresponding displacement and deformation measurement method is studied. The movement range of the crane body is small, while that of the crane arm is large; the displacement of the crane body, the displacement of the crane arm relative to the body, and the deformation of the arm are all measured. Compared with a purely series or purely parallel camera network, the designed series-parallel connection network can measure not only the movement and displacement of a large structure but also the relative movement and deformation of parts of interest, using a relatively simple optical measurement system. © 2013 Elsevier Ltd.


Zhang X.,National University of Defense Technology | Zhang X.,Hunan Key Laboratory of Videometrics and Vision Navigation | Zhang Z.,Nanyang Technological University | Li Y.,National University of Defense Technology | And 6 more authors.
Applied Optics | Year: 2012

We address the model-to-image registration problem with line features in the following two ways. (a) We present a robust solution that simultaneously recovers the camera pose and the three-dimensional-to-two-dimensional line correspondences. With weak pose priors, our approach progressively verifies the pose guesses with a Kalman filter by using a subset of recursively found match hypotheses. Experiments show that our method is robust to occlusions and clutter. (b) We propose a new line-feature-based pose estimation algorithm, which iteratively optimizes the objective function in the object space. Experiments show that the algorithm is strongly robust to noise and outliers and can attain very accurate results efficiently. © 2012 Optical Society of America.
