Yao L.,National University of Defense Technology | Yao L.,Hunan Provincial Key Laboratory of Image measurement and Vision Navigation | Shang Y.,National University of Defense Technology | Shang Y.,Hunan Provincial Key Laboratory of Image measurement and Vision Navigation | Fu D.,China Academy of Launch Vehicle Technology
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2016

When videogrammetry (optical measurement) is carried out outdoors or in harsh indoor environments, the results are inevitably affected by atmospheric turbulence, which degrades the measurement precision. The impact of air turbulence on optical measurement has long been neglected; most existing work concentrates on laser optics and optical communications. In engineering applications, the most commonly adopted remedy is noise filtering when the pixel wandering cannot be rejected, which brings little improvement under usual conditions. This paper presents the principle of how atmospheric turbulence influences optical measurement, and experimental data and applications are reported to reveal its impact. Combined with related research, some essential issues and prospects for atmospheric turbulence research are proposed. © 2016 SPIE.
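
As a minimal illustration of the noise-filtering remedy mentioned above (not the authors' method), the sketch below applies a temporal median filter to a tracked point's pixel coordinates to attenuate turbulence-induced wander; the window length and the synthetic data are assumptions.

```python
import numpy as np

def suppress_pixel_wander(track_xy, window=15):
    """Temporal median filter over a tracked point's pixel coordinates.

    track_xy : (N, 2) array of per-frame (x, y) measurements of one target.
    window   : odd filter length in frames (assumed value, tune per sequence).
    Returns a filtered (N, 2) array in which high-frequency wander induced
    by atmospheric turbulence is attenuated.
    """
    half = window // 2
    padded = np.pad(track_xy, ((half, half), (0, 0)), mode="edge")
    out = np.empty_like(track_xy, dtype=float)
    for i in range(len(track_xy)):
        out[i] = np.median(padded[i:i + window], axis=0)
    return out

# Example: a stationary point wandering around (320, 240) by a few pixels.
rng = np.random.default_rng(0)
noisy = np.array([320.0, 240.0]) + rng.normal(0, 1.5, size=(200, 2))
smoothed = suppress_pixel_wander(noisy, window=15)
print(noisy.std(axis=0), smoothed.std(axis=0))  # wander is strongly reduced
```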


Liu H.,National University of Defense Technology | Liu H.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Zhu Z.,Beijing Institute of Technology | Yao L.,National University of Defense Technology | And 9 more authors.
Optics and Lasers in Engineering | Year: 2016

3D metrology with a stereovision system requires epipolar rectification to be performed before dense stereo matching. In this study, we propose an epipolar rectification method for a stereovision system with two telecentric lens-based cameras. Given the orthographic projection matrices of each camera, the new projection matrices are computed by determining the new camera coordinate system in affine space and imposing some constraints on the intrinsic parameters. Then, the transformation that maps the old image planes onto the new image planes is derived. Experiments are performed to validate the performance of the proposed rectification method. The test results show that the perpendicular distance and 3D reconstruction deviation obtained from the rectified images are not significantly higher than the corresponding values obtained from the original images. Considering the roughness of the extracted corner points and the calibrated camera parameters, we can conclude that the proposed method provides sufficiently accurate rectification results. © 2016 Elsevier Ltd. All rights reserved.
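
The rectification step maps each original image onto the plane defined by its new projection matrix. As a hedged sketch (not the paper's exact construction), assume each telecentric camera follows the affine model x = M X + t with M a 2x3 matrix; if the new matrix keeps the old viewing direction (M_new = A M_old), the old-to-new image mapping is a 2D affine transform that can be recovered and applied with an image warp.

```python
import numpy as np

def affine_rectifying_transform(M_old, t_old, M_new, t_new):
    """2D affine map from the old to the new image plane of one camera.

    Assumes the affine (telecentric) camera model  x = M @ X + t, with
    M a 2x3 matrix and t a 2-vector, and that the new projection matrix
    shares the old viewing direction (M_new = A @ M_old for some 2x2 A),
    which is what rectification imposes.  Returns the 2x3 matrix [A | b]
    such that x_new = A @ x_old + b.
    """
    A = M_new @ np.linalg.pinv(M_old)   # 2x2 linear part
    b = t_new - A @ t_old               # 2-vector offset
    return np.hstack([A, b.reshape(2, 1)])

# Usage sketch (names are assumptions): with M/t from calibration and the new
# projection matrices constructed as in the paper,
#   T_L = affine_rectifying_transform(M_old_L, t_old_L, M_new_L, t_new_L)
#   rectified_L = cv2.warpAffine(img_L, T_L, (w, h))
```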


Zhang Y.,National University of Defense Technology | Zhang Y.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Li X.,National University of Defense Technology | Li X.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 4 more authors.
IET Computer Vision | Year: 2016

In this study, the authors propose a new solution to the problem of pose estimation from a set of matched 3D model lines and 2D image lines. Traditional line-based pose estimation methods that utilise the finite information of the observations are based on the assumption that the noises of the two endpoints of an image line segment are statistically independent. However, in this study, the authors prove that these two noises are negatively correlated when the image line segment is fitted by the least-squares technique from the noisy edge points. Moreover, the authors derive the noise model describing the probabilistic relationship between the 3D model lines and their finite image observations. Based on the proposed noise model, a maximum-likelihood approach is exploited to estimate the pose parameters. The authors carry out synthetic experiments to compare the proposed method with other pose optimisation methods in the literature. The experimental results show that the proposed method yields a clearly higher precision than the traditional methods. The authors also use real image sequences to demonstrate the performance of the proposed method. © The Institution of Engineering and Technology.
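
For orientation, the sketch below shows a generic nonlinear least-squares refinement of a pose from 3D/2D line correspondences. It is an assumption-laden illustration: a plain point-to-line distance is minimised, whereas the paper weights the residuals with the correlated endpoint-noise model it derives.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def line_residuals(pose, model_endpoints, image_lines, K):
    """Point-to-line reprojection residuals for 3D/2D line correspondences.

    pose            : 6-vector (rotation vector, translation).
    model_endpoints : (N, 2, 3) 3D endpoints of each model line.
    image_lines     : (N, 3) normalized image lines (a, b, c) with a^2+b^2=1,
                      fitted from noisy edge points.
    K               : 3x3 intrinsic matrix.
    NOTE: plain distance cost; the paper's ML cost additionally encodes the
    negative correlation between the two endpoint noises.
    """
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    res = []
    for (P1, P2), l in zip(model_endpoints, image_lines):
        for P in (P1, P2):
            p = K @ (R @ P + t)
            p = p / p[2]            # projected endpoint (u, v, 1)
            res.append(l @ p)       # signed distance to the image line
    return np.asarray(res)

# pose0 = np.zeros(6)  # rough initial pose, e.g. from a PnL solver
# sol = least_squares(line_residuals, pose0,
#                     args=(model_endpoints, image_lines, K))
```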


Zhou L.-M.,National University of Defense Technology | Zhou L.-M.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Zhang X.-H.,National University of Defense Technology | Zhang X.-H.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 2 more authors.
Zidonghua Xuebao/Acta Automatica Sinica | Year: 2015

Images of miniature convex objects typically exhibit small size, little or no texture, and transparency or specular reflection, so traditional reconstruction methods such as stereo-vision matching and active scanning with laser or structured light cannot be applied to such objects. In this paper, a reconstruction method based on multiple contours is proposed for miniature convex objects. Firstly, multi-view silhouette images are captured and accurate contours are extracted; secondly, a coarse mesh is generated by space carving with the multi-view contours; thirdly, fragmented polygons are merged according to angle and area constraints; finally, the accurate mesh is generated by space carving with the multi-view polygons of the coarse mesh. Reconstruction experiments on regular and convex objects (a syringe needle about 3 mm in diameter) show that the angular error is less than 0.7°, the time cost is less than 15 s, and the whole process requires no manual intervention. The method effectively solves the problem of reconstruction and vision measurement of convex objects. Copyright © 2015 Acta Automatica Sinica. All rights reserved.
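
A hedged sketch of the silhouette-based space-carving step is given below; it assumes calibrated 3x4 projection matrices and binary silhouette masks, and only produces the coarse occupancy (the polygon merging and the second carving pass of the paper are not reproduced).

```python
import numpy as np

def carve_voxels(grid_pts, proj_mats, silhouettes):
    """Keep voxels whose projections fall inside every silhouette.

    grid_pts    : (M, 3) voxel centers covering the object volume.
    proj_mats   : list of 3x4 camera projection matrices (calibrated).
    silhouettes : list of binary masks (H, W), one per view.
    Returns a boolean occupancy array of length M (the coarse visual hull);
    a mesh would then be extracted from it, e.g. with marching cubes.
    """
    occupied = np.ones(len(grid_pts), dtype=bool)
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    for P, sil in zip(proj_mats, silhouettes):
        uvw = homog @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        occupied &= hit   # carve away voxels outside any silhouette
    return occupied
```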


Zhang Y.,National University of Defense Technology | Zhang Y.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Zhou L.,National University of Defense Technology | Zhou L.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 4 more authors.
Journal of Sensors | Year: 2016

In order to allow general users to carry out vision tasks more flexibly and easily, this paper proposes a new solution to the problem of camera calibration from correspondences between model lines and their noisy image lines in multiple images. In the proposed method, common planar items at hand with standard size and structure are utilized as the calibration objects. The proposed method consists of a closed-form solution based on homography optimization, followed by a nonlinear refinement based on the maximum-likelihood approach. To automatically recover the camera parameters linearly, we present a robust homography optimization method based on the edge model by redesigning the classic 3D tracking approach. In the nonlinear refinement procedure, the uncertainty of the image line segment is encoded in the error model, taking the finite nature of the observations into account. By developing a new error model between the model line and the image line segment, the camera calibration problem is expressed in a probabilistic formulation. Simulation data are used to compare this method with the widely used planar-pattern-based method. Real image sequences are also utilized to demonstrate the effectiveness and flexibility of the proposed method. © 2016 Yueqiang Zhang et al.
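
For reference, the sketch below shows only the standard planar-target calibration pipeline (Zhang-style closed form plus refinement, as implemented in OpenCV) that serves as the baseline in the comparison; the paper's edge-model homography tracking and line-segment error model are not reproduced here.

```python
import numpy as np
import cv2

def calibrate_from_planar_views(model_pts_2d, image_pts_per_view, image_size):
    """Baseline planar calibration from several views of one planar object.

    model_pts_2d       : (N, 2) known metric coordinates on the planar item
                         (e.g. the corner layout of a standard-sized card).
    image_pts_per_view : list of (N, 2) detected image points, one per view.
    image_size         : (width, height) of the images.
    Returns the RMS reprojection error, intrinsic matrix, and distortion.
    """
    obj = np.hstack([model_pts_2d, np.zeros((len(model_pts_2d), 1))])
    obj = obj.astype(np.float32)                     # planar points with Z = 0
    object_points = [obj for _ in image_pts_per_view]
    image_points = [np.asarray(p, np.float32) for p in image_pts_per_view]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist
```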


Zhang Y.,National University of Defense Technology | Zhang Y.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Li X.,National University of Defense Technology | Li X.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 6 more authors.
Science China Technological Sciences | Year: 2015

In this paper, new solutions to the problem of pose estimation from correspondences between 3D model lines and 2D image lines are proposed. Traditional line-based pose estimation methods rely on the assumption that the noises (perpendicular to the line) of the two endpoints are statistically independent. However, these two noises are in fact negatively correlated when the image line segment is fitted using the least-squares technique. Therefore, we design a new error function expressed as the average integral of the distance between line segments. Three least-squares techniques that optimize the rotation and translation simultaneously are proposed, all of which exploit the new error function. In addition, the Lie group formalism is utilized to describe the pose parameters, so that the optimization problem can be solved by a simple iterative least-squares method. To enhance robustness against outliers in the matched data, an M-estimation method is developed to convert the pose optimization problem into an iteratively reweighted least-squares problem. The proposed methods are validated through experiments using both synthetic and real-world data. The experimental results show that the proposed methods yield a clearly higher precision than the traditional methods. © 2015 Science China Press and Springer-Verlag Berlin Heidelberg
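
A hedged sketch of the Lie-group parameterization combined with iteratively reweighted least squares is shown below. The residual and Jacobian functions are placeholders (in the paper they come from the average-integral line-segment distance), and the Huber weighting is an assumed robust kernel, not necessarily the one used by the authors.

```python
import numpy as np
from scipy.linalg import expm

def se3_exp(xi):
    """Exponential map from a 6-vector twist (omega, v) to a 4x4 SE(3) matrix."""
    wx, wy, wz, vx, vy, vz = xi
    twist = np.array([[0., -wz,  wy, vx],
                      [wz,  0., -wx, vy],
                      [-wy, wx,  0., vz],
                      [0.,  0.,  0., 0.]])
    return expm(twist)

def irls_pose(T0, residual_fn, jacobian_fn, iters=20, huber=1.0):
    """Iteratively reweighted least squares over SE(3).

    residual_fn(T) -> (m,) residuals; jacobian_fn(T) -> (m, 6) Jacobian
    w.r.t. the left-multiplied twist.  Both are problem specific and stand
    in for the paper's line-segment distance terms.
    """
    T = T0.copy()
    for _ in range(iters):
        r = residual_fn(T)
        J = jacobian_fn(T)
        # Huber weights: quadratic near zero, linear for large residuals.
        w = np.where(np.abs(r) <= huber, 1.0, huber / np.maximum(np.abs(r), 1e-12))
        W = np.diag(w)
        xi = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)  # weighted normal equations
        T = se3_exp(xi) @ T                              # left-multiplied update
    return T
```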


Zhang Y.,National University of Defense Technology | Zhang Y.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Zhou L.,National University of Defense Technology | Zhou L.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 6 more authors.
Pattern Recognition | Year: 2015

This paper proposes a homography estimation algorithm that uses a contour model to track and locate texture-less objects in highly cluttered environments. The proposed method consists of homography recognition with the correspondences between model and image lines unknown, followed by nonlinear homography optimization based on the maximum-likelihood criterion. In our approach, the initial estimate of the homography is obtained in a RANSAC-like framework, which contains two stages: hypothesis generation and verification. The first stage generates a number of homography hypotheses from the correspondences of quadrangle-like structures. Next, the homography hypotheses are quickly ranked according to a redefined distance function. In the refinement procedure, the model sample points are projected onto the image plane. After that, a 1D search along the normal direction is used to obtain the corresponding image point for each model sample point. Finally, the optimized homography is obtained by minimizing the errors between the sample points and their corresponding image points. Experiments show that the proposed method performs robust homography recognition of texture-less planar objects and maintains accurate and stable homography estimation in cluttered environments as well as under extreme slant angles. © 2015 Elsevier Ltd.
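
The refinement loop can be sketched as follows (assumptions: search directions are supplied per sample point, the re-estimation is a plain DLT fit via cv2.findHomography rather than the paper's maximum-likelihood cost, and the hypothesis generation/ranking stages are not shown).

```python
import numpy as np
import cv2

def refine_homography(H0, model_pts, search_dirs, edge_img,
                      search_px=10, iters=5):
    """Refine a homography by 1D search along contour normals.

    model_pts   : (N, 2) sample points on the model contour (N >= 4).
    search_dirs : (N, 2) unit search directions in the image (in practice the
                  contour normals mapped through the current H; assumed given).
    edge_img    : single-channel edge-strength image (e.g. gradient magnitude).
    Returns the refined 3x3 homography.
    """
    H = H0.copy()
    pts_h = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    for _ in range(iters):
        proj = pts_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]           # projected sample points
        matched = []
        for p, n in zip(proj, search_dirs):
            best, best_val = p, -1.0
            for s in range(-search_px, search_px + 1):
                q = np.round(p + s * n).astype(int)
                if 0 <= q[1] < edge_img.shape[0] and 0 <= q[0] < edge_img.shape[1]:
                    val = edge_img[q[1], q[0]]
                    if val > best_val:              # strongest edge response wins
                        best, best_val = p + s * n, val
            matched.append(best)
        H, _ = cv2.findHomography(model_pts.astype(np.float32),
                                  np.asarray(matched, np.float32))
    return H
```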


Zhang Y.,National University of Defense Technology | Zhang Y.,Hunan Provincial Key Laboratory of Image measurement and Vision Navigation | Zhou L.,National University of Defense Technology | Zhou L.,Hunan Provincial Key Laboratory of Image measurement and Vision Navigation | And 4 more authors.
Guangxue Xuebao/Acta Optica Sinica | Year: 2015

A method based on contour model tracking is proposed to estimate the homography of texture-less objects in cluttered scenes. The initial estimate of the transformation is obtained in a random sample consensus (RANSAC) framework, and the optimized homography is obtained by minimizing the normal distance. To compute the initial transformation quickly and robustly, three randomly sampled line segments that conform to certain geometric constraints are used to solve a hypothesized transformation, and the transformation with the minimum error is selected as the initial value of the homography. To overcome the problem that mismatches between model and image lines in a complicated background may cause the homography optimization to fail, multiple candidate image points are retained for each model sample point during point matching. In the weighting of sample points, both the properties of the sample point itself and its relation to neighboring points are used, which effectively enhances the robustness of the method. Experimental results show that the proposed method can obtain an optimized homography in complicated backgrounds. Compared with the traditional method, it effectively overcomes the influence of complex backgrounds and successfully optimizes the homography parameters of texture-less objects. © 2015, Chinese Optical Society. All rights reserved.
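
To illustrate the multi-candidate retention and weighting idea, the sketch below fuses several candidate matches of one model sample point by combining each candidate's edge strength with its consistency with the neighboring points' motion. The weighting formula and the sigma value are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def fuse_candidates(proj_pt, candidates, strengths, neighbor_disp, sigma=3.0):
    """Fuse multiple candidate matches retained for one model sample point.

    proj_pt       : (2,) projection of the model sample point under the
                    current homography.
    candidates    : (k, 2) candidate image points found along the normal.
    strengths     : (k,) edge strengths of those candidates.
    neighbor_disp : (2,) mean displacement of the neighboring sample points.
    sigma         : assumed scale of the neighbor-consistency weight.
    Each candidate is weighted by its own edge strength and by how well its
    displacement agrees with the neighbors; the weighted mean is the match.
    """
    disps = candidates - proj_pt
    agree = np.exp(-np.sum((disps - neighbor_disp) ** 2, axis=1)
                   / (2 * sigma ** 2))
    w = strengths * agree
    w = w / (w.sum() + 1e-12)
    return (w[:, None] * candidates).sum(axis=0)
```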


Zhang Y.,National University of Defense Technology | Zhang Y.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Su A.,National University of Defense Technology | Su A.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 6 more authors.
Guangxue Xuebao/Acta Optica Sinica | Year: 2015

To track and estimate the pose and position of a known rigid object efficiently in a complex environment, a method that couples a three-dimensional (3D) particle filter (PF) framework with M-estimation optimization in a closed loop is proposed. A novel similarity observation model is constructed based on a multilevel line representation; line correspondences between 3D model edges and two-dimensional (2D) image line segments are obtained easily from the PF tracking state. These correspondences are then provided to the M-estimation step to optimize the pose and position of the object. The optimized particles are fused back into the particle filter framework according to importance sampling theory. Moreover, to speed up the proposed method, line detection and the search space are restricted to a local region of interest (ROI) predicted by the PF. Experiments show that the proposed method can effectively track and accurately estimate the pose of freely moving objects in unconstrained environments. Comparisons on synthetic and real images demonstrate that the proposed method greatly outperforms state-of-the-art methods in accuracy and efficiency. © 2015, Chinese Optical Society. All rights reserved.
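
One closed-loop step of such a PF/M-estimation coupling might look like the sketch below. The observation, optimization, and motion callables are placeholders for the paper's components (line-based similarity model, M-estimation on line correspondences, and process model), and the injection fraction is an assumption.

```python
import numpy as np

def pf_with_m_estimation(particles, weights, observe, optimize, motion,
                         inject_frac=0.1):
    """One closed-loop PF / M-estimation tracking step (illustrative sketch).

    particles : (N, 6) pose particles (rotation vector + translation).
    weights   : (N,) importance weights from the previous step.
    observe   : pose -> positive similarity score (line-based observation model).
    optimize  : pose -> pose refined by M-estimation on line correspondences.
    motion    : (N, 6) -> (N, 6) process model (e.g. random-walk diffusion).
    """
    N = len(particles)
    particles = motion(particles)                          # predict
    weights = np.array([observe(p) for p in particles])    # measure
    weights = weights / weights.sum()
    best = particles[np.argmax(weights)]
    refined = optimize(best)                               # M-estimation step
    # Re-inject the optimized pose as a small fraction of the particle set;
    # its weight is re-evaluated, consistent with importance sampling.
    k = max(1, int(inject_frac * N))
    particles[:k] = refined
    weights[:k] = observe(refined)
    weights = weights / weights.sum()
    # Systematic resampling.
    positions = (np.arange(N) + np.random.rand()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx], np.full(N, 1.0 / N), refined
```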


Sun X.,National University of Defense Technology | Sun X.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | Long G.,National University of Defense Technology | Long G.,Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation | And 3 more authors.
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2015

This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation to detect the target accurately and enhance the target's intensity notably. In the noise suppression, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small-infrared-target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is hardly visible. © 2015 SPIE.
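
The last two stages can be sketched as follows (a hedged illustration: the dynamic-programming energy accumulation is assumed to have produced the accumulated frame and the target location, and the region size and Gaussian width are assumptions; border handling is omitted).

```python
import numpy as np

def gaussian_mask(h, w, sigma):
    """Centered 2D Gaussian mask of size (h, w)."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def enhance_and_fuse(image, accumulated, center, size=15, sigma=2.5):
    """Gaussian-mask noise suppression followed by intensity-weighted fusion.

    image       : original infrared frame (float, 0..1).
    accumulated : frame after energy accumulation of the target (same size).
    center      : (row, col) of the detected target, assumed away from borders.
    size, sigma : assumed target-region size and Gaussian width.
    """
    out = image.copy()
    r0, c0 = center[0] - size // 2, center[1] - size // 2
    mask = gaussian_mask(size, size, sigma)
    region = accumulated[r0:r0 + size, c0:c0 + size] * mask   # noise suppression
    w = region / (region.max() + 1e-12)                       # intensity as weight
    out[r0:r0 + size, c0:c0 + size] = (
        w * region + (1.0 - w) * image[r0:r0 + size, c0:c0 + size])
    return out
```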
