Automotive IT Platform Research Team

South Korea


Choi Y.W.,Automotive IT Platform Research Team | Choi J.W.,Korea Advanced Institute of Science and Technology | Lee S.G.,Yeungnam University
Journal of Institute of Control, Robotics and Systems | Year: 2015

This paper proposes a novel three-dimensional mapping algorithm for Omni-Directional Vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are bulky and slow to compute depth information for omnidirectional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omnidirectional image sensor, which acquires a surround view of the scene at once. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps. © ICROS 2015.
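The fusion-point idea can be sketched with a toy geometric model. The sketch below is an illustrative reconstruction, not the paper's actual formulation: it assumes an equidistant fisheye projection (pixel radius r = f·θ) and a hypothetical `fuse_point` helper, where the laser fixes the obstacle's ground-plane position and the outline pixel's radial distance fixes the obstacle height.

```python
import math

def fuse_point(laser_range_m, laser_bearing_rad, outline_radius_px,
               cam_height_m=0.5, focal_px=300.0):
    """Toy fusion of a 2D laser hit with the fisheye outline of the
    same obstacle (illustrative model, not the paper's equations).

    The laser gives the obstacle's ground-plane (x, y); the radial
    pixel distance of its outline from the image centre, under an
    assumed equidistant model r = f * theta, gives the angle of the
    camera ray, from which the obstacle-top height z is recovered.
    """
    # Ground-plane coordinates from the 2D laser scanner
    x = laser_range_m * math.cos(laser_bearing_rad)
    y = laser_range_m * math.sin(laser_bearing_rad)
    # Equidistant fisheye: pixel radius -> angle from the downward axis
    theta = outline_radius_px / focal_px
    # Camera looks straight down from cam_height_m; the horizontal
    # distance is known from the laser, so the ray angle fixes height
    z = cam_height_m - laser_range_m / math.tan(theta)
    return (x, y, z)
```

For example, an obstacle 1 m ahead whose outline appears at the pixel radius corresponding to the floor-level ray comes out with z ≈ 0.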


Choi Y.W.,Automotive IT Platform Research Team | Choi K.S.,Ministry of Education | Choi J.W.,Korea Advanced Institute of Science and Technology | Lee S.G.,Yeungnam University
Journal of Institute of Control, Robotics and Systems | Year: 2014

This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robots. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether captured by a camera with a reflective mirror or by stitching multiple camera images, is essential because it is difficult to extract information from the raw image. The core of the proposed algorithm can be summarized as follows: First, we capture instantaneous 360° panoramic images around the robot through fisheye lenses mounted facing downwards. Second, we extract motion vectors from the preprocessed image using Lucas-Kanade optical flow. Third, we estimate the robot's position and angle with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the ego-motion localization algorithm based on fisheye warped images by comparing the positions and angles obtained with the proposed algorithm against measurements from a global vision localization system. © ICROS 2014.
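The third step, RANSAC-based consensus over flow directions followed by a pose update, can be sketched as follows. This is a simplified stand-in under stated assumptions: the hypothetical `ransac_heading` runs consensus directly on flow-vector angles rather than on a full vanishing-point model, and `dead_reckon` integrates one translation step.

```python
import math
import random

def ransac_heading(flow_vectors, iters=200, tol=0.2, seed=0):
    """RANSAC over flow-vector directions: return the candidate angle
    supported by the most vectors (inliers within `tol` radians).
    A simplified stand-in for the paper's vanishing-point consensus."""
    rng = random.Random(seed)
    angles = [math.atan2(dy, dx) for dx, dy in flow_vectors]
    best_angle, best_inliers = 0.0, -1
    for _ in range(iters):
        cand = rng.choice(angles)  # sample a hypothesis from the data
        inliers = sum(
            1 for a in angles
            if abs(math.atan2(math.sin(a - cand), math.cos(a - cand))) < tol
        )
        if inliers > best_inliers:
            best_angle, best_inliers = cand, inliers
    return best_angle

def dead_reckon(pose, heading, step):
    """Integrate one ego-motion step: (x, y, theta) -> new pose."""
    x, y, _ = pose
    return (x + step * math.cos(heading),
            y + step * math.sin(heading),
            heading)
```

With eight consistent flow vectors at 0.5 rad and two outliers, the consensus heading comes back as 0.5 rad and the pose advances along it.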


Hong G.-S.,Sun Moon University | Kim B.-G.,Sun Moon University | Kwon K.-K.,Automotive IT Platform Research Team
International Journal of Distributed Sensor Networks | Year: 2014

Video sensor networking technologies have developed rapidly in the last ten years. In this paper, a cross-based framework for cost aggregation is presented for depth map estimation in video sensor networks. We formulate the process as a local regression problem consisting of two main steps over a pair of video sensors. The first step calculates estimates for a set of points within a shape-adaptive local support region. The second step aggregates the matching cost with a gradient-based weight of the support region at the outermost pixel. The proposed two-step algorithm achieves strong results in an efficient manner. We achieved improvements of up to 6.9%, 8.4%, and 8.3% compared with the adaptive support weight (ASW) algorithm. Compared with the cross-based algorithm, the proposed algorithm gains 2.0%, 1.3%, and 1.0% in terms of non-occlusion, all, and discontinuity regions, respectively. © 2014 Gwang-Soo Hong et al.
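The two steps can be illustrated in one dimension. This is a minimal sketch, not the paper's method: `cross_arms` grows a shape-adaptive support arm by intensity similarity (step one), and `aggregate_cost` sums equal-weight absolute-difference matching costs over that arm (step two; the paper's gradient-based weighting is omitted here for brevity).

```python
def cross_arms(row, p, tau=20, max_arm=5):
    """Grow the left/right arms of the cross at pixel p: extend while
    the intensity stays within tau of row[p] (shape-adaptive support)."""
    left = right = 0
    while (p - left - 1 >= 0 and left < max_arm
           and abs(row[p - left - 1] - row[p]) <= tau):
        left += 1
    while (p + right + 1 < len(row) and right < max_arm
           and abs(row[p + right + 1] - row[p]) <= tau):
        right += 1
    return left, right

def aggregate_cost(left_row, right_row, p, disparity, tau=20):
    """Average absolute-difference matching cost over the adaptive arm
    at pixel p for a candidate disparity (equal weights; the paper
    additionally applies a gradient-based weight, omitted here)."""
    l, r = cross_arms(left_row, p, tau)
    total, count = 0.0, 0
    for q in range(p - l, p + r + 1):
        if 0 <= q - disparity < len(right_row):
            total += abs(left_row[q] - right_row[q - disparity])
            count += 1
    return total / max(count, 1)
```

On a synthetic scanline pair shifted by two pixels, the aggregated cost is zero at the true disparity and positive elsewhere, which is what a winner-takes-all disparity search exploits.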


Hong G.-S.,Sun Moon University | Kim B.-G.,Sun Moon University | Hwang Y.-S.,Sun Moon University | Kwon K.-K.,Automotive IT Platform Research Team
Multimedia Tools and Applications | Year: 2015

The convergence of natural user interfaces (NUI) and advanced driver assistance systems is considered a next-generation technology, and such interfacing technology is becoming more popular in automotive driver assistance systems. In particular, pedestrian detection is an important cue for intelligent vehicles and interactive driver assistance systems. In this paper, we propose a pedestrian detection feature and technique that combine the histogram of oriented gradients (HOG) and the discrete wavelet transform (DWT). In the method, the magnitude of motion is used to set a region of interest (ROI), improving detection speed. We then build a multi-feature for pedestrian detection from the HOG and DWT. In the last stage, to classify whether a candidate window contains a pedestrian, the designed multi-feature is learned from the training data with a support vector machine (SVM). Experimental results show that the proposed algorithm achieves a 27.21 % speed-up compared with the existing method using the original HOG feature. © 2015 Springer Science+Business Media New York
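The multi-feature construction can be sketched in miniature. This is an illustrative toy, not the paper's descriptor layout: `hog_histogram` bins finite-difference gradient orientations, `haar_dwt_row` computes one level of the Haar DWT, and `multi_feature` concatenates the two into the vector that would feed the SVM.

```python
import math

def hog_histogram(patch, bins=8):
    """Coarse HOG: magnitude-weighted orientation histogram of
    finite-difference gradients over a 2-D intensity patch."""
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = (math.atan2(gy, gx) + math.pi) / (2 * math.pi)  # 0..1
            h[min(int(ang * bins), bins - 1)] += mag
    return h

def haar_dwt_row(row):
    """One level of the Haar DWT on a 1-D signal: pairwise averages
    (low band) and differences (high band)."""
    avg = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
    diff = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
    return avg, diff

def multi_feature(patch):
    """Concatenate the HOG histogram with the per-row Haar high band
    (which carries edge energy) into one descriptor for the SVM."""
    feat = hog_histogram(patch)
    for row in patch:
        _, diff = haar_dwt_row(row)
        feat.extend(diff)
    return feat
```

On a flat 4×4 patch the descriptor has 8 HOG bins plus 8 wavelet coefficients and is identically zero, since there are no gradients or pairwise differences to respond to.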


Choi Y.W.,Automotive IT Platform Research Team | Kim J.U.,Automotive IT Platform Research Team | Choi J.W.,Korea Advanced Institute of Science and Technology | Lee S.G.,Yeungnam University
Journal of Institute of Control, Robotics and Systems | Year: 2013

This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omnidirectional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multiple robots often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with a small angle of view, to enlarge the camera's view angle. In addition, to make up for the lack of image information about the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the robot formation based on the moving directions and velocities of the robots, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multiple robots through both simulation and experiment. © ICROS 2013.
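The control loop can be sketched in two pieces. Both helpers are hypothetical simplifications, not the paper's controller: `mean_flow` turns the Lucas-Kanade vectors inside a tracked robot region into that robot's direction and speed, and `formation_step` applies one proportional step toward the robot's slot relative to a neighbour.

```python
import math

def mean_flow(vectors):
    """Average the Lucas-Kanade flow vectors inside a tracked robot
    region to estimate that robot's moving direction and speed."""
    n = max(len(vectors), 1)
    dx = sum(v[0] for v in vectors) / n
    dy = sum(v[1] for v in vectors) / n
    return math.atan2(dy, dx), math.hypot(dx, dy)

def formation_step(pose, neighbour_pose, offset, gain=0.5):
    """One proportional formation-control step: drive this robot toward
    its desired slot (neighbour position plus a fixed offset), where
    the neighbour's pose comes from the tracked image region."""
    target = (neighbour_pose[0] + offset[0], neighbour_pose[1] + offset[1])
    ex, ey = target[0] - pose[0], target[1] - pose[1]
    return (pose[0] + gain * ex, pose[1] + gain * ey)
```

For instance, a robot at the origin whose neighbour sits at (2, 0), with a desired offset of (0, −1), moves halfway toward its slot in one step with gain 0.5.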


Goswami K.,Sun Moon University | Lee J.-H.,Sun Moon University | Jang K.-S.,Sun Moon University | Kim B.-G.,Sun Moon University | Kwon K.-K.,Automotive IT Platform Research Team
Journal of Real-Time Image Processing | Year: 2014

Abstract: Recently, high-efficiency video coding (HEVC) has been developed as a new video coding standard focused on ultra-high-definition video, as high-resolution, high-quality videos become more popular. However, one of the most important challenges of this new standard is its encoding time complexity, which makes it difficult to implement the HEVC encoder as a real-time system. In this paper, we address this problem in a new way. In a natural video sequence, a good proportion of coding blocks are "skip" in nature: they need not be transmitted and can be reconstructed at the decoder from the reference pictures. We propose an early skip detection technique for HEVC based on identifying motionless and homogeneous regions in a video sequence. Moreover, a novel entropy-difference calculation is proposed that predicts skip coding blocks more accurately in natural video sequences. Experimental results show that the proposed technique achieves more than 30 % encoding time reduction compared with the conventional HEVC encoder, with negligible degradation in video quality. © 2014 Springer-Verlag Berlin Heidelberg
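The detection criterion can be sketched as follows. The thresholds and the exact combination are assumptions for illustration, not the paper's decision rule: a block is flagged as an early-skip candidate when it is motionless (small SAD against the co-located reference block) and its entropy difference from the reference is small.

```python
import math
from collections import Counter

def entropy(block):
    """Shannon entropy (bits) of the pixel values in a flattened block."""
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def early_skip(cur_block, ref_block, ent_tol=0.05, sad_tol=4):
    """Flag a coding block as an early-skip candidate when it is both
    motionless (small sum of absolute differences against the
    co-located reference block) and unchanged in the
    entropy-difference sense. Thresholds are illustrative."""
    sad = sum(abs(a - b) for a, b in zip(cur_block, ref_block))
    ent_diff = abs(entropy(cur_block) - entropy(ref_block))
    return sad <= sad_tol and ent_diff <= ent_tol
```

A static background block passes both tests and its expensive mode search could be skipped; a block with new content fails the SAD test and goes through the full HEVC mode decision.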
