Yuan H.,Shandong University | Liu J.,Shandong University | Liu J.,Digital Multi Media Technology Co. | Liu J.,Nanjing Southeast University | And 3 more authors.
IEEE Transactions on Multimedia | Year: 2012

Zoom motion is classified into two categories: global zoom motion and local zoom motion. A simple affine motion model with four parameters is used to describe zoom motion efficiently, based on an analysis of camera imaging principles. With this motion model, a basic candidate motion vector (BCMV) of a block can be derived once the model parameters are determined. A set of candidate motion vectors (CMVs) is then obtained by modifying the BCMV, and template matching is used to choose the optimal CMV (OCMV). Finally, the block is coded with the OCMV as an independent mode, and a rate-distortion (RD) criterion is used to decide whether this mode is selected. Experimental results demonstrate that, with the proposed method implemented in the Key Technology Area test platform version 2.6r1 (KTA2.6r1), a maximum of -21.99% and an average of -8.72% bit rate savings are achieved for videos involving zoom motion at the same reconstructed video quality (evaluated by PSNR) under the IPPPP coding structure. Under the hierarchical B coding structure, the maximum and average bit rate savings are -10.48% and -5.575%, again with unchanged reconstructed video quality. In addition, an average bit rate saving of -2.04% is achieved for videos involving camera rotation, translation, etc., and an average saving of -1.19% for videos containing only common motion, at the same reconstructed video quality. © 2012 IEEE.
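A minimal sketch of the candidate-derivation and template-matching idea described above, not the authors' KTA2.6r1 implementation: a hypothetical four-parameter model (uniform scale, rotation, translation) yields a BCMV for a block, the BCMV is perturbed into a small CMV set, and the optimal CMV is selected by SAD template matching on previously reconstructed pixels. All function and parameter names are illustrative, and bounds checks are omitted for brevity.

```python
import numpy as np

def bcmv_from_affine(x, y, scale, theta, tx, ty):
    """Map a block centre (x, y) through a four-parameter model (uniform
    scale, rotation, translation) and return the implied displacement,
    i.e. the basic candidate motion vector (BCMV)."""
    xr = scale * (np.cos(theta) * x - np.sin(theta) * y) + tx
    yr = scale * (np.sin(theta) * x + np.cos(theta) * y) + ty
    return int(round(xr - x)), int(round(yr - y))

def template_sad(recon, ref, bx, by, mv, bsize=8, tw=4):
    """SAD between the L-shaped template (top/left neighbours of the current
    block in the reconstructed frame) and the co-located template in the
    reference frame displaced by mv. Frame-border handling is omitted."""
    dx, dy = mv
    cur = recon[by - tw:by + bsize, bx - tw:bx + bsize].astype(np.int64)
    ref_t = ref[by - tw + dy:by + bsize + dy,
                bx - tw + dx:bx + bsize + dx].astype(np.int64)
    mask = np.ones_like(cur, dtype=bool)
    mask[tw:, tw:] = False          # exclude the not-yet-coded block itself
    return int(np.abs(cur - ref_t)[mask].sum())

def optimal_cmv(recon, ref, bx, by, bcmv, search=1):
    """Perturb the BCMV within a small window and keep the best match."""
    cands = [(bcmv[0] + i, bcmv[1] + j)
             for i in range(-search, search + 1)
             for j in range(-search, search + 1)]
    return min(cands, key=lambda mv: template_sad(recon, ref, bx, by, mv))
```

In a codec this mode would then compete against the regular inter modes through the RD criterion mentioned in the abstract.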


Yuan H.,Shandong University | Yuan H.,Xidian University | Yuan H.,Digital Multi Media Technology Company | Chang Y.,Xidian University | And 3 more authors.
IEEE Transactions on Circuits and Systems for Video Technology | Year: 2011

In 3-D video coding, texture videos and depth maps need to be jointly coded, and the distortion of both can propagate to the synthesized virtual views. Besides the coding efficiency of texture videos and depth maps, joint bit allocation between them is therefore an important research issue in 3-D video coding. We first present a comprehensive analysis of the impact of the compression distortion of texture videos and depth maps on the quality of the virtual views, and then derive a concise distortion model for the synthesized virtual views. Based on this model, the joint bit allocation problem is formulated as a constrained optimization problem and solved with the Lagrangian multiplier method. Experimental results demonstrate the high accuracy of the derived distortion model. The rate-distortion (R-D) performance of the proposed algorithm is close to that of search-based algorithms, which give the best R-D performance, while its complexity is lower. Moreover, compared with a bit allocation method using a fixed texture-to-depth bit ratio (5:1), a maximum gain of 1.2 dB is achieved by the proposed algorithm. © 2006 IEEE.
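A minimal sketch of Lagrangian bit allocation between texture rate R_t and depth rate R_d. The distortion model D(R_t, R_d) = a/R_t + b/R_d used here is an illustrative placeholder, not the synthesized-view distortion model derived in the paper; in practice a and b would be fitted to measured virtual-view distortion.

```python
from math import sqrt

def allocate_bits(a, b, r_total):
    """Closed-form split of r_total that minimizes a/R_t + b/R_d subject to
    R_t + R_d = r_total (stationary point of the Lagrangian
    a/R_t + b/R_d + lam * (R_t + R_d - r_total))."""
    r_t = r_total * sqrt(a) / (sqrt(a) + sqrt(b))
    r_d = r_total - r_t
    return r_t, r_d

# Illustrative numbers only: texture distortion weighted 5x more than depth.
r_t, r_d = allocate_bits(a=5.0, b=1.0, r_total=1200.0)   # e.g. kbit/s
print(f"texture: {r_t:.1f}, depth: {r_d:.1f}, ratio {r_t / r_d:.2f}:1")
```

With these placeholder weights the optimum lands at roughly a 2.24:1 texture-to-depth split rather than a fixed 5:1 ratio, which is the kind of difference the constrained formulation is meant to capture.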


Li X.,Shandong University | Liu J.,Shandong University | Liu J.,Digital Multi Media Technology Co. | Sun J.,Shandong University | And 2 more authors.
IET Information Security | Year: 2011

Quantisation index modulation (QIM) is an important class of watermarking methods that has been widely used in blind watermarking applications. It is well known that spread transform dither modulation (STDM), as an extension of QIM, is robust against random noise and re-quantisation. However, the quantisation step sizes used in STDM are random numbers that do not take features of the image into account. The authors present a step projection-based approach to incorporate a perceptual model into the STDM framework. Four implementations of the proposed algorithm are further presented, corresponding to different modified versions of the perceptual model. Experimental results indicate that the step projection-based approach incorporates the perceptual model into the STDM framework more effectively, thereby providing a significant improvement in image fidelity. Compared with previously proposed modified STDM schemes, the authors' best-performing implementation provides strong resistance against common attacks, especially Gaussian noise, salt-and-pepper noise and JPEG compression. © 2011 The Institution of Engineering and Technology.
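A minimal sketch of plain spread-transform dither modulation, the baseline the step-projection method builds on; the perceptual step-projection itself is not reproduced, and in that setting delta would be derived from the perceptual slacks rather than fixed as below. All names are illustrative.

```python
import numpy as np

def stdm_embed(x, u, bit, delta):
    """Embed one bit into host vector x along the spread direction u."""
    u = u / np.linalg.norm(u)
    d = 0.0 if bit == 0 else delta / 2.0          # dither for bit 0 / bit 1
    p = x @ u                                     # spread-transform projection
    q = delta * np.round((p - d) / delta) + d     # quantize the projection
    return x + (q - p) * u                        # move x only along u

def stdm_detect(y, u, delta):
    """Recover the bit from the quantizer lattice nearest to the projection."""
    u = u / np.linalg.norm(u)
    p = y @ u
    err0 = abs(p - delta * np.round(p / delta))
    err1 = abs(p - (delta * np.round((p - delta / 2) / delta) + delta / 2))
    return 0 if err0 <= err1 else 1

rng = np.random.default_rng(0)
x = rng.normal(size=64)                 # e.g. a block of DCT coefficients
u = rng.normal(size=64)                 # spread (projection) vector
y = stdm_embed(x, u, bit=1, delta=4.0)
noisy = y + rng.normal(scale=0.3, size=64)        # simulated channel noise
print("decoded bit:", stdm_detect(noisy, u, delta=4.0))   # expected: 1
```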


Wang L.,Shandong University | Liu J.,Shandong University | Liu J.,Digital Multi Media Technology Co. | Sun J.,Shandong University | And 3 more authors.
3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2011 - Proceedings | Year: 2011

Virtual view synthesis is considered a crucial technique for three-dimensional television (3DTV) display, and depth-image-based rendering (DIBR) is a key technology for it. To improve virtual image quality, a method that does not preprocess the depth image is proposed. During synthesis, the hole-flag map is fully utilized, and a Horizontal, Vertical and Diagonal Extrapolation (HVDE) algorithm using depth information is proposed to fill tiny cracks. After blending, the main virtual view image is obtained; the image generated by filtering the depth image is then used as an auxiliary image to fill small holes in the main image. Experimental results show that the proposed method achieves better performance in both subjective quality and objective evaluation. © 2011 IEEE.
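A minimal sketch of the crack-filling idea, not the paper's exact HVDE rule: pixels flagged in the hole-flag map that still have warped neighbours in most of the horizontal, vertical and diagonal directions are treated as thin cracks and filled from the neighbour judged to be background. Treating a smaller depth value as farther away is an assumption about the depth-map convention.

```python
import numpy as np

def fill_cracks(color, depth, hole_flag):
    """color: HxWx3, depth: HxW, hole_flag: HxW bool (True = not warped)."""
    out, d_out, flag = color.copy(), depth.copy(), hole_flag.copy()
    h, w = depth.shape
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0),
            (1, 1), (-1, -1), (1, -1), (-1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not hole_flag[y, x]:
                continue
            cands = [(float(depth[y + dy, x + dx]), (y + dy, x + dx))
                     for dy, dx in dirs if not hole_flag[y + dy, x + dx]]
            if len(cands) >= 5:              # thin crack: most neighbours exist
                _, (ny, nx) = min(cands)     # farthest (background) neighbour
                out[y, x] = color[ny, nx]
                d_out[y, x] = depth[ny, nx]
                flag[y, x] = False           # large disocclusions stay flagged
    return out, d_out, flag
```

Larger disocclusions remain flagged and are left to the blending and auxiliary-image steps described in the abstract.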


Yang X.,Shandong University | Yang X.,Digital Multi Media Technology Co. | Liu J.,Shandong University | Liu J.,Digital Multi Media Technology Co. | And 4 more authors.
3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2011 - Proceedings | Year: 2011

We propose an effective virtual view synthesis approach that utilizes depth-image-based rendering (DIBR). In our scheme, two reference color images and their associated depth maps are used to generate an arbitrary virtual viewpoint. First, the main and auxiliary viewpoint images are warped to the virtual viewpoint. Cracks and error points are then removed to enhance image quality, and the disocclusions of the virtual viewpoint image warped from the main viewpoint are complemented with the help of the auxiliary viewpoint. To reduce color discontinuity in the virtual view, the brightness of the two reference viewpoint images is adjusted. Finally, the remaining holes are filled by the depth-assisted asymmetric dilation inpainting method. Simulations show that the view synthesis approach is effective and reliable in both subjective and objective evaluations. © 2011 IEEE.
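A minimal sketch of two of the steps above, not the full pipeline: the auxiliary warped view is brightness-adjusted toward the main warped view using the pixels both views cover, and the main view's disocclusions are then complemented from the adjusted auxiliary view. The per-channel gain/offset model and all names are assumptions.

```python
import numpy as np

def complement_disocclusions(main, main_holes, aux, aux_holes):
    """main/aux: HxWx3 float arrays; *_holes: HxW bool (True = missing)."""
    both = ~main_holes & ~aux_holes                 # region covered by both
    out = main.copy()
    for c in range(3):                              # per-channel gain/offset
        gain = main[..., c][both].std() / (aux[..., c][both].std() + 1e-6)
        offset = main[..., c][both].mean() - gain * aux[..., c][both].mean()
        aux_adj = gain * aux[..., c] + offset       # brightness adjustment
        fill = main_holes & ~aux_holes              # holes only in main view
        out[..., c][fill] = aux_adj[fill]
    remaining = main_holes & aux_holes              # left for inpainting
    return out, remaining
```

Pixels missing in both warped views are returned as a mask and would be handled by the final inpainting stage.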


Yu F.,Shandong University | Liu J.,Shandong University | Liu J.,Digital Multi Media Technology Co. | Ren Y.,Shandong University | And 3 more authors.
3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2011 - Proceedings | Year: 2011

An efficient depth map generation method is presented for static scenes containing moving objects. First, the static background scene is reconstructed and its depth map is extracted using linear perspective. Moving objects are then segmented precisely, and depth values are assigned to them according to their positions in the static scene. Finally, the depth values of the static background scene and the moving objects are integrated into one depth map. Experimental results show that the proposed method generates smooth and reliable depth maps. © 2011 IEEE.
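A minimal sketch of the integration idea only, under strong assumptions: the background depth is modeled as a linear-perspective ramp (nearer at the bottom of the frame), and each segmented moving object is assigned the background depth at its lowest point, where it is assumed to touch the ground. This illustrates the position-based assignment, not the paper's method.

```python
import numpy as np

def background_depth(h, w, near=255, far=0):
    """Linear perspective: depth value decreases from image bottom to top."""
    ramp = np.linspace(far, near, h).reshape(h, 1)
    return np.repeat(ramp, w, axis=1).astype(np.uint8)

def integrate(bg_depth, object_masks):
    """object_masks: list of HxW bool masks of segmented moving objects."""
    depth = bg_depth.copy()
    for mask in object_masks:
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            continue
        y_bottom = ys.max()                  # assumed ground-contact row
        depth[mask] = bg_depth[y_bottom, xs[ys.argmax()]]
    return depth
```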


Wang L.,Shandong University | Liu J.,Shandong University | Liu J.,Digital Multi Media Technology Co. | Sun J.,Shandong University | And 2 more authors.
International Conference on Communication Technology Proceedings, ICCT | Year: 2011

For each view, color and depth information are the essential elements of 3DTV. In practice, however, not all depth cameras provide color information, so propagating color information from a reference view to the depth camera's view is important. Depth-Image-Based Rendering (DIBR) is considered a crucial technology for creating virtual views, but it still has some problems. To improve the quality of the virtual image, a method that does not pre-process the depth image is proposed. Only one reference view is used, and a novel way to calculate the priorities of points in 3D image warping is proposed. After tiny cracks are eliminated with the HVDE algorithm, a linear background texture filling method is applied to fill the holes. Experimental results show that the proposed method achieves better performance in both subjective quality and objective evaluation. © 2011 IEEE.
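A minimal sketch of a linear background-texture hole-filling step; the warping-priority rule and the HVDE crack removal are not reproduced. For each hole run in a row, the border pixel with the smaller depth value is treated as background (an assumption about the depth convention) and its colour is propagated across the run.

```python
import numpy as np

def fill_holes_rows(color, depth, holes):
    """color: HxWx3, depth: HxW, holes: HxW bool (True = disocclusion)."""
    out = color.copy()
    h, w = depth.shape
    for y in range(h):
        x = 0
        while x < w:
            if not holes[y, x]:
                x += 1
                continue
            start = x
            while x < w and holes[y, x]:
                x += 1                            # x now points past the run
            left, right = start - 1, x            # borders of the hole run
            if left < 0 and right >= w:
                continue                          # whole row missing, skip
            if left < 0:
                src = right
            elif right >= w:
                src = left
            else:                                 # pick the background border
                src = left if depth[y, left] <= depth[y, right] else right
            out[y, start:x] = color[y, src]
    return out
```

A real implementation would fill along the direction away from the foreground edge and blend rather than replicate a single pixel, but the background-selection logic is the relevant point here.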
