
Zhou L., Key Laboratory of Broadband Wireless Communication and Sensor Network Technology | Wang H., TCL Research America | Guizani M., Qatar University
IEEE Transactions on Communications

In this work, we investigate the impact of mobility on video streaming over multi-hop wireless networks by utilizing a class of scheduling schemes. We show that node spatial mobility can improve video quality and reduce transmission delay without the help of advanced video coding techniques. To describe a practical mobile scenario, we consider a random walk mobility model in which each node randomly and independently chooses its mobility direction, and all nodes identically and uniformly visit the entire network. The contributions of this work are twofold: 1) it studies the optimal node velocity for the mobile video system, in which case it is possible to achieve almost constant transmission delay and video quality as the number of nodes increases; 2) it derives an achievable quality-delay tradeoff range for different node velocities. These results shed light on network design and provide fundamental guidelines for establishing an efficient mobile video transmission system. © 1972-2012 IEEE.
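The random walk mobility model described in this abstract can be sketched in a few lines: each node independently picks a random direction every time slot and moves at a fixed velocity on a unit torus, so every node eventually visits the whole network uniformly. This is a minimal illustrative simulation, not the paper's actual scheduling scheme; all names and parameter values are assumptions.

```python
import numpy as np

def simulate_random_walk(num_nodes, num_steps, v, seed=0):
    """Toy random walk mobility model on a unit torus.

    Each node picks an independent, uniformly random direction every
    time slot and moves distance v; positions wrap around the network
    boundary, so the stationary node distribution is uniform.
    """
    rng = np.random.default_rng(seed)
    pos = rng.random((num_nodes, 2))  # start uniformly in [0, 1)^2
    for _ in range(num_steps):
        theta = rng.uniform(0.0, 2.0 * np.pi, num_nodes)  # random directions
        pos += v * np.column_stack((np.cos(theta), np.sin(theta)))
        pos %= 1.0  # wrap around: torus-shaped network area
    return pos

positions = simulate_random_walk(num_nodes=50, num_steps=1000, v=0.01)
```

The velocity parameter `v` is the quantity the paper optimizes; sweeping it in a simulation like this is one way to observe the quality-delay tradeoff empirically.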

Fleites F.C., Senzari Inc. | Wang H., TCL Research America | Chen S.-C., Florida International University
IEEE Transactions on Multimedia

Smart TVs have realized the convergence of TV, Internet, and PC technologies, but they still do not provide seamless content interaction for TV-enabled shopping. To purchase interesting items displayed in a TV show, consumers must resort to a store or the Web, which is an inconvenient way of purchasing products. The fundamental challenge in realizing such a use case is understanding the multimedia content being streamed. This challenge can be addressed by utilizing object detection to facilitate content understanding, though detection has to be executed as a computationally bound process so that consumers are provided with a responsive and engaging user interface. To this end, we propose a computational- and temporal-aware multimedia abstraction framework that facilitates the efficient execution of object detection tasks. Given computational and temporal rate constraints, the proposed framework selects the optimal video frames that best represent the video content and allows the execution of the object detection task as a computationally bound process. In this sense, the framework is computationally scalable, as it can adapt to the given constraints and generate optimal abstraction results accordingly. Additionally, the framework utilizes 'object views' as the basis for the frame selection process; these depict salient information and are represented as regions of interest (ROIs). In general, an ROI can be a whole frame or a region that discards background information. Experimental results demonstrate the computational scalability of the proposed framework and the benefits of using regions of interest as the basis of the abstraction process. © 1999-2012 IEEE.
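The core idea of selecting, under a computational budget, the frames that best represent the video content can be illustrated with a simple greedy diversity heuristic: repeatedly pick the frame whose feature vector is farthest from everything already selected. This is only a stand-in sketch for the abstraction step, not the paper's actual selection algorithm; the feature vectors and budget are assumed inputs.

```python
import numpy as np

def select_frames(features, budget):
    """Greedy farthest-point frame selection.

    features: (num_frames, dim) array of per-frame descriptors.
    budget:   maximum number of frames the detector can afford to process.
    Returns the sorted indices of the selected representative frames.
    """
    n = features.shape[0]
    selected = [0]  # seed with the first frame
    # distance of every frame to the nearest selected frame so far
    dists = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < min(budget, n):
        nxt = int(np.argmax(dists))  # most "novel" remaining frame
        selected.append(nxt)
        dists = np.minimum(dists,
                           np.linalg.norm(features - features[nxt], axis=1))
    return sorted(selected)
```

Under a tighter budget the same routine simply returns fewer, more spread-out frames, which mirrors the computational scalability the framework aims for.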

Zhu Q., University of Miami | Shyu M.-L., University of Miami | Wang H., TCL Research America
Proceedings - 2013 IEEE International Symposium on Multimedia, ISM 2013

Most video recommender systems limit the content to the metadata associated with the videos, which can lead to poor results since metadata is not always available or correct. Meanwhile, the visual information of videos is typically not fully explored, even though it is especially important for recommending new items with limited metadata. In this paper, a novel content-based video recommendation framework, called VideoTopic, that utilizes a topic model is proposed. It decomposes the recommendation process into video representation and recommendation generation. It aims to capture user interests in videos by using a topic model to represent the videos, and then generates recommendations by finding the videos that best fit the topic distribution of the user interests. Experimental results on the MovieLens dataset validate the effectiveness of VideoTopic by evaluating each of its components and the whole framework. © 2013 IEEE.
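The "recommendation generation" step described above, matching videos against the topic distribution of a user's interests, can be sketched as follows. Here each video is already represented as a topic-mixture vector (the output of some topic model), the user profile is the mean of the watched videos' topic vectors, and similarity is cosine; the matching criterion is an illustrative assumption, not necessarily the measure used in the paper.

```python
import numpy as np

def recommend(video_topics, watched_ids, top_k=3):
    """Rank unwatched videos by fit to the user's topic profile.

    video_topics: (num_videos, num_topics) topic-mixture vectors.
    watched_ids:  indices of videos the user has already watched.
    """
    profile = video_topics[watched_ids].mean(axis=0)  # user interest profile
    # cosine similarity between the profile and each video's topic mix
    sims = video_topics @ profile / (
        np.linalg.norm(video_topics, axis=1) * np.linalg.norm(profile) + 1e-12)
    sims[watched_ids] = -np.inf  # never re-recommend watched videos
    return np.argsort(sims)[::-1][:top_k]
```

Because the ranking depends only on content-derived topic vectors, it also works for new videos with no metadata, which is the cold-start case the abstract highlights.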

Li H., University of Vermont | Wu X., University of Vermont | Li Z., TCL Research America | Ding W., University of Massachusetts Boston
Proceedings - IEEE International Conference on Data Mining, ICDM

Group feature selection makes use of structural information among features to discover a meaningful subset of features. Existing group feature selection algorithms only deal with pre-given candidate feature sets and are incapable of handling streaming features. On the other hand, feature selection algorithms targeted at streaming features can only operate at the individual feature level, without considering the intrinsic group structure of the features. In this paper, we perform group feature selection with streaming features: feature selection is performed at the group and individual feature levels simultaneously over a feature stream, rather than over a pre-given candidate feature set. In our approach, the group structures are fully utilized to reduce the cost of evaluating streaming features. We have extensively evaluated the proposed method; experimental results demonstrate that our algorithms statistically outperform state-of-the-art feature selection methods in terms of classification accuracy. © 2013 IEEE.
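The streaming setting described above, evaluating one arriving feature at a time while exploiting its group membership, can be illustrated with a simple correlation-based filter: keep a feature only if it is relevant to the target and not redundant with features already kept from its own group. The relevance and redundancy criteria here are generic assumptions for illustration; the paper's actual statistical tests differ.

```python
import numpy as np

def stream_select(X, y, groups, rel_thresh=0.1, red_thresh=0.9):
    """Toy streaming group feature selection.

    Features arrive one at a time (columns of X, in order). A feature is
    kept only if it correlates with the target y and is not redundant
    with already-selected features from the same group; restricting the
    redundancy check to the group cuts the evaluation cost.
    """
    selected = []
    for j in range(X.shape[1]):
        rel = abs(np.corrcoef(X[:, j], y)[0, 1])
        if rel < rel_thresh:
            continue  # discard irrelevant feature immediately
        same_group = [i for i in selected if groups[i] == groups[j]]
        if any(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) > red_thresh
               for i in same_group):
            continue  # redundant within its own group
        selected.append(j)
    return selected
```

Note that each arriving feature is compared only against its group mates rather than all selected features, which is the cost saving the abstract attributes to using group structure.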

Yang Y., Florida International University | Fleites F.C., Florida International University | Wang H., TCL Research America | Chen S.-C., Florida International University
Proceedings - 2013 IEEE International Symposium on Multimedia, ISM 2013

In this paper, we propose a novel framework for object retrieval based on automatic foreground object extraction and multi-layer information integration. Specifically, objects of user interest are first detected in unconstrained videos via a multimodal-cue method; then an automatic object extraction algorithm based on GrabCut is applied to separate the foreground object from the background. The object-level information is enhanced in the feature extraction layer by assigning different weights to foreground and background pixels, and the spatial color and texture information is integrated in the similarity calculation layer. Experimental results on both a benchmark data set and a real-world data set demonstrate the effectiveness of the proposed framework. © 2013 IEEE.
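The feature-extraction step above, weighting foreground pixels more heavily than background pixels once a GrabCut-style mask is available, can be sketched as a weighted histogram. The specific weights, bin count, and grayscale simplification are illustrative assumptions; the paper operates on color and texture features.

```python
import numpy as np

def weighted_color_hist(image, fg_mask, w_fg=0.8, w_bg=0.2, bins=8):
    """Histogram where foreground pixels count more than background.

    image:   2-D array of grayscale intensities in [0, 255].
    fg_mask: boolean array, True where a GrabCut-style segmentation
             marked the pixel as foreground.
    Returns a normalized histogram emphasizing object-level information.
    """
    weights = np.where(fg_mask, w_fg, w_bg).ravel()
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256),
                           weights=weights)
    return hist / hist.sum()
```

Comparing such foreground-emphasized histograms between a query object and candidate frames is one simple way the enhanced object-level information can feed the similarity calculation layer.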
