Parolin A.,University of the Rio dos Sinos Valley | Fickel G.P.,Federal University of Rio Grande do Sul | Jung C.R.,Federal University of Rio Grande do Sul | Malzbender T.,HP Labs Palo Alto | Samadani R.,HP Labs Palo Alto
Proceedings - IEEE International Conference on Multimedia and Expo | Year: 2011

This paper presents a new bilayer video segmentation algorithm focused on videoconferencing applications. A face tracking algorithm is used to guide a generic Ω-shaped template of the head and shoulders. A region of interest (ROI) is created around the generic template, and an energy function based on edge, color and motion cues is used to define the boundary between the person and the background. Our experimental results indicate that the silhouettes can be effectively extracted in common videoconferencing scenarios. © 2011 IEEE.
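A minimal sketch of the cue-combination idea described in this abstract, assuming normalized cue maps, hand-picked weights, and a simple thresholding rule; the function names and weights below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def foreground_energy(edge_map, color_prob, motion_mag,
                      w_edge=0.4, w_color=0.4, w_motion=0.2):
    """Combine edge, color, and motion cues into one per-pixel energy."""
    def norm(x):
        x = x.astype(np.float64)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return (w_edge * norm(edge_map) +
            w_color * norm(color_prob) +
            w_motion * norm(motion_mag))

def segment_person(energy, roi_mask, threshold=0.5):
    """Label as foreground the ROI pixels whose combined energy is high;
    everything outside the ROI around the template stays background."""
    return (energy > threshold) & roi_mask
```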


Wu S.,HP Labs Palo Alto | Raschid L.,University of Maryland University College
WSDM 2014 - Proceedings of the 7th ACM International Conference on Web Search and Data Mining | Year: 2014

Microblogs such as Twitter support a rich variety of user interactions using hashtags, URLs, retweets and mentions. Microblogs are an exemplar of a hybrid network; there is an explicit network of followers, as well as an implicit network of users who retweet other users, and users who mention other users. These networks are important proxies for influence. In this paper, we develop a comprehensive behavioral model of an individual user and her interactions in the hybrid network. We choose a focal user and predict those users who will be influenced by her, and will retweet and/or mention the focal user, in the near future. We define a potential function, based on a hybrid network, which reflects the likelihood of a candidate user being influenced by, and having a specific type of link to, a focal user in the future. We show that the potential-function-based prediction model converges to the Bonacich centrality metric. We develop a fast unsupervised solution which approximates the future hybrid network and the future Bonacich potential. We perform an extensive evaluation over a microblog network and a stream of tweets from Twitter. Our solution outperforms several baseline methods, including ones based on singular value decomposition (SVD) and a supervised Ranking SVM. © 2014 ACM.
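As background for the convergence result mentioned above, a small sketch of Bonacich centrality computed on a weighted hybrid adjacency matrix; the fixed weights used to combine the follower, retweet, and mention networks here are an assumption for illustration, not the paper's model:

```python
import numpy as np

def hybrid_adjacency(A_follow, A_retweet, A_mention, w=(0.5, 0.3, 0.2)):
    """Weighted combination of the explicit and implicit networks."""
    return w[0] * A_follow + w[1] * A_retweet + w[2] * A_mention

def bonacich_centrality(A, beta=0.05, alpha=1.0):
    """c(alpha, beta) = alpha * (I - beta*A)^(-1) * A * 1.
    beta must satisfy |beta| < 1 / lambda_max(A) for the inverse to exist."""
    n = A.shape[0]
    return alpha * np.linalg.solve(np.eye(n) - beta * A, A @ np.ones(n))
```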


Graefe G.,HP Labs Palo Alto | Volos H.,HP Labs Palo Alto | Kimura H.,HP Labs Palo Alto | Kuno H.,HP Labs Palo Alto | And 3 more authors.
Proceedings of the VLDB Endowment | Year: 2014

When a working set fits into memory, the overhead imposed by the buffer pool renders traditional databases noncompetitive with in-memory designs that sacrifice the benefits of a buffer pool. However, despite the large memory available with modern hardware, data skew, shifting workloads, and complex mixed workloads make it difficult to guarantee that a working set will fit in memory. Hence, some recent work has focused on enabling in-memory databases to protect performance when the working data set almost fits in memory. Contrary to those prior efforts, we enable buffer pool designs to match in-memory performance while supporting the "big data" workloads that continue to require secondary storage, thus providing the best of both worlds. We introduce here a novel buffer pool design that adapts pointer swizzling for references between system objects (as opposed to application objects), and uses it to practically eliminate buffer pool overheads for memory-resident data. Our implementation and experimental evaluation demonstrate that we achieve graceful performance degradation when the working set grows to exceed the buffer pool size, and graceful improvement when the working set shrinks towards and below the memory and buffer pool sizes. © 2014 VLDB Endowment.
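A toy sketch of the pointer-swizzling idea the paper adapts: a parent node's child reference holds either a disk page id (unswizzled, resolved through the buffer pool's lookup table) or a direct in-memory frame reference (swizzled, no lookup on the hot path). All class and method names below are invented for illustration:

```python
class Frame:
    def __init__(self, page_id, data):
        self.page_id = page_id
        self.data = data

class Node:
    def __init__(self, children):
        self.children = children          # each entry: a page id or a Frame

class BufferPool:
    def __init__(self):
        self.frames = {}                  # page_id -> Frame (classic lookup table)

    def fix(self, ref):
        """Resolve a child reference to an in-memory frame."""
        if isinstance(ref, Frame):        # swizzled: already a direct pointer
            return ref
        frame = self.frames.get(ref)      # unswizzled: hash lookup by page id
        if frame is None:
            frame = self._read_from_disk(ref)
            self.frames[ref] = frame
        return frame

    def swizzle(self, parent, slot):
        """Replace a page-id reference with a direct frame reference; it must
        be unswizzled again before that frame can be evicted."""
        parent.children[slot] = self.fix(parent.children[slot])

    def _read_from_disk(self, page_id):
        return Frame(page_id, data=b"...")  # placeholder for real I/O
```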


Yang S.,HP Labs China | Jin J.,HP Labs China | Parag J.,HP Labs Palo Alto | Liu S.,HP Labs Palo Alto
DocEng2010 - Proceedings of the 2010 ACM Symposium on Document Engineering | Year: 2010

Advertisements provide the necessary revenue model supporting the Web ecosystem and its rapid growth. Targeted or contextual ad insertion plays an important role in optimizing the financial return of this model. Nearly all current ad payment strategies, such as "pay-per-impression" and "pay-per-click" on web pages, are geared toward electronic viewing. Little attention, however, has been paid to deriving additional ad revenue when the content is repurposed for an alternative means of presentation, e.g. being printed. Although more and more content is moving to the Web, there are still many occasions where printed output of web content or RSS feeds is desirable, such as maps and articles; thus printed ad insertion can potentially be lucrative. In this paper, we describe a cloud-based printing service that enables automatic contextual ad insertion, with respect to the main web page content, when a printout of the page is requested. To encourage service utilization, it provides higher quality printouts than what is possible from current browser print drivers, which generally produce poor outputs: ill-formatted pages with lots of unwanted information, e.g. navigation icons. At this juncture we limit the scope to article-related web pages, although the concept can be extended to arbitrary web pages. The key components of this system include (1) automatic extraction of the article from web pages, (2) the ad service network for ad matching and delivery, and (3) joint content and ad printout creation. Copyright 2010 ACM.
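A small sketch of what the ad-matching component (component 2 above) might look like as keyword overlap between the extracted article and candidate ads; the scoring rule and data shapes are assumptions made for illustration only:

```python
def match_ads(article_text, ads, top_k=3):
    """ads: list of (ad_id, keyword_list) pairs; returns up to top_k ad ids
    ranked by keyword overlap with the article text."""
    article_terms = set(article_text.lower().split())
    scored = [(len(article_terms & {k.lower() for k in kws}), ad_id)
              for ad_id, kws in ads]
    scored.sort(reverse=True)
    return [ad_id for score, ad_id in scored[:top_k] if score > 0]

# Example: an article about printable travel maps matches the travel ad.
ads = [("a1", ["maps", "travel"]), ("a2", ["camera", "photo"])]
print(match_ads("printable travel maps for hiking", ads))   # ['a1']
```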


Lim S.H.,HP Labs Palo Alto | Zheng L.,HP Labs China | Jin J.,HP Labs China | Hou H.,China HP Co Ltd. | And 2 more authors.
DocEng2010 - Proceedings of the 2010 ACM Symposium on Document Engineering | Year: 2010

The user experience of printing web pages has not been very good. Web pages typically contain content that is not printworthy or informative, such as side bars, footers, headers, advertisements, and auxiliary information intended for further browsing. Since the inclusion of such content degrades the web printing experience, we have developed a tool that first selects the main part of the web page automatically and then allows users to make adjustments. In this paper, we describe the algorithm for selecting the main content automatically during the first pass. The web page is first segmented into several coherent areas or blocks using our web page segmentation method, which clusters content based on the affinity values between basic elements. Relative importance values for the segmented blocks are computed using various features, and the main content is extracted under the constraint of a single DOM (Document Object Model) sub-tree with high importance scores. We evaluated our algorithm on 65 web pages and computed the accuracy based on the area of overlap between the ground truth and the result extracted by the algorithm. Copyright 2010 ACM.
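A hedged sketch of the scoring-and-extraction stage described above: score each segmented block and keep the DOM sub-tree whose blocks accumulate the highest importance. The features, weights, and block representation below are assumptions, not the paper's exact feature set:

```python
from collections import defaultdict

def block_importance(block, w_area=0.5, w_text=0.4, w_center=0.1):
    """block: dict with normalized 'area', 'text_density', and 'centrality'."""
    return (w_area * block["area"] +
            w_text * block["text_density"] +
            w_center * block["centrality"])

def main_content_subtree(blocks):
    """blocks: list of dicts, each tagged with the 'dom_path' of its sub-tree
    root; returns the path satisfying the single-sub-tree constraint with the
    highest accumulated importance."""
    totals = defaultdict(float)
    for b in blocks:
        totals[b["dom_path"]] += block_importance(b)
    return max(totals, key=totals.get)
```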


Bartolini C.,University of Ferrara | Bartolini C.,HP Labs Palo Alto | Stefanelli C.,University of Ferrara
Proceedings of the 12th IFIP/IEEE International Symposium on Integrated Network Management, IM 2011 | Year: 2011

Business-driven IT management (BDIM) aims at ensuring successful alignment of business and IT through a thorough understanding of the impact of IT on business processes and business results, and vice versa. This thesis reviews the state of the art of BDIM research and advances it by contributing a comprehensive BDIM solution for the process of IT incident management. The solution can be used as a template for applying the BDIM methodology to other IT service management processes. The work presented in this dissertation resulted in three patent applications and is at the core of the HP IT Analytics™ product (formerly HP DecisionCenter™). This thesis was defended in March 2009. © 2011 IEEE.


Fickel G.P.,Federal University of Rio Grande do Sul | Jung C.R.,Federal University of Rio Grande do Sul | Samadani R.,HP Labs Palo Alto | Malzbender T.,HP Labs Palo Alto
Proceedings - International Conference on Image Processing, ICIP | Year: 2012

In this paper we propose a new disparity map estimation algorithm from multiple rectified images. The reference image is initially segmented into triangular regions, and each triangle is assigned an initial disparity, leading to a piecewise constant disparity map. A refinement step is then applied, in which the disparities at the triangle vertices are adjusted to impose spatial consistency on the disparity map, smoothing the map within objects while keeping the discontinuities between them. The final result is a piecewise linear depth map on a triangular domain that can be exploited for view synthesis. © 2012 IEEE.
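An illustrative sketch of the per-triangle initialization step: each triangle receives the constant integer disparity with the lowest matching cost between the rectified views. The mean-absolute-difference cost and the mask-based triangle representation are simplifying assumptions, not the paper's formulation:

```python
import numpy as np

def triangle_disparity(left, right, tri_mask, d_max):
    """left, right: rectified grayscale images (H x W); tri_mask: boolean mask
    of the triangle's pixels in the left image. Returns the best disparity."""
    ys, xs = np.nonzero(tri_mask)
    best_d, best_cost = 0, np.inf
    for d in range(d_max + 1):
        xr = xs - d                      # matching columns in the right view
        valid = xr >= 0
        if not valid.any():
            continue
        cost = np.abs(left[ys[valid], xs[valid]].astype(float) -
                      right[ys[valid], xr[valid]].astype(float)).mean()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```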


Fuhr G.,Federal University of Rio Grande do Sul | Fickel G.P.,Federal University of Rio Grande do Sul | Dal'Aqua L.P.,Federal University of Rio Grande do Sul | Jung C.R.,Federal University of Rio Grande do Sul | And 2 more authors.
2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings | Year: 2013

Stereo matching has a long history in image processing and computer vision. Numerous approaches have been reported in the literature, and quantitative evaluation is usually performed by comparing the obtained disparity maps with ground truth data (using the MSE, for instance). One important application of stereo matching is view interpolation, where the goal is to produce a new synthetic view from (at least) a pair of images and the corresponding disparity maps. In view interpolation, evaluation is mostly qualitative (visual quality of the synthesized image), and quantitative approaches compute objective similarity metrics between the synthesized image and the actual image at the same position (e.g. PSNR). The main goal of this paper is to evaluate the impact of several different stereo matching algorithms in a view interpolation context, relating the quality of the disparity maps to the quality of the corresponding synthesized views using standardized datasets. Experiments using the MPEG reference software for view interpolation and more than twenty datasets are presented and discussed. Our results indicate that the commonly used percentage of bad pixels, as a metric for stereo matching methods, does not translate well to the quality of view interpolation. © 2013 IEEE.
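The two metrics contrasted in this abstract, written out in their usual textbook form (the error threshold and intensity range below are assumed values):

```python
import numpy as np

def bad_pixel_percentage(disp, disp_gt, tau=1.0):
    """Percentage of pixels whose disparity error exceeds tau."""
    return 100.0 * float(np.mean(np.abs(disp - disp_gt) > tau))

def psnr(synthesized, reference, max_val=255.0):
    """Peak signal-to-noise ratio between a synthesized view and the real
    image captured at the same camera position."""
    mse = np.mean((synthesized.astype(float) - reference.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```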


Fan J.,HP Labs Palo Alto | Luo P.,HP Labs China | Lim S.H.,HP Labs Palo Alto | Liu S.,HP Labs Palo Alto | And 2 more authors.
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining | Year: 2011

Many people use the Web as the main source of information in their daily lives. However, most web pages contain non-informative components such as side bars, footers, headers, and advertisements, which are undesirable for certain applications like printing. We demonstrate a system that automatically extracts the informative content from news- and blog-like web pages. In contrast to many existing methods that are limited to identifying only the text or the bounding rectangular region, our system identifies not only the content but also the structural roles of its components, such as title, paragraphs, images and captions. The structural information enables re-layout of the content in a pleasing way. Besides the article text extraction, our system includes the following components: 1) print-link detection to identify the URL link for printing, and to use it for more reliable analysis and recognition; 2) title detection incorporating both visual cues and HTML tags; 3) image and caption detection utilizing extensive visual cues; 4) multiple-page and next-page URL detection. The performance of our system has been thoroughly evaluated using a human-labeled ground truth dataset consisting of 2000 web pages from 100 major web sites, on which we show accurate results. Copyright 2011 ACM.
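A rough sketch of how the title-detection component (item 2 above) might combine visual cues and HTML tags; the candidate representation, feature weights, and length heuristic are assumptions made for illustration:

```python
def title_score(node):
    """node: dict with 'tag', 'font_size' (px), 'y' (distance from page top, px),
    and 'text'."""
    score = float(node["font_size"])               # visually prominent text
    if node["tag"] in ("h1", "h2", "title"):       # markup hint
        score += 10.0
    score -= 0.01 * node["y"]                      # titles tend to sit near the top
    if not (1 <= len(node["text"].split()) <= 30): # discard implausible lengths
        score -= 100.0
    return score

def detect_title(candidates):
    return max(candidates, key=title_score)["text"] if candidates else None
```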


Lukowicz P.,University of Passau | Baker M.G.,HP Labs Palo Alto | Paradiso J.,Massachusetts Institute of Technology
IEEE Pervasive Computing | Year: 2010

Pervasive computing technology can save lives by both eliminating the need for humans to work in hostile environments and supporting them when they do. In general, environments that are hazardous to humans are hard on technology as well. This issue contains three articles and a Spotlight column that illustrate the challenges of designing this technology and implementing it in hostile environments. © 2010 IEEE.
