
Cooper M., FX Palo Alto Laboratory
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2013

Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and the lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher-precision video retrieval than automatically recovered spoken text. © 2013 SPIE and IS&T.
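As an illustration of the slide-text pipeline sketched above, the following is a minimal example that samples video frames at a fixed interval, applies OCR to recover on-screen text, and builds an inverted index for retrieval. It is a rough approximation under stated assumptions, not the paper's method: the paper first detects slide frames before OCR, whereas this sketch OCRs every sampled frame, and the function names, sampling interval, and whitespace tokenizer are all illustrative.

```python
# Minimal sketch: periodic frame sampling + OCR + inverted index.
# Assumes OpenCV (cv2) and pytesseract are installed; not the paper's pipeline.
import cv2
import pytesseract
from collections import defaultdict

def extract_slide_text(video_path, every_n_seconds=5):
    """Sample frames at a fixed interval and OCR each one.
    Returns a list of (timestamp_seconds, recognized_text) pairs."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = int(fps * every_n_seconds)
    segments, frame_idx = [], 0
    while True:
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR works best on grayscale
        text = pytesseract.image_to_string(gray).strip()
        if text:
            segments.append((frame_idx / fps, text))
        frame_idx += step
    cap.release()
    return segments

def build_index(corpus):
    """Inverted index mapping each term to its (video, timestamp) hits;
    a retrieval query then reduces to set intersection over query terms."""
    index = defaultdict(set)
    for video, segments in corpus.items():
        for ts, text in segments:
            for term in text.lower().split():
                index[term].add((video, ts))
    return index
```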


Kumar J., University of Maryland, College Park | Chen F., FX Palo Alto Laboratory | Doermann D., University of Maryland, College Park
Proceedings - International Conference on Pattern Recognition | Year: 2012

Images of document pages have different characteristics than images of natural scenes, so sharpness measures developed for natural scene images do not necessarily extend to document images composed primarily of text. We present a simple and efficient method for effectively estimating the sharpness/blurriness of document images that also performs well on natural scenes. Our method can be used to predict sharpness in scenarios where images are blurred due to camera motion (or hand shake), defocus, or inherent properties of the imaging system. The proposed method outperforms the perceptually based, no-reference sharpness work of [1] and [4], which was shown to perform better than 14 other no-reference sharpness measures on the LIVE dataset. © 2012 ICPR Org Committee.
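The abstract does not describe the measure itself, so the sketch below substitutes a widely used baseline, the variance of the Laplacian, to illustrate no-reference sharpness scoring of an image; it is explicitly not the paper's method.

```python
# Minimal no-reference sharpness sketch using variance of the Laplacian,
# a common baseline -- NOT the paper's measure. Assumes OpenCV is installed.
import cv2

def sharpness_score(image_path):
    """Higher scores indicate sharper images; blurred images score low,
    because blurring suppresses the high-frequency response of the Laplacian."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Usage: rank a set of document photos from sharpest to blurriest.
# ranked = sorted(paths, key=sharpness_score, reverse=True)
```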


Chandra S., FX Palo Alto Laboratory
IEEE Transactions on Mobile Computing | Year: 2012

This paper describes the design and implementation of a file system-based distributed authoring system for campus-wide workgroups. We focus on documents for which changes by different group members are hard to automatically reconcile into a single version. Prior approaches relied on group-aware editors; others built collaborative middleware that allowed group members to use traditional authoring tools. These approaches relied on an ability to automatically detect conflicting updates, and they operated only on specific document types. Instead, our system relies on users to moderate and reconcile updates by other group members, and our file system-based approach allows group members to modify any document type. We maintain one updateable copy of the shared content on each group member's node and hoard read-only copies of each of these updateable copies on any interested group member's node. All these copies are propagated to other group members at a rate dictated solely by wireless user availability. The various copies are reconciled using the moderation operation: each group member manually incorporates updates from all the other group members into their own copy. The various document versions eventually converge into a single version through successive moderation operations. The system assists with this convergence process by using the made-with knowledge of all causal file system reads of contents from other replicas. An analysis using long-term wireless user availability traces from a university shows the strength of our asynchronous and distributed update propagation mechanism. Our user-space file system prototype exhibits acceptable file system performance, and a subjective evaluation showed that the moderation operation was intuitive for students. © 2012 IEEE.
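A minimal sketch of the moderation model described above, assuming a single shared document per workgroup; the class and method names are illustrative, not the system's actual interface.

```python
# Illustrative model only: one updateable local copy per member, plus
# hoarded read-only peer copies that the user manually folds in.
class MemberReplica:
    """One group member's view of the shared document."""

    def __init__(self, member_id):
        self.member_id = member_id
        self.local_copy = ""   # the only copy this member may edit
        self.hoarded = {}      # peer_id -> read-only copy of that peer's version

    def receive(self, peer_id, content):
        """Asynchronous propagation: a peer's copy arrives whenever wireless
        availability allows, and is stored read-only."""
        self.hoarded[peer_id] = content

    def moderate(self, merge):
        """Manual reconciliation: the user-supplied merge(local, peer) decides
        how each peer's updates are folded into the local copy. Successive
        moderation by every member drives all copies toward one version."""
        for peer_copy in self.hoarded.values():
            self.local_copy = merge(self.local_copy, peer_copy)
```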


Chandra S., FX Palo Alto Laboratory
IEEE Transactions on Learning Technologies | Year: 2011

The ability of lecture videos to capture the different modalities of a class interaction makes them a good review tool. Multimedia-capable devices are ubiquitous among contemporary students, and many lecturers are leveraging this popularity by distributing videos of their lectures. They depend on the university to provide the video capture infrastructure. Some universities use trained videographers; though they produce excellent videos, these efforts are expensive. Several research projects automate the video capture, but these research prototypes are not readily deployable because of organizational constraints. Rather than waiting for the university to provide the necessary infrastructure, we show that instructors can personally capture lecture videos using off-the-shelf components. Consumer-grade high-definition cameras and powerful personal computers allow instructor-captured lecture videos to be as effective as those captured by the university. However, instructors must spend their own time on the various steps of the video capture workflow, and because they are untrained in media capture, the capture mechanisms must be simple. Based on our experience capturing lecture videos over three and a half years, we describe the technical challenges encountered in this endeavor. For instructors who accept the educational value of distributing lecture videos, we show that the effort required to capture and process the videos is modest. However, most existing campus storage and distribution options are unsuitable for the resource demands imposed by video distribution. We describe the strengths of several viable distribution alternatives. Instructors should work with campus information technology personnel to design a distribution mechanism that considers the network location of the students. © 2011 IEEE.


Aumi M.T.I., University of Washington | Kratz S., FX Palo Alto Laboratory
MobileHCI 2014 - Proceedings of the 16th ACM International Conference on Human-Computer Interaction with Mobile Devices and Services | Year: 2014

Secure authentication with devices or services that store sensitive personal information is highly important. However, traditional password- and PIN-based authentication methods trade security against user experience. AirAuth is a biometric authentication technique that uses in-air gesture input to authenticate users. We evaluated our technique on a predefined (simple) gesture set, and our classifier achieved an average accuracy of 96.6% in an equal error rate (EER) based study. We obtained an accuracy of 100% when exclusively using personal (complex) user gestures. In a further user study, we found that AirAuth is highly resilient to video-based shoulder-surfing attacks, with a measured false acceptance rate of just 2.2%. Furthermore, a longitudinal study demonstrates AirAuth's repeatability and accuracy over time. AirAuth is simple and robust, requires little computational power, and is hence deployable on embedded or mobile hardware. Unlike traditional authentication methods, our system's security is positively aligned with user-rated pleasure and excitement levels. In addition, AirAuth attained acceptability ratings in personal, office, and public settings comparable to those of an existing stroke-based on-screen authentication technique. Based on the results presented in this paper, we believe that AirAuth shows great promise as a novel, secure, ubiquitous, and highly usable authentication method.
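The abstract does not specify AirAuth's classifier, so the sketch below stands in a common template-matching scheme for gesture traces: dynamic time warping (DTW) distance to enrolled templates, with an acceptance threshold that would be tuned toward the desired equal error rate. All names here are illustrative, not the authors' implementation.

```python
# Illustrative template-matching sketch for in-air gesture authentication;
# NOT AirAuth's actual classifier. Assumes NumPy is installed.
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture traces,
    each an (n_samples, n_dims) array of tracked positions."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local sample distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def authenticate(probe, templates, threshold):
    """Accept the probe gesture if it is close enough to any enrolled
    template; the threshold would be tuned to the target equal error rate."""
    return min(dtw_distance(probe, t) for t in templates) <= threshold
```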
