Boston, MA, United States

Bagheri A.,Isfahan University of Technology | Saraee M.,University of Salford | De Jong F.,INTERACTION MEDIA GROUP
Knowledge-Based Systems | Year: 2013

With the rapid growth of user-generated content on the internet, automatic sentiment analysis of online customer reviews has recently become a hot research topic. However, due to the variety and wide range of products and services being reviewed on the internet, supervised and domain-specific models are often not practical. As the number of reviews expands, it is essential to develop an efficient sentiment analysis model that is capable of extracting product aspects and determining the sentiments for these aspects. In this paper, we propose a novel unsupervised and domain-independent model for detecting explicit and implicit aspects in reviews for sentiment analysis. In the model, first, a generalized method is proposed to learn multi-word aspects, and a set of heuristic rules is employed to take into account the influence of an opinion word on detecting the aspect. Second, a new metric based on mutual information and aspect frequency is proposed to score aspects within a new iterative bootstrapping algorithm. The presented bootstrapping algorithm works with an unsupervised seed set. Third, two pruning methods based on the relations between aspects in reviews are presented to remove incorrect aspects. Finally, the model employs an approach that uses explicit aspects and opinion words to identify implicit aspects. Using an extracted polarity lexicon, the approach maps each opinion word in the lexicon to the set of pre-extracted explicit aspects with a co-occurrence metric. The proposed model was evaluated on a collection of English product review datasets. The model does not require any labeled training data and can be easily applied to other languages or other domains such as movie reviews. Experimental results show considerable improvements of our model over conventional techniques, including unsupervised and supervised approaches. © 2013 Elsevier B.V. All rights reserved.
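
The abstract does not spell out the scoring formula, but a minimal sketch of the general idea it describes (scoring candidate aspect terms by combining a mutual-information-style association with a small unsupervised set of opinion seed words and the aspect's frequency) might look as follows. The function name, the log-based PMI form, the co-occurrence window and the seed handling are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter
from itertools import chain

def score_aspects(reviews, seed_opinions, window=5):
    """Score candidate aspect terms by combining co-occurrence with opinion
    seed words (a PMI-style association) with raw aspect frequency.

    `reviews` is a list of tokenized sentences; `seed_opinions` is a small,
    unsupervised set of opinion words (e.g. "good", "bad"). The formula and
    the windowing are illustrative, not the paper's exact metric."""
    term_freq = Counter(chain.from_iterable(reviews))
    total = sum(term_freq.values())

    # Count how often each candidate term appears near an opinion seed word.
    cooc = Counter()
    for tokens in reviews:
        for i, tok in enumerate(tokens):
            if tok in seed_opinions:
                for other in tokens[max(0, i - window): i + window + 1]:
                    if other not in seed_opinions:
                        cooc[other] += 1

    opinion_freq = max(sum(term_freq[w] for w in seed_opinions), 1)
    scores = {}
    for term, freq in term_freq.items():
        if term in seed_opinions or cooc[term] == 0:
            continue
        pmi = math.log((cooc[term] * total) / (freq * opinion_freq))
        scores[term] = pmi * math.log(1 + freq)  # association weighted by frequency
    return scores
```

In the iterative bootstrapping the abstract mentions, the highest-scoring candidates would presumably be promoted and the scoring repeated; the sketch above shows only a single pass.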


Minuto A.,INTERACTION MEDIA GROUP | Nijholt A.,INTERACTION MEDIA GROUP
Proceedings of the 1st Workshop on Smart Material Interfaces: A Material Step to the Future, SMI 2012 | Year: 2012

In this paper we present the design process and development of "Follow the Grass", our smart-material interactive pervasive display, together with a detailed technical explanation. We present the design steps and prototypes, with instructions for the use of smart materials (NiTiNOL) to create interaction. Copyright 2012 ACM.


Dadvar M.,INTERACTION MEDIA GROUP | De Jong F.,INTERACTION MEDIA GROUP
WWW'12 - Proceedings of the 21st Annual Conference on World Wide Web Companion | Year: 2012

With the advent of social networks, friendships, relationships and social communication have all moved to a new level, with new definitions. One may have hundreds of friends without ever seeing their faces. Meanwhile, alongside this transition, there is increasing evidence that online social applications are being used by children and adolescents for bullying. State-of-the-art studies in cyberbullying detection have mainly focused on the content of the conversations while largely ignoring the users involved. We propose that incorporating the users' information, their characteristics, and their post-harassing behaviour - for instance, posting a new status in another social network as a reaction to a bullying experience - will improve the accuracy of cyberbullying detection. Cross-system analyses of the users' behaviour - monitoring their reactions in different online environments - can facilitate this process and provide information that could lead to more accurate detection of cyberbullying. Copyright is held by the International World Wide Web Conference Committee (IW3C2).
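
As a rough illustration of the idea argued for above (combining the content of a message with user-level information in a single classifier), consider the sketch below. The user attributes, the toy data, the scikit-learn pipeline and the logistic-regression choice are all assumptions made for the example, not the authors' system.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: message text plus hypothetical user-level attributes
# (here: age and a prior-harassment count); labels mark bullying (1) vs. not (0).
texts = ["you are so stupid", "see you at practice tomorrow"]
user_feats = np.array([[14.0, 3.0], [15.0, 0.0]])
labels = np.array([1, 0])

vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(texts)
X = hstack([X_text, csr_matrix(user_feats)])  # content + user features side by side

clf = LogisticRegression().fit(X, labels)

# Scoring a new message from a user with known attributes.
new_text = vectorizer.transform(["nobody likes you"])
new_user = csr_matrix(np.array([[13.0, 5.0]]))
print(clf.predict(hstack([new_text, new_user])))
```

Cross-system signals, such as activity observed on another network, would enter a model like this in the same way: as extra columns alongside the text features.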


Poppe R.,INTERACTION MEDIA GROUP
Image and Vision Computing | Year: 2010

Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research. © 2009 Elsevier B.V. All rights reserved.


Poppe R.,INTERACTION MEDIA GROUP
Pattern Recognition | Year: 2012

Automatically naming faces in online social networks enables us to search for photos and build user face models. We consider two common weakly supervised settings, where (1) users are linked to photos, not to faces, and (2) photos are not labeled but are part of a user's album. The focus is on algorithms that scale up to an entire online social network. We extensively evaluate different graph-based strategies to label faces in both settings and consider dependencies. We achieve results on a par with a recent multi-person approach, but with 60 times less computation time, on a set of 300K weakly labeled faces and 1.4M faces in user albums. A subset of the faces can be labeled with a speed-up of over three orders of magnitude. © 2011 Elsevier Ltd. All rights reserved.
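
The abstract only names the two weakly supervised settings; purely as an illustration of setting (1), where a photo's user links constrain which names its faces can take, a toy nearest-neighbour heuristic is sketched below. The data layout, the cosine similarity and the threshold are assumptions for the example; the paper itself evaluates graph-based strategies.

```python
import numpy as np

def label_faces(photos, labeled, threshold=0.7):
    """Weakly supervised face labeling, setting (1): each photo comes with the
    set of users linked to it, but the links are not tied to individual faces.

    `photos`  : list of (face_embeddings, candidate_names) pairs, where
                face_embeddings is an (n_faces, d) array.
    `labeled` : dict mapping a name to a reference embedding of a face that
                has already been identified for that user.
    Returns a list of (photo_index, face_index, name) assignments."""
    assignments = []
    for p, (faces, candidates) in enumerate(photos):
        for f, emb in enumerate(faces):
            best_name, best_sim = None, threshold
            for name in candidates:
                if name not in labeled:
                    continue
                ref = labeled[name]
                # Cosine similarity between this face and the user's reference face.
                sim = float(emb @ ref / (np.linalg.norm(emb) * np.linalg.norm(ref)))
                if sim > best_sim:
                    best_name, best_sim = name, sim
            if best_name is not None:
                assignments.append((p, f, best_name))
    return assignments
```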


Poppe R.,INTERACTION MEDIA GROUP | Truong K.P.,INTERACTION MEDIA GROUP | Heylen D.,INTERACTION MEDIA GROUP
Autonomous Agents and Multi-Agent Systems | Year: 2013

Artificial listeners are virtual agents that can listen attentively to a human speaker in a dialog. In this paper, we present two experiments in which we investigate the perception of rule-based backchannel strategies for artificial listeners. In both, we collect subjective judgements from humans who observe a video of a speaker together with a corresponding animation of an artificial listener. In the first experiment, we evaluate six rule-based strategies that differ in the types of features (e.g. prosody, gaze) they consider. The ratings are given at the level of a speech turn and can be considered a measure of how human-like the generated listening behavior is perceived to be. In the second experiment, we systematically investigate the effect of the quantity, type and timing of backchannels within the discourse of the speaker. Additionally, we asked human observers to press a button whenever they thought a generated backchannel occurrence was inappropriate. Together, both experiments give insight into the factors, from both an observation and a generation point of view, that influence the perception of backchannel strategies for artificial listeners. © 2013 The Author(s).
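
The abstract does not list the rules themselves; purely to make the notion of a rule-based strategy concrete, a sketch of one hypothetical prosody-based rule (propose a backchannel when the speaker falls silent after a stretch of falling pitch) is given below. The feature format, thresholds and trigger condition are invented for the example.

```python
def backchannel_moments(frames, pause_ms=400, pitch_drop_hz=15.0):
    """Toy rule-based backchannel trigger.

    `frames` is a list of per-frame feature dicts, e.g.
    {"t": time_in_ms, "speaking": bool, "pitch": pitch_in_hz_or_None}.
    A backchannel is proposed when the speaker has been silent for `pause_ms`
    after a region of falling pitch. All thresholds are illustrative only."""
    triggers = []
    silence_start, last_pitch, falling = None, None, False
    for fr in frames:
        if fr["speaking"]:
            if last_pitch is not None and fr["pitch"] is not None:
                falling = (last_pitch - fr["pitch"]) > pitch_drop_hz
            last_pitch = fr["pitch"]
            silence_start = None
        else:
            if silence_start is None:
                silence_start = fr["t"]
            elif falling and fr["t"] - silence_start >= pause_ms:
                triggers.append(fr["t"])  # propose a nod or short vocalization here
                silence_start, falling = None, False
    return triggers
```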


Poppe R.,INTERACTION MEDIA GROUP
2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011 | Year: 2011

Face labeling is the process of assigning names to faces. In this paper, we start from a weakly supervised setting where names are linked to photos, not faces. We introduce two face labeling strategies that scale well to large data sets and allow for labeling parts thereof. This is an especially useful property for data sets in which photos are frequently added or (re)labeled. We evaluate our strategies and two related face labeling strategies on a novel corpus of 34,763 faces, gathered from an online social network for dance party visitors. We achieve a speed-up of an order of magnitude over the state-of-the-art approach while the labeling quality is almost unaffected. On a subset of the faces, the speed-up is even more apparent, reaching at least two orders of magnitude. © 2011 IEEE.


Lohse M.,INTERACTION MEDIA GROUP
Proceedings - IEEE International Workshop on Robot and Human Interactive Communication | Year: 2012

More and more robots with social capabilities, such as speaking or perceiving users, are being developed. However, little is known about whether these systems are actually treated in a social manner by their users. In particular, there is a lack of data about how users treat robots when other users are present. Therefore, this paper presents an analysis of users' linguistic behavior in interaction with a social robot. It aims to answer the questions of what users' verbal behavior tells us about their perception of the system and how completing a task in a group with another user influences their behavior towards a robot. In contrast to previous work, the paper does not entirely support the hypothesis that users' linguistic behavior is mainly determined by their preconceptions. Instead, we found that interpersonal alignment between the participants is the main factor influencing behavior in the interaction. © 2012 IEEE.


Huisman G.,INTERACTION MEDIA GROUP
ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction | Year: 2012

This position paper outlines the first stages of an ongoing PhD project on mediated social touch and the effects mediated touch can have on someone's affective state. It is argued that touch is a profound communication channel for humans, and that communication through touch can, to some extent, occur through mediation. Furthermore, touch can be used to communicate emotions and can have immediate affective consequences. The design of an input device, consisting of twelve force-sensitive resistors, to study the communication of emotions through mediated touch is presented. A pilot study indicated that participants used the duration of touch and the force applied as ways to distinguish between different emotions. The paper concludes by discussing possible improvements to the input device, how the pilot study fits within the overall PhD project, and future directions for the project in general. Copyright 2012 ACM.
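
As a rough sketch of the duration and force cues mentioned above, the following shows how raw readings from the twelve force-sensitive resistors could be turned into simple per-touch features. The sampling rate, the contact threshold and the input format are assumptions for the example, not the actual device firmware.

```python
def touch_features(samples, rate_hz=100, threshold=0.05):
    """Extract duration and force features from a stream of FSR readings.

    `samples` is a sequence of 12-element lists (one normalized reading in
    [0, 1] per force-sensitive resistor per time step). A "touch" is any run
    of time steps in which at least one sensor exceeds `threshold`.
    Returns a list of (duration_s, mean_force, peak_force) tuples, one per touch."""
    touches, current = [], []
    for frame in samples:
        if max(frame) > threshold:
            current.append(frame)
        elif current:
            forces = [max(f) for f in current]
            touches.append((len(current) / rate_hz,
                            sum(forces) / len(forces),
                            max(forces)))
            current = []
    if current:  # a touch still in progress when the stream ends
        forces = [max(f) for f in current]
        touches.append((len(current) / rate_hz,
                        sum(forces) / len(forces),
                        max(forces)))
    return touches
```

Duration and peak force per touch are exactly the kinds of cues the pilot study reports participants using to distinguish emotions.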

