Waltham, MA, United States

Patent
Affectiva | Date: 2016-11-21

Analysis of mental state data is provided to enable video recommendations via affect. Analysis and recommendations are made for socially shared live-stream video. Video response is evaluated based on viewing and sampling various videos. Data, including facial information and/or physiological data, is captured for individual viewers and for groups of viewers. In some embodiments, demographic information is collected and used as a criterion for visualizing affect responses to videos. In some embodiments, data captured from an individual viewer or a group of viewers is used to rank videos.
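
As a rough illustration of the ranking step, the sketch below aggregates per-viewer affect scores by video and orders videos by their mean score. The sample data, the scalar smile-intensity score, and the use of a simple mean are assumptions for illustration, not the patented method.

```python
import statistics
from collections import defaultdict

# Hypothetical affect sample: a smile-intensity score in [0, 1] for one
# viewer at one moment while watching a given video.
affect_samples = [
    ("video_a", 0.8), ("video_a", 0.6),
    ("video_b", 0.3), ("video_b", 0.4),
]

def rank_videos(samples):
    """Rank videos by mean affect score across all viewer samples."""
    by_video = defaultdict(list)
    for video_id, score in samples:
        by_video[video_id].append(score)
    means = {v: statistics.mean(scores) for v, scores in by_video.items()}
    return sorted(means, key=means.get, reverse=True)

print(rank_videos(affect_samples))  # ['video_a', 'video_b']
```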


Patent
Affectiva | Date: 2015-01-15

Mental state data is gathered from a plurality of people and analyzed to determine mental state information. Metrics are generated from the mental state information gathered as the people view media presentations. Norms, defined as quantitative measures of the mental states of a plurality of people as they view a media presentation, are determined from these metrics. The norms can be determined based on various viewer criteria, including country of residence, demographic group, or the type of device on which the media presentation is viewed. Responses to new media are then compared against the norms to determine the effectiveness of the new media presentations.
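
A minimal sketch of what building and applying a norm could look like, assuming the norm is summarized as a per-group mean and standard deviation and that a new presentation is compared via a z-score; both choices, and the engagement values, are assumptions rather than the patent's method.

```python
import statistics

# Hypothetical per-viewer engagement metrics for an established media
# presentation, keyed by a demographic criterion (here, country).
historical = {
    "US": [0.62, 0.71, 0.58, 0.66],
    "JP": [0.48, 0.52, 0.45, 0.50],
}

def build_norms(metrics_by_group):
    """A norm here is the (mean, stdev) of the metric within each group."""
    return {g: (statistics.mean(v), statistics.stdev(v))
            for g, v in metrics_by_group.items()}

def compare_to_norm(norms, group, new_metric):
    """Z-score of a new presentation's metric against the group norm."""
    mean, stdev = norms[group]
    return (new_metric - mean) / stdev

norms = build_norms(historical)
print(compare_to_norm(norms, "US", 0.75))  # positive => above the norm
```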


Patent
Affectiva | Date: 2016-09-12

Mental state event signatures are used to assess how members of a specific social group react to various stimuli, such as video advertisements. The likelihood that a video will go viral is computed based on mental state event signatures. Automated facial expression analysis is used to determine an emotional response curve for viewers of a video, and the curve is used to derive a virality probability index: an indicator of a given video's propensity to go viral. The emotional response curves are processed according to various demographic criteria in order to account for cultural differences among demographic groups and geographic regions.
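
The abstract does not give the formula behind the virality probability index, so the sketch below derives a toy index from an emotional response curve as a weighted mix of peak and mean response; the curve values and the 0.6/0.4 weights are invented for illustration.

```python
def virality_index(response_curve):
    """
    Toy virality score from an emotional response curve: a weighted mix
    of peak response and mean response, yielding a value in [0, 1] when
    the curve samples are in [0, 1]. Not the patented formula.
    """
    peak = max(response_curve)
    mean = sum(response_curve) / len(response_curve)
    return 0.6 * peak + 0.4 * mean

curve = [0.1, 0.2, 0.7, 0.9, 0.5, 0.3]  # per-second expressiveness samples
print(f"virality index: {virality_index(curve):.2f}")  # 0.72
```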


Patent
Affectiva | Date: 2015-12-07

An individual can exhibit one or more mental states when reacting to a stimulus. A camera or other monitoring device can be used to collect, on an intermittent basis, mental state data including facial data. The mental state data can be interpolated between the intermittent collections. The facial data can be obtained from a series of images of the individual, where the images are captured intermittently. In addition to the individual's face, a second face can be identified in the images, and both faces can be tracked.
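
A minimal sketch of interpolating between intermittent collections, assuming the mental state data reduces to a scalar score per timestamp and that linear interpolation is acceptable; both are assumptions.

```python
def interpolate(samples, t):
    """
    Linearly interpolate a mental state score at time t from intermittent
    (timestamp, score) samples; clamps outside the sampled range.
    """
    samples = sorted(samples)
    if t <= samples[0][0]:
        return samples[0][1]
    if t >= samples[-1][0]:
        return samples[-1][1]
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Scores captured only at seconds 0, 4, and 10.
sparse = [(0.0, 0.2), (4.0, 0.8), (10.0, 0.5)]
print(interpolate(sparse, 6.0))  # 0.7, estimated between the 4s and 10s samples
```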


Patent
Affectiva | Date: 2015-03-16

Expression analysis is performed in response to a request for an expression. The expression is related to one or more mental states, including happiness, joy, satisfaction, and pleasure, among others. Images from one or more cameras capturing a user's attempt to provide the requested expression are received and analyzed. The analyzed images serve to gauge the person's response to the request. Based on that response, the person can be rewarded for the effectiveness of his or her mental state expression. The intensity of the expression can also be used as a factor in determining the reward. The reward can include, but is not limited to, a coupon, digital coupon, currency, or virtual currency.
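
As an illustration of the reward step, the toy rule below pays out only when the detected expression matches the requested one, scaled by detected intensity; the `compute_reward` function and its coupon values are hypothetical, not taken from the patent.

```python
def compute_reward(requested, detected, intensity, base_coupon=1.00):
    """
    Toy reward rule: pay out only when the detected expression matches
    the requested one, scaled by detected intensity in [0, 1].
    """
    if detected != requested:
        return 0.0
    return round(base_coupon * intensity, 2)

print(compute_reward("smile", "smile", 0.85))  # 0.85 coupon value
print(compute_reward("smile", "frown", 0.90))  # 0.0, wrong expression
```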


Patent
Affectiva | Date: 2016-02-01

Mental state data is collected as a person interacts with a game played on a machine. The mental state data includes facial data, where the facial data includes facial regions or facial landmarks; it can also include physiological data and actigraphy data. The mental state data is analyzed to produce mental state information. Mental state data and/or mental state information can be shared across a social network or a gaming community, and the affect of the person interacting with the game can be represented to that community in the form of an avatar. Recommendations based on the affect resulting from the analysis can be made to the person interacting with the game. Mental states are analyzed locally or via a web service. Based on the results of the analysis, the game with which the person is interacting is modified.
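
A minimal sketch of one way the game modification could work, assuming the analysis yields scalar frustration and boredom scores and that difficulty is the parameter being adjusted; the thresholds and the rule itself are illustrative.

```python
def adjust_difficulty(current_level, frustration, boredom):
    """
    Toy game-adaptation rule: ease off when the player looks frustrated,
    ramp up when they look bored. Scores are assumed to be in [0, 1].
    """
    if frustration > 0.7:
        return max(1, current_level - 1)
    if boredom > 0.7:
        return current_level + 1
    return current_level

# Frustration/boredom scores would come from the facial analysis step.
print(adjust_difficulty(current_level=3, frustration=0.8, boredom=0.1))  # 2
print(adjust_difficulty(current_level=3, frustration=0.1, boredom=0.9))  # 4
```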


Patent
Affectiva | Date: 2016-09-23

Facial expressions are evaluated for control of robots. One or more images of a face are captured and analyzed for mental state data, including a facial expression within an identified region of interest, and mental state information is generated. A context for the robot operation and a context for the individual are determined. The actions of the robot are then controlled based on the facial expressions and the generated mental state information. Displays, color, sound, motion, and voice response for the robot are controlled based on the facial expressions of one or more people.
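
As a sketch of the control step, the toy policy below maps a detected expression and an operating context to display, motion, and voice outputs; the policy table and context labels are invented for illustration, not the patented control scheme.

```python
def robot_response(expression, context):
    """
    Toy policy mapping a detected facial expression and an operating
    context to robot behaviors (display color, motion, voice).
    """
    policy = {
        ("smile", "greeting"): {"display": "green", "voice": "Hello!"},
        ("frown", "greeting"): {"motion": "back_off", "voice": "Is something wrong?"},
        ("neutral", "idle"):   {"display": "blue"},
    }
    # Fall back to a neutral display for unhandled combinations.
    return policy.get((expression, context), {"display": "blue"})

print(robot_response("smile", "greeting"))
```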


Patent
Affectiva | Date: 2015-09-08

Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The locations of the facial landmarks detected in the first video frame are then estimated for a future video frame. Both the detection of the facial landmarks and the estimation of their future locations are insensitive to rotation, orientation, scaling, or mirroring of the face.
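
A minimal sketch of the estimation steps, assuming landmarks are plain (x, y) points and that future locations follow a constant-velocity model; real systems would refine these estimates using local appearance around each landmark, as the abstract describes.

```python
def predict_landmarks(prev, curr):
    """
    Constant-velocity estimate of future landmark locations: each (x, y)
    landmark is advanced by its displacement between two frames.
    """
    return [(2 * xc - xp, 2 * yc - yp)
            for (xp, yp), (xc, yc) in zip(prev, curr)]

def bounding_box(landmarks):
    """Rough box containing all landmarks: (x_min, y_min, x_max, y_max)."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    return (min(xs), min(ys), max(xs), max(ys))

frame1 = [(100, 120), (140, 118), (120, 160)]  # e.g. eyes and mouth
frame2 = [(102, 121), (142, 119), (122, 161)]
print(predict_landmarks(frame1, frame2))  # estimated positions in frame 3
print(bounding_box(frame2))
```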


Patent
Affectiva | Date: 2016-03-04

Facial evaluation is performed on one or more videos captured from an individual viewing a display. The video is evaluated to determine whether the individual viewed the display. The individual views a media presentation that includes incorporated tags and is rendered on the display. Based on the tags, video of the individual is captured and evaluated using a classifier. The evaluation includes determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. An engagement score and emotional responses are determined for the media and images provided on the display.
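
As a sketch of how the three per-frame checks could combine into an engagement score, the toy function below treats them as weighted booleans; the weights and the linear combination are assumptions, not the patented scoring method.

```python
def engagement_score(in_front, facing, gazing, weights=(0.2, 0.3, 0.5)):
    """
    Toy engagement score in [0, 1]: a weighted sum of the three checks
    from the abstract (in front of, facing, and gazing at the screen).
    """
    checks = (in_front, facing, gazing)
    return sum(w * bool(c) for w, c in zip(weights, checks))

# A viewer present and facing the screen but glancing away:
print(engagement_score(in_front=True, facing=True, gazing=False))  # 0.5
```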


Patent
Affectiva | Date: 2015-07-10

Analysis of mental states is provided based on videos of a plurality of people experiencing various situations such as media presentations. Videos of the plurality of people are captured and analyzed using classifiers. Facial expressions of the people in the captured video are clustered based on set criteria. A unique signature for the situation to which the people are being exposed is then determined based on the expression clustering. In certain scenarios, the clustering is augmented by self-report data from the people. In embodiments, the expression clustering is based on a combination of multiple facial expressions.
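
A minimal sketch of deriving a situation signature from expression clustering, assuming each viewer's response is reduced to a single dominant-expression label; the actual method clusters richer facial expression data across time.

```python
from collections import Counter

def situation_signature(viewer_expressions):
    """
    Toy 'signature': the distribution of each viewer's dominant expression
    while exposed to the situation, as a normalized histogram.
    """
    counts = Counter(viewer_expressions)
    total = sum(counts.values())
    return {expr: round(n / total, 2) for expr, n in counts.items()}

# Dominant expression per viewer during one media presentation:
viewers = ["smile", "smile", "surprise", "smile", "neutral"]
print(situation_signature(viewers))  # {'smile': 0.6, 'surprise': 0.2, 'neutral': 0.2}
```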
