Waltham, MA, United States

Patent
Affectiva | Date: 2016-11-21

Analysis of mental state data is provided to enable video recommendations via affect. Analysis and recommendations are made for socially shared live-stream video. Video response is evaluated based on viewing and sampling various videos. Data is captured for viewers of a video, where the data includes facial information and/or physiological data, and this facial and physiological information is gathered across a group of viewers. In some embodiments, demographic information is collected and used as a criterion for visualizing affect responses to videos. In some embodiments, data captured from an individual viewer or a group of viewers is used to rank videos.
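As a rough illustration of affect-based ranking (a minimal sketch, not Affectiva's implementation; the `AffectSample` fields and the engagement-weighted scoring are assumptions), videos could be ordered by aggregating per-viewer affect scores:

```python
# Illustrative sketch: rank videos by aggregating per-viewer affect data.
# The sample fields and weighting scheme are hypothetical.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AffectSample:
    video_id: str
    viewer_id: str
    valence: float      # -1.0 (negative affect) .. 1.0 (positive affect)
    engagement: float   # 0.0 .. 1.0

def rank_videos(samples):
    """Rank videos by mean engagement-weighted valence across viewers."""
    per_video = defaultdict(list)
    for s in samples:
        per_video[s.video_id].append(s.valence * s.engagement)
    scores = {vid: sum(vals) / len(vals) for vid, vals in per_video.items()}
    return sorted(scores, key=scores.get, reverse=True)

samples = [
    AffectSample("vid_a", "u1", 0.8, 0.9),
    AffectSample("vid_a", "u2", 0.6, 0.7),
    AffectSample("vid_b", "u1", -0.2, 0.8),
]
print(rank_videos(samples))  # ['vid_a', 'vid_b']
```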


Analytics are used for live streaming based on image analysis within a shared digital environment. A group of images is obtained from a group of participants involved in an interactive digital environment. The interactive digital environment can be a shared digital environment, such as a gaming environment. Emotional content within the group of images is analyzed for a set of participants within the group of participants, and results of the analysis are provided to a second set of participants within the group. Analyzing the emotional content includes identifying an image of an individual, identifying the face of the individual, determining facial regions, and performing content evaluation by applying image classifiers.
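A minimal sketch of such a per-frame pipeline, with stub detectors and classifiers standing in for real models (every function name here is hypothetical, not an Affectiva API):

```python
# Hypothetical pipeline: detect a face, determine regions, apply classifiers,
# and share the results with a second set of participants.
def detect_face(frame):
    return frame.get("face")               # a real system would run a detector

def extract_regions(face):
    return {"mouth": face, "brows": face}  # a real system would localize these

def smile_classifier(regions):
    return 1.0 if regions["mouth"].get("corners_up") else 0.0

def analyze_frame(frame, classifiers):
    """Identify the face, determine facial regions, apply image classifiers."""
    face = detect_face(frame)
    if face is None:
        return None
    regions = extract_regions(face)
    return {name: clf(regions) for name, clf in classifiers.items()}

# Analyze one set of participants; results go to a second set of participants.
frames = {"p1": {"face": {"corners_up": True}}, "p2": {}}
results = {pid: analyze_frame(f, {"smile": smile_classifier})
           for pid, f in frames.items()}
print(results)  # {'p1': {'smile': 1.0}, 'p2': None}
```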


Images are analyzed using sub-sectional component evaluation in order to augment classifier usage. An image of an individual is obtained. The face of the individual is identified, and regions within the face are determined. The individual is evaluated as belonging to a sub-sectional component of a population based on a demographic or an activity. An evaluation of the content of the face is then performed based on that sub-sectional membership, which is used for disambiguating among content types for the content of the face. A Bayesian framework that includes a conditional probability is used to perform the evaluation, which is further based on a prior event that occurred.
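The Bayesian step can be illustrated with toy numbers (the likelihoods and subgroup priors below are invented for the example, not taken from the patent): the same raised-brow observation resolves to different content types under different subgroup priors.

```python
# Toy Bayesian disambiguation: P(type | obs) ∝ P(obs | type) * P(type | group).
def posterior(likelihoods, prior):
    """Combine per-type likelihoods with a subgroup prior and normalize."""
    unnorm = {t: likelihoods[t] * prior[t] for t in likelihoods}
    z = sum(unnorm.values())
    return {t: round(p / z, 3) for t, p in unnorm.items()}

# Assumed P(raised_brow | content type).
likelihoods = {"surprise": 0.7, "skepticism": 0.5}
# Subgroup-specific priors, e.g. conditioned on a prior event.
prior_group_a = {"surprise": 0.8, "skepticism": 0.2}
prior_group_b = {"surprise": 0.3, "skepticism": 0.7}

print(posterior(likelihoods, prior_group_a))  # surprise dominates
print(posterior(likelihoods, prior_group_b))  # skepticism dominates
```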


Image analysis is performed on data collected from a person who interacts with a rendering such as a website or video. The images are collected through video capture, and physiological data is extracted from them. Classifiers are used to analyze the images. Information is uploaded to a server and compared against a plurality of mental state event temporal signatures. Aggregated mental state information from other people who interact with the rendering is received, including video facial data analysis of those people. The received information is displayed along with the rendering through a visual representation such as an avatar.
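Signature comparison of this kind might be sketched as sliding-window correlation against stored templates (the template shapes, data, and function name below are illustrative assumptions, not the patent's method):

```python
# Match a captured affect time series against stored "mental state event
# temporal signatures" via sliding-window Pearson correlation.
import numpy as np

def best_signature_match(series, signatures):
    """Return (name, score) of the best-correlating signature template."""
    best = (None, -1.0)
    for name, template in signatures.items():
        n = len(template)
        for start in range(len(series) - n + 1):
            window = series[start:start + n]
            score = np.corrcoef(window, template)[0, 1]
            if score > best[1]:
                best = (name, score)
    return best

signatures = {
    "smile_onset": np.array([0.0, 0.2, 0.6, 0.9, 1.0]),
    "brow_furrow": np.array([0.0, 0.5, 1.0, 0.5, 0.0]),
}
captured = np.array([0.1, 0.1, 0.0, 0.2, 0.55, 0.9, 1.0, 0.8])
print(best_signature_match(captured, signatures))  # smile_onset matches best
```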


Patent
Affectiva | Date: 2016-12-16

Analysis of mental states is provided using web servers to enable data analysis. Data is captured for an individual, where the data includes facial information and physiological information. The captured data is compared against a plurality of mental state event temporal signatures. Analysis is performed by a web service and the results are received. The mental states of other people are correlated with the mental state of the individual. Other sources of information are aggregated and used to analyze the mental state of the individual. The analysis of the mental state of the individual or group of individuals is rendered for display.
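A minimal sketch of the correlation-and-aggregation step, assuming affect traces sampled on a common timeline (all values invented):

```python
# Correlate one individual's affect trace with other viewers' traces, then
# aggregate the group as one extra information source.
import numpy as np

individual = np.array([0.1, 0.4, 0.8, 0.6, 0.2])
others = {
    "viewer_2": np.array([0.0, 0.5, 0.7, 0.5, 0.1]),
    "viewer_3": np.array([0.9, 0.6, 0.2, 0.3, 0.8]),
}

# Pearson correlation of the individual against each other viewer.
corrs = {name: round(float(np.corrcoef(individual, trace)[0, 1]), 3)
         for name, trace in others.items()}
print(corrs)  # viewer_2 tracks the individual closely; viewer_3 does not

# Aggregate the group (mean trace) for downstream analysis.
print(np.mean(list(others.values()), axis=0))
```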


Image content is analyzed in order to present an associated representative expression. Images of one or more individuals are obtained, and processors are used to identify the faces of the individuals in the images. Facial features are extracted from the identified faces and facial landmark detection is performed. Classifiers are used to map the facial landmarks to emotional content, and the identified landmarks are translated into a representative icon based on those classifiers. A set of emoji can be imported and the representative icon selected from that set, with the selection based on emotion content analysis of the face. The selected emoji can be a static, animated, or cartoon representation of emotion, and individuals can share it by insertion into email, texts, and social sharing websites.
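A toy version of the final selection step, assuming the classifier stage has already produced per-emotion scores (the emotion-to-emoji table and the 0.5 threshold are assumptions):

```python
# Map classifier emotion scores to a representative emoji from an imported set.
EMOJI_SET = {"joy": "😀", "surprise": "😮", "sadness": "😢", "neutral": "😐"}

def select_emoji(emotion_scores, threshold=0.5):
    """Pick the emoji for the top-scoring emotion, else fall back to neutral."""
    top, score = max(emotion_scores.items(), key=lambda kv: kv[1])
    return EMOJI_SET[top] if score >= threshold else EMOJI_SET["neutral"]

# Scores as a landmark-based classifier might produce them.
print(select_emoji({"joy": 0.82, "surprise": 0.11, "sadness": 0.04}))  # 😀
print(select_emoji({"joy": 0.30, "surprise": 0.35, "sadness": 0.20}))  # 😐
```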


Patent
Affectiva | Date: 2016-09-12

Mental state event signatures are used to assess how members of a specific social group react to various stimuli, such as video advertisements. The likelihood that a video will go viral is computed based on mental state event signatures. Automated facial expression analysis is used to determine an emotional response curve for viewers of a video, and the curve is used to derive a virality probability index: an indicator of a given video's propensity to go viral. The emotional response curves are processed according to various demographic criteria in order to account for cultural differences among demographic groups and geographic regions.
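One plausible shape for such an index, with invented curve features and weights (the patent does not disclose this formula):

```python
# Toy virality probability index derived from an emotional response curve.
import numpy as np

def virality_index(response_curve, demographic_weight=1.0):
    """Map a per-second emotional response curve to a 0..1 index."""
    curve = np.asarray(response_curve, dtype=float)
    peak = curve.max()                     # strongest single reaction
    mean = curve.mean()                    # overall engagement level
    late = curve[len(curve) // 2:].mean()  # did interest hold to the end?
    raw = 0.5 * peak + 0.3 * mean + 0.2 * late
    return min(1.0, raw * demographic_weight)

curve = [0.1, 0.3, 0.7, 0.9, 0.8, 0.6]
print(round(virality_index(curve), 3))
# A per-group weight stands in for the demographic/cultural adjustment.
print(round(virality_index(curve, demographic_weight=1.2), 3))
```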


Patent
Affectiva | Date: 2016-02-01

Mental state data is collected as a person interacts with a game played on a machine. The mental state data includes facial data, where the facial data includes facial regions or facial landmarks; it can also include physiological data and actigraphy data. The mental state data is analyzed to produce mental state information. Mental state data and/or mental state information can be shared across a social network or a gaming community, and the affect of the person interacting with the game can be represented to that network or community in the form of an avatar. Recommendations based on the affect resulting from the analysis can be made to the person interacting with the game. Mental states are analyzed locally or via a web service, and based on the results of the analysis, the game with which the person is interacting is modified.
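A minimal sketch of the game-modification step, assuming the analysis yields per-state scores in 0..1 (the state names and thresholds are assumptions):

```python
# Hypothetical game-loop hook: adjust difficulty from analyzed mental state.
def adjust_difficulty(current_level, mental_state):
    """Ease off for frustration, raise the stakes for boredom."""
    if mental_state.get("frustration", 0.0) > 0.7:
        return max(1, current_level - 1)   # player is struggling
    if mental_state.get("boredom", 0.0) > 0.7:
        return current_level + 1           # player is disengaged
    return current_level

# Scores as local or web-service analysis might return them.
print(adjust_difficulty(3, {"frustration": 0.8}))  # 2
print(adjust_difficulty(3, {"boredom": 0.9}))      # 4
```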


Patent
Affectiva | Date: 2016-09-23

Facial expressions are evaluated for control of robots. One or more images of a face are captured and analyzed for mental state data, including a determination of the facial expression within an identified region of interest, and mental state information is generated. A context for the robot's operation is determined, as is a context for the individual. The actions of the robot are then controlled based on the facial expressions and the generated mental state information. Displays, color, sound, motion, and voice response for the robot are controlled based on the facial expressions of one or more people.
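An illustrative expression-to-action mapping with a context gate (the table and gating rules are assumptions, not the patented control logic):

```python
# Map a detected expression to robot outputs, gated by robot/person context.
ACTIONS = {
    "smile": {"motion": "approach", "sound": "chime", "color": "green"},
    "frown": {"motion": "retreat", "sound": "soft_tone", "color": "blue"},
}

def control_robot(expression, robot_context, person_context):
    """Choose motion, sound, and color from a facial expression and context."""
    action = ACTIONS.get(expression)
    if action is None or robot_context == "charging":
        return {"motion": "idle"}            # context can veto the response
    if person_context == "sleeping":
        action = {**action, "sound": None}   # stay quiet around a sleeper
    return action

print(control_robot("smile", robot_context="active", person_context="awake"))
print(control_robot("frown", robot_context="active", person_context="sleeping"))
```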


Patent
Affectiva | Date: 2016-03-04

Facial evaluation is performed on one or more videos captured of an individual viewing a display. The video frames are evaluated to determine whether the display was viewed by the individual. The individual views a media presentation that includes incorporated tags and is rendered on the display. Based on the tags, video of the individual is captured and evaluated using a classifier. The evaluation includes determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. An engagement score and emotional responses are determined for the media and images provided on the display.
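A toy engagement score combining the three checks named above with how expressively the viewer reacted (the 0.7/0.3 weights are assumptions):

```python
# Gate on presence and facing direction, then blend gaze time with reaction.
def engagement_score(in_front, facing, gaze_fraction, expressiveness=0.0):
    """Return 0..1; zero unless the viewer is present and facing the screen."""
    if not (in_front and facing):
        return 0.0
    # gaze_fraction: share of tagged frames with on-screen gaze.
    return 0.7 * gaze_fraction + 0.3 * expressiveness

print(engagement_score(True, True, gaze_fraction=0.8, expressiveness=0.5))  # 0.71
print(engagement_score(True, False, gaze_fraction=0.8))                     # 0.0
```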
