Affectiva | Waltham, MA, United States

Video of one or more people is obtained and analyzed. Heart rate information is determined from the video and used in mental state analysis. The heart rate information and the resulting mental state analysis are correlated to stimuli, such as digital media that is consumed or with which a person interacts. The mental state analysis, inferred from the heart rate information, can be used to optimize digital media or modify a digital game.
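
The abstract leaves the extraction method unspecified; a common way to recover heart rate from face video is remote photoplethysmography (rPPG), which averages the green channel over a facial region in each frame, band-pass filters the resulting signal, and reads the pulse off the dominant spectral peak. The sketch below assumes such a per-frame signal has already been extracted; the function names and frequency bounds are illustrative, not the patented method.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (BPM) from a per-frame mean green-channel
    signal over a facial region (a common rPPG approach; the abstract
    does not specify the extraction method)."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()

    # Band-pass to plausible heart-rate frequencies (0.7-4.0 Hz, ~42-240 BPM).
    nyq = fps / 2.0
    b, a = butter(3, [0.7 / nyq, 4.0 / nyq], btype="band")
    filtered = filtfilt(b, a, signal)

    # The dominant spectral peak inside the pass band gives the pulse frequency.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    power = np.abs(np.fft.rfft(filtered)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0  # beats per minute

# Example: a synthetic 72 BPM pulse sampled at 30 fps for 10 seconds.
fps, bpm = 30.0, 72.0
t = np.arange(0, 10, 1.0 / fps)
fake_signal = np.sin(2 * np.pi * (bpm / 60.0) * t) + 0.1 * np.random.randn(t.size)
print(round(estimate_heart_rate(fake_signal, fps)))  # ~72
```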


Patent
Affectiva | Date: 2014-03-15

Mental state analysis is performed by obtaining video of an individual as the individual interacts with a computer, either by performing various operations or by consuming a media presentation. The video is analyzed to determine eye-blink information for the individual, such as eye-blink rate or eye-blink duration. A mental state of the individual is then inferred based on the eye-blink information. The eye-blink information and associated mental states can be used to modify an advertisement, a media presentation, or a digital game.
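
One widely used way to obtain blink rate and duration from video is the eye aspect ratio (EAR) computed over six eye landmarks, which drops sharply while the eye is closed. The following is a minimal sketch under that assumption; the threshold and landmark ordering are illustrative and not taken from the patent.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for six (x, y) eye landmarks in the common 68-point ordering;
    the ratio drops sharply while the eye is closed."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_stats(ear_series, fps, threshold=0.21):
    """Blink rate (blinks/minute) and mean blink duration (seconds) from
    a per-frame EAR series; the threshold is an illustrative value."""
    closed = np.asarray(ear_series) < threshold
    # Pad so every run of closed-eye frames has a clean start and end edge.
    padded = np.concatenate(([False], closed, [False]))
    edges = np.diff(padded.astype(int))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    durations = (ends - starts) / fps
    minutes = len(closed) / fps / 60.0
    rate = len(durations) / minutes
    mean_duration = float(durations.mean()) if len(durations) else 0.0
    return rate, mean_duration

# Example: a 10-second EAR series at 30 fps containing two short blinks.
ear = np.full(300, 0.30)
ear[50:55] = 0.10    # ~170 ms blink
ear[200:208] = 0.12  # ~270 ms blink
print(blink_stats(ear, fps=30.0))  # (12.0 blinks/min, ~0.22 s)
```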


Patent
Affectiva | Date: 2015-12-07

An individual can exhibit one or more mental states when reacting to a stimulus. A camera or other monitoring device can be used to collect, on an intermittent basis, mental state data including facial data. The mental state data can be interpolated between the intermittent collection points. The facial data can be obtained from a series of images of the individual, where the images are captured intermittently. A second face can be identified, and both the individual's face and the second face can be tracked.
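
As a concrete illustration of interpolating between intermittent collection points, the sketch below linearly interpolates sparse, timestamped mental state scores onto a uniform timeline. Linear interpolation is one simple choice; the abstract does not commit to a particular scheme, and the score values here are made up.

```python
import numpy as np

# Intermittently collected mental state scores (e.g. a valence metric in
# [-1, 1]) and the timestamps, in seconds, at which they were captured.
sample_times = np.array([0.0, 4.0, 9.0, 15.0, 22.0])
sample_scores = np.array([0.1, 0.6, 0.4, -0.2, 0.3])

# Interpolate between the intermittent samples onto a uniform timeline,
# producing one estimate per second.
timeline = np.arange(0.0, 23.0, 1.0)
estimated = np.interp(timeline, sample_times, sample_scores)

for t, v in zip(timeline, estimated):
    print(f"t={t:4.1f}s  estimated score={v:+.2f}")
```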


Patent
Affectiva | Date: 2015-07-10

Analysis of mental states is provided based on videos of a plurality of people experiencing various situations, such as media presentations. Videos of the plurality of people are captured and analyzed using classifiers. Facial expressions of the people in the captured video are clustered based on predetermined criteria. A unique signature for the situation to which the people are being exposed is then determined based on the expression clustering. In certain scenarios, the clustering is augmented by self-report data from the people. In some embodiments, the expression clustering is based on a combination of multiple facial expressions.
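
A minimal sketch of expression clustering, assuming per-frame classifier outputs are available as feature vectors: k-means groups the vectors, and the cluster-occupancy histogram serves as a simple signature for the situation. The feature channels, cluster count, and random data are illustrative, not the patented formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative stand-in for classifier outputs: one row per sampled frame
# across all viewers, one column per expression channel (e.g. smile,
# brow furrow, lip press). Real features would come from video analysis.
features = rng.random((500, 3))

# Cluster the expression vectors; the cluster-occupancy histogram acts as
# a simple "signature" for the media presentation (one plausible reading
# of the abstract, which does not fix the clustering method).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
signature = np.bincount(kmeans.labels_, minlength=4) / len(features)
print("expression-cluster signature:", np.round(signature, 3))
```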


Patent
Affectiva | Date: 2015-09-08

Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The locations of the facial landmarks detected in the first video frame are then estimated for a future video frame. Both the detection of the facial landmarks and the estimation of their future locations are insensitive to rotation, orientation, scaling, or mirroring of the face.
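
Two of the steps described, estimating a rough bounding box from the landmarks and predicting their locations in a future frame, can be illustrated compactly. The sketch below uses a constant-velocity prediction, which is one simple choice; the abstract does not specify the estimation scheme, and the coordinates stand in for real classifier detections.

```python
import numpy as np

def bounding_box(landmarks):
    """Rough axis-aligned box containing a set of (x, y) facial landmarks."""
    pts = np.asarray(landmarks, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max

def predict_next(prev_landmarks, curr_landmarks):
    """Estimate landmark locations in the next frame with a constant-
    velocity model (an illustrative choice; the abstract does not
    specify the prediction scheme)."""
    prev = np.asarray(prev_landmarks, dtype=float)
    curr = np.asarray(curr_landmarks, dtype=float)
    return curr + (curr - prev)

# Landmarks detected in two consecutive frames (hand-made coordinates
# standing in for real classifier detections).
frame1 = np.array([[100, 120], [140, 118], [120, 150]])
frame2 = np.array([[103, 121], [143, 119], [123, 151]])

print("bounding box:", bounding_box(frame2))
print("predicted next-frame landmarks:\n", predict_next(frame1, frame2))
```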
