Facial expression analysis

FaceReader Accuracy

FaceReader is the most reliable software tool for facial expression analysis.

FaceReader is the most robust automated system for the recognition of a number of specific properties in facial images, including the six basic or universal expressions: happy, sad, angry, surprised, scared, and disgusted. Additionally, FaceReader can recognize a 'neutral' state and analyze 'contempt'.

The software immediately analyzes your data (live, video, or still images), saving valuable time. The option to record audio as well as video makes it possible to hear what people have been saying – for example, during human-computer interactions, or while watching stimuli.


Customer testimonial

"FaceReader is a very friendly software that has improved the scope of our research, opening to new questions and to rethink our experiments."

Miguel Ibaceta, MSc. | Pontificia Universidad Catolica de Chile

Determine facial expressions in three steps with FaceReader

  1. Face finding – the position of the face in the image is detected accurately.
  2. Face modeling – the Active Appearance Model is used to synthesize an artificial face model, which describes the location of over 500 key points as well as the texture of the face. These outcomes are combined with the results of the Deep Face algorithm to achieve a higher classification accuracy. When face modeling is not successful (for example, when a hand covers the mouth but both eyes can still be found), the Deep Face algorithm, based on deep learning, takes over.
  3. Face classification – output is presented as the six basic expressions, contempt, and a neutral state.
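The three steps above can be sketched in code. This is an illustrative outline only: the function names, return values, and fallback logic are assumptions for the sake of the sketch, not FaceReader's actual API.

```python
def find_face(image):
    """Step 1: locate the face; returns a (stubbed) bounding box (x, y, w, h)."""
    return (120, 80, 200, 200)

def fit_key_points(image, box):
    """Step 2a: try to fit the face model; returns None on failure (e.g. occlusion)."""
    return {"left_eye": (160, 120), "right_eye": (240, 120)}

def deep_face_features(image, box):
    """Step 2b: deep-learning fallback used when model fitting fails."""
    return {"embedding": [0.1, 0.2, 0.3]}

def model_face(image, box):
    """Step 2: face modeling with a deep-learning fallback."""
    points = fit_key_points(image, box)
    return points if points is not None else deep_face_features(image, box)

def classify_face(features):
    """Step 3: map features to expression intensities (stubbed values)."""
    return {"happy": 0.80, "neutral": 0.15, "sad": 0.05}

def analyze(image):
    """Run the full three-step pipeline on one image."""
    box = find_face(image)
    features = model_face(image, box)
    return classify_face(features)
```

The key design point is the fallback in step 2: when key points cannot be placed, the deep-learning path still produces usable features, so classification can proceed.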

To save you valuable time when analyzing videos, FaceReader also automatically classifies:

  • mouth open-closed 
  • eyes open-shut 
  • eyebrows raised-neutral-lowered 
  • head orientation
  • gaze direction 

Additionally, FaceReader can classify faces based on the following characteristics: gender, age, ethnicity, and facial hair (beard and/or moustache). Other independent variables can be entered manually.
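Taken together, the classifications above amount to one structured record per analyzed frame. The sketch below shows one possible shape for such a record; the field names and types are assumptions for illustration, not FaceReader's actual export format.

```python
from dataclasses import dataclass

@dataclass
class FrameResult:
    """Illustrative per-frame analysis record (hypothetical fields)."""
    timestamp: float          # seconds into the video
    expressions: dict         # e.g. {"happy": 0.8, "sad": 0.05, ...}
    mouth_open: bool          # mouth open-closed
    eyes_open: bool           # eyes open-shut
    eyebrows: str             # "raised", "neutral", or "lowered"
    head_orientation: tuple   # (pitch, yaw, roll) in degrees
    gaze: tuple               # gaze direction as (x, y)
    age_estimate: int         # estimated age in years
    gender: str               # classified gender
```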

FaceReader is the complete solution for facial expression analysis!

Deep Learning: analyze faces under challenging circumstances

With its Deep Face Model classification engine, FaceReader can make sense of large amounts of complex data. What exactly does the Deep Face Model do?

The Deep Face Model makes use of deep learning, which is based on an artificial neural network with multiple layers between the input and the output. The network passes data through these layers, calculating the probability of each possible output.

Deep learning is currently the most successful artificial intelligence technique in machine learning. As in biological neural networks, information on the input side is collected and processed by interconnected neurons. Mapping input to output proceeds through a series of nonlinear computations, combining lower-level information into higher-level features (e.g. expressed emotion, age, gender).
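The layered, nonlinear mapping described above can be illustrated with a toy network. The weights below are arbitrary values chosen for the example, not a trained model.

```python
import math

def relu(x):
    """Nonlinearity applied between layers."""
    return [max(0.0, v) for v in x]

def linear(x, weights, biases):
    """One layer: each output neuron sums its weighted inputs plus a bias."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def softmax(x):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(v - max(x)) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

def forward(x):
    """Pass the input through a hidden layer and an output layer."""
    h = relu(linear(x, [[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]))
    scores = linear(h, [[1.0, -1.0], [-0.5, 0.7]], [0.0, 0.0])
    return softmax(scores)

probs = forward([1.0, 2.0])  # probability per output class
```

The nonlinearity between layers is what lets such a network combine low-level inputs into higher-level features; a purely linear stack would collapse into a single linear mapping.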

For more information about deep learning, we refer you to the articles by Gudi et al.

Components of FaceReader

FaceReader is a complete end-to-end solution that consists of several components.

FaceReader is used at over 700 sites worldwide. Depending on which emotion is measured, FaceReader's outcomes match the facial expressions scored manually by professional annotators between 91% and 100% of the time.


White paper
FaceReader methodology note

Request the FREE FaceReader methodology note to learn more about facial expression analysis theory.

  • Learn what FaceReader is and how it works
  • Learn how the calibration works
  • Get insight into the quality of analysis and output

FaceReader Demonstration

Curious what emotions your own face shows? In this demo, the facial expression of a person is automatically extracted from a single picture. Additionally, FaceReader is capable of extracting some personal characteristics, such as gender, facial hair, an age indication, and whether a person is wearing glasses. This online demonstration lets you analyze images containing a face by entering a URL or uploading a file.

Participant emotion analysis

Facial expressions can be visualized as bar graphs, in a pie chart, and as a continuous signal. A gauge display summarizes the negativity or positivity of the emotion (valence). The timeline gives you a detailed visual representation of the data. A separate reporting window displays a pie chart with percentages, a smiley, and a traffic light, indicating whether a person's mood is positive, neutral, or negative. All visualizations are shown in real time and can be reviewed afterwards. With the Project Analysis Module, advanced facial expression analysis has become available in FaceReader.
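The valence gauge and traffic-light summary can be sketched as follows. Note the hedge: computing valence as the positive intensity minus the strongest negative intensity, and the 0.2 threshold, are assumptions made for this illustration, not FaceReader's documented formula.

```python
# Negative expressions considered when computing valence (assumption).
NEGATIVE = ("sad", "angry", "scared", "disgusted")

def valence(expressions):
    """Summarize expression intensities (0..1 each) as a valence in [-1, 1]."""
    positive = expressions.get("happy", 0.0)
    negative = max(expressions.get(e, 0.0) for e in NEGATIVE)
    return positive - negative

def mood_light(v, threshold=0.2):
    """Map a valence score to the traffic-light summary (threshold is illustrative)."""
    if v > threshold:
        return "green"   # positive mood
    if v < -threshold:
        return "red"     # negative mood
    return "yellow"      # neutral mood
```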

Circumplex model of affect

The circumplex model of affect describes the distribution of emotions in a 2D circular space, containing arousal and valence dimensions. FaceReader offers a real-time representation of this model with the horizontal axis representing the valence dimension (pleasant - unpleasant) and the vertical axis representing the arousal dimension (active - inactive).

Facial expressions automatically measured with FaceReader can be represented at any level of valence and arousal. Circumplex models are commonly used to assess liking in marketing, consumer science, and psychology (Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39 (6), 1161).
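Placing a measured valence/arousal pair in the circumplex can be sketched as below. The quadrant labels follow conventional readings of Russell's model; the function names and example emotions are illustrative.

```python
import math

def circumplex_point(valence, arousal):
    """Return (angle in degrees, radius) of a point in the 2D circular space."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    radius = math.hypot(valence, arousal)
    return angle, radius

def quadrant(valence, arousal):
    """Label the quadrant; valence is horizontal, arousal is vertical."""
    if valence >= 0 and arousal >= 0:
        return "pleasant-active"      # e.g. happy, excited
    if valence < 0 and arousal >= 0:
        return "unpleasant-active"    # e.g. angry, scared
    if valence < 0:
        return "unpleasant-inactive"  # e.g. sad, bored
    return "pleasant-inactive"        # e.g. relaxed, calm
```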


Privacy by design

FaceReader is installed on-site and adheres to strict privacy-by-design protocols. For example, the software offers you the option not to record the test participant's face during the analysis. In that case, only metadata that cannot be related to an identifiable person are acquired from the recordings, such as facial expressions, head pose, age, and gender. For more details, please refer to our privacy policy.