
What is the Face Attributes Module?

Comprehensive Face Analysis with Features, Filters, and Detailed Results

Module Description

The Face Attributes Module in DeepVA provides detailed analysis of faces detected in images or videos. This module can identify a wide range of attributes such as age, gender, emotions, and specific facial features like the presence of glasses or facial hair. By analyzing these attributes, you can gain valuable insights about the people detected in your content, which can be useful for various applications like media analysis, audience insights, and contextual ad placements.

Face Recognition:
The Face Attributes module concentrates on detecting specific attributes, but it also makes use of our face recognition. For more sophisticated face recognition functions, use the Face Recognition module and combine it with Face Attributes.

Face attributes for contextual ad placements:
This data helps advertisers align ads more effectively with the audience’s mood, age, or gender, ensuring that the right message reaches the right viewers at the right time. For example, an ad for skincare products might be better placed in content where younger audiences or individuals expressing joy are detected, leading to higher engagement and relevance. By leveraging face attributes, broadcasters can automatically create more personalized, targeted advertising experiences.


How does it work?

  1. Select the Media File: Choose the media file you want to analyze.
  2. Activate the Face Attributes Module: In the left column, select the "Face Attributes" module.
  3. Start the Analysis: You can either add more modules or begin the analysis immediately by clicking "Start Analysis".

What Parameters are available?

This module has no parameters.


Displaying the Results:

Timeline:

The timeline, located below the player, displays the entire video runtime and the results from each module as gray bars.
  • By clicking on any of the gray result bars, you will see details such as:
    • Name of the Person / Unknown
    • Timecode (TC)
    • Exact frame numbers
    • Runtime/Duration
  • Clicking on a result moves the playhead to the beginning of that result.
  • These results are not identical to those provided by the API: the API additionally returns the geometric location of the face (bounding box) and a head pose estimate, while the user interface presents results in a more user-friendly, graphical format.
  • If there are multiple results, use your mouse wheel to scroll through the timeline.

Search Field:

Located in the top bar, the search field includes filter settings for refining your results.
  • Sorting: Results can be sorted chronologically, by person name, or by mean similarity. You can toggle between ascending and descending order.
  • Person: Use this text field to search for a specific person by name.
  • Similarity: The similarity slider filters results based on how closely the recognized identity matches the nearest example in the training data (for example, the celebrity dataset).
  • Filter by Face Attributes:
    Beard, Mustache, Eyeglasses, Smile, Sunglasses, Mouth Open, Gender, Emotion, Age

After adjusting filters, click "Apply". Active filters appear in a black box beneath the search field and can be cleared by clicking the X symbol.
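The filter settings above can be sketched as a small client-side filter. Note that the data structure and function below are illustrative assumptions, not the actual DeepVA UI or API data model:

```python
# Hypothetical list of recognition results; the fields mirror the ones
# described above (name, similarity, face attributes) but the exact
# structure is an assumption for illustration.
results = [
    {"name": "Unknown",    "similarity": 0.0,  "attributes": {"smile": True,  "eyeglasses": False}},
    {"name": "Jane Doe",   "similarity": 0.91, "attributes": {"smile": True,  "eyeglasses": True}},
    {"name": "John Smith", "similarity": 0.64, "attributes": {"smile": False, "eyeglasses": True}},
]

def filter_results(results, person=None, min_similarity=0.0, **attribute_filters):
    """Mimic the search-field filters: person name, similarity slider,
    and boolean face-attribute filters (e.g. eyeglasses=True)."""
    matches = []
    for r in results:
        if person and person.lower() not in r["name"].lower():
            continue  # person-name filter (case-insensitive substring match)
        if r["similarity"] < min_similarity:
            continue  # similarity slider threshold
        if any(r["attributes"].get(k) != v for k, v in attribute_filters.items()):
            continue  # all requested face attributes must match
        matches.append(r)
    # Sort by similarity, descending, like the "mean similarity" sort order.
    return sorted(matches, key=lambda r: r["similarity"], reverse=True)

print([r["name"] for r in filter_results(results, min_similarity=0.5, eyeglasses=True)])
# ['Jane Doe', 'John Smith']
```

Here the similarity threshold drops the "Unknown" face, and the eyeglasses filter keeps only the two matching results.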


Module Section

On the right side of the player, you’ll see a section with detailed results for each module used in the analysis. Clicking on the module name opens a dropdown with specific parameters, useful for troubleshooting or viewing metadata.


Result Cards

Results are displayed as cards in chronological order. Each card provides key information, such as:

    • Name of the result: Whether the person is identified or marked as "Unknown."

    • Toggle controls to go to the next or previous image of the person

    • Analysis results for the following Face Attributes:

        • Age
          Provides an estimated age range for the person:
          min: The lower end of the estimated age range
          max: The upper end of the estimated age range

        • Gender
          Predicts the gender of the person:
          value: The predicted gender (either "male" or "female")
          confidence: How confident the AI is in the prediction, ranging from 0.0 to 1.0

        • Emotion
          Detects the primary emotion expressed on the face:
          value: The predicted emotion (options include "Happy," "Sad," "Angry," "Confused," "Disgusted," "Surprised," "Calm," and "Fear")
          confidence: The AI’s confidence in the emotion detection (0.0 - 1.0)

        • Eyeglasses
          Determines if the person is wearing eyeglasses:
          value: A boolean value (true or false) indicating if glasses are detected, displayed via checkmark.
          confidence: The confidence level of the prediction (0.0 - 1.0)

        • Sunglasses
          Checks if the person is wearing sunglasses:
          value: A boolean value (true or false) indicating if sunglasses are detected, displayed via checkmark.
          confidence: The confidence level of the prediction (0.0 - 1.0)

        • Beard
          Determines if the face has a beard:
          value: A boolean value (true or false) indicating if a beard is present, displayed via checkmark.
          confidence: The confidence level of the prediction (0.0 - 1.0)

        • Mustache
          Checks if the face has a mustache:
          value: A boolean value (true or false) indicating if a mustache is present, displayed via checkmark.
          confidence: The confidence level of the prediction (0.0 - 1.0)

        • Eyes Open
          Detects if the person’s eyes are open:
          value: A boolean value (true or false) indicating if the eyes are open, displayed via checkmark.
          confidence: The confidence level of the prediction (0.0 - 1.0)

        • Mouth Open
          Determines if the person’s mouth is open:
          value: A boolean value (true or false) indicating if the mouth is open, displayed via checkmark.
          confidence: The confidence level of the prediction (0.0 - 1.0)

    Additional Face Attributes:
    The detailed API result provides additional face attributes that are not displayed in the frontend. These include:

    • Bounding Box
      The location of the face in the image defined by a rectangle using two points:

        • x1, y1: The coordinates of the top-left corner
        • x2, y2: The coordinates of the bottom-right corner
    • Pose
      Estimates the head pose based on three angles:
      • roll: Rotation of the head (tilting sideways)
      • yaw: Side-to-side movement (head turning left or right)
      • pitch: Up-and-down movement (head nodding up or down)
    • Sharpness
      A sharpness score is assigned to indicate how clear the face appears in the image (a low score suggests the face is blurry).
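The API-only fields above (bounding box, pose, sharpness) can be sketched as follows. The exact JSON field names and nesting are assumptions for illustration, not the documented DeepVA response schema:

```python
# Hypothetical single face result; field names and structure are
# illustrative assumptions, not the official DeepVA API schema.
face_result = {
    "attributes": {
        "age": {"min": 25, "max": 32},
        "gender": {"value": "female", "confidence": 0.97},
        "emotion": {"value": "Happy", "confidence": 0.88},
        "eyeglasses": {"value": False, "confidence": 0.99},
    },
    # Two-point rectangle: top-left (x1, y1) and bottom-right (x2, y2).
    "bounding_box": {"x1": 120, "y1": 80, "x2": 260, "y2": 240},
    # Head pose angles: roll (sideways tilt), yaw (left/right turn),
    # pitch (up/down nod).
    "pose": {"roll": -2.5, "yaw": 14.0, "pitch": 3.1},
    # Low sharpness suggests a blurry face.
    "sharpness": 0.72,
}

def face_width_height(result):
    """Derive the face rectangle size from the two bounding-box corners."""
    box = result["bounding_box"]
    return box["x2"] - box["x1"], box["y2"] - box["y1"]

def is_frontal(result, max_yaw=15.0, max_pitch=15.0):
    """Rough frontal-face check from the head-pose angles (degrees assumed)."""
    pose = result["pose"]
    return abs(pose["yaw"]) <= max_yaw and abs(pose["pitch"]) <= max_pitch

print(face_width_height(face_result))  # (140, 160)
print(is_frontal(face_result))         # True
```

Checks like these are one way the extra API fields get used in practice, e.g. discarding faces that are too small, too blurry, or turned too far away from the camera.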

      Read here for more information on the API results.