What measures have been taken to prevent discrimination or bias?

DeepVA applies data balancing, bias detection, and focal loss during training to ensure fair and non-discriminatory AI predictions.

Disclaimer:
This FAQ provides brief answers to frequently asked questions and serves as general guidance only. It does not replace legal advice or binding documentation. For the most up-to-date and legally relevant information, please refer to our official legal documentation at Deepva.ai/legal/.
If you have any legal questions or concerns, feel free to contact us directly.

Multiple measures are implemented to prevent discrimination or bias, addressing data preparation, data distribution, and model training:

  • Data preparation:
    Datasets are carefully analyzed, cleaned, and balanced to ensure that no social or demographic group is underrepresented. Biased patterns or content are actively identified and removed (see the balancing sketch after this list).

  • Data distribution:
    The standard deviation of each data feature is monitored to ensure that feature values are evenly distributed across the dataset (see the distribution-check sketch after this list).

  • Model training:
    During neural network training, focal loss functions are used to give more weight to hard-to-predict or underrepresented classes, helping reduce prediction bias (see the focal loss sketch after this list).
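
As a rough illustration of the balancing step, the sketch below oversamples underrepresented groups until each group matches the size of the largest one. The attribute name `group`, the sample counts, and the oversampling strategy are illustrative assumptions, not DeepVA's actual pipeline.

```python
from collections import Counter
import random

def balance_by_group(samples, group_key="group", seed=0):
    """Oversample underrepresented groups until every group matches
    the size of the largest one."""
    rng = random.Random(seed)
    counts = Counter(s[group_key] for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        if count < target:
            pool = [s for s in samples if s[group_key] == group]
            # Draw extra samples with replacement to close the gap.
            balanced.extend(rng.choices(pool, k=target - count))
    return balanced

# Hypothetical toy dataset: group "B" is heavily underrepresented.
samples = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(Counter(s["group"] for s in balance_by_group(samples)))
# Counter({'A': 90, 'B': 90})
```

Undersampling the majority group or reweighting the loss are common alternatives when duplicating samples is undesirable.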
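A minimal sketch of the distribution check described in the second bullet: it computes the per-feature standard deviation and flags features whose spread deviates strongly from the median. The threshold `max_ratio` is an illustrative assumption.

```python
import numpy as np

def check_feature_spread(features, max_ratio=10.0):
    """Flag features whose standard deviation deviates strongly
    from the median spread, hinting at an uneven distribution."""
    stds = features.std(axis=0)
    median_std = np.median(stds)
    ratios = stds / (median_std + 1e-12)  # guard against division by zero
    flagged = np.where((ratios > max_ratio) | (ratios < 1.0 / max_ratio))[0]
    return stds, flagged

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 8))
features[:, 3] *= 50.0  # inject one feature with a much wider spread
stds, flagged = check_feature_spread(features)
print(flagged)  # [3]
```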
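The focal loss named in the training bullet down-weights well-classified examples so that hard or underrepresented classes contribute more to the gradient: FL(p_t) = -α_t (1 − p_t)^γ log(p_t). The sketch below shows a binary variant in PyTorch; the γ and α values are the common defaults from the original focal loss paper (Lin et al., 2017), not DeepVA's training settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    Easy examples (p_t close to 1) are down-weighted, so hard or
    underrepresented cases dominate the gradient."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.tensor([2.0, -1.0, 0.5])   # raw model outputs
targets = torch.tensor([1.0, 0.0, 1.0])   # ground-truth labels
print(focal_loss(logits, targets))
```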

These efforts aim to enable fair, balanced, and non-discriminatory decisions by the AI system.