The FAR-SEE project studies sampling bias, fairness, uncertainty, and explainability in Artificial Intelligence (AI)-based face recognition systems, with the aim of improving existing algorithms, revealing 'optimal' performance/fairness/explainability trade-offs, and thereby formulating principles for the operational regulation of these systems. The objectives are threefold. 1) Develop methods for detecting and correcting selection biases during learning (described not only by 'sensitive' variables such as gender or age, but also by the physiognomy of individuals and by image characteristics, e.g. brightness), for ensuring fairness (an acceptable level of performance disparity between 'sensitive' groups) without degrading performance, and for assessing the uncertainty involved in measuring performance and fairness metrics. 2) Explain the nature of the sampling biases identified, the uncertainty inherent in measuring performance and fairness, and the level of unfairness measured, in order to improve the methods developed under objective 1). 3) In light of the trade-offs between uncertainty, performance, fairness, and explainability, describe acceptable, operational regulatory constraints that reconcile the requirements facial recognition systems must meet. The project brings together three complementary partners with long-standing collaborative experience. It will draw on the expertise of IDEMIA's R&D team in facial recognition technologies and its knowledge of regulatory issues, the skills of the LTCI laboratory at Télécom Paris in trustworthy AI, and those of the I3 laboratory in the ethics and operational regulation of AI.