
Research Lines:
The notion of trustworthiness lies at the core of the European ethical approach to artificial intelligence (AI). To design, verify and develop trustworthy AI (TAI) is also the goal of the BRIO project, financed by the Italian Ministry of Research. The BRIO project aims at investigating means to avoid bias, mitigating risk and overcoming opacity. More precisely, this project focuses on developing design criteria for TAI based on philosophical analyses of trust combined with their symbolic formalization and technical implementation. The analysis of the epistemological and ethical components of trust, Objective 1 of the BRIO Project, is led by the META group (Social Sciences and Humanities for Science and Technology at Politecnico di Milano. This part of the project is in charge of Prof. Viola Schiaffonati, DEIB, and Prof. Daniele Chiffi, DAStU.
According to the ethics guidelines for TAI (HLEGAI 2019), prevention of harm is one of the key principles for achieving trust, together with respect for autonomy, transparency and explicability. Prevention of harm can be carried out concretely in several ways; one of the goals of the BRIO project is to focus on the identification and mitigation of risk at both the data and the algorithmic level. Risk can be defined in different ways. A classical definition conceptualizes risk as the probability of an adverse event, evaluated in conjunction with its consequences, over a specific span of time. This classical idea of risk lies at the core of probabilistic risk assessment, where confidence in probabilistic estimates makes it possible to predict and evaluate the consequences of an adverse event. Unfortunately, probabilistic risk assessment of technologies is not always possible, and it is certainly very difficult for AI technologies, which today mostly operate under conditions of high risk and uncertainty.
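To fix ideas, this classical definition can be sketched formally; the notation below is ours, not the project's. For an adverse event E considered over a time window Δt, the classical (expected-loss) notion of risk is

    Risk(E) = P(E within Δt) · C(E),

where P is the probability that E occurs within the window and C(E) is a measure of its consequences. Probabilistic risk assessment presupposes that P can be estimated with confidence, and this is precisely what fails under the conditions of uncertainty typical of AI technologies.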
Coping with uncertainty is a way to increase trust in AI. Trust is becoming of paramount importance in our societies, where many decisions affecting people's lives are made with the increasing support of AI systems. Technical solutions alone are not enough, since it is not always possible to model all the relevant aspects of risk and uncertainty in advance: some emerge only during the use of, and interaction with, AI technologies. Ethical guidelines and frameworks are not sufficient either, as they are very abstract and often lack concrete guidance on how to deal with specific problems. What is needed is a genuine integration of conceptual analysis, exploiting the strengths of both epistemology and ethics, with the design of appropriate solutions. The BRIO project aims at this integration; acknowledging the intrinsic complexity of these elements, while addressing them with different conceptual tools, is the first step.