What guarantees do we have about the transparency of choices made independently by a machine?
What methodologies have been developed to check that the algorithms are working properly?
The increasing role played by algorithms in public policy decision-making calls into question the legal, political, and ethical legitimacy of decisions made by automated systems. The problem arises in particular for Artificial Intelligence systems when decision-making involves administrative processes, areas of public interest, or the exercise of fundamental rights.
Algorithmic accountability must underpin any choices that may be discriminatory, unfair, or otherwise capable of influencing individual and collective behaviour.
The topic was raised at the European level with Communication 168 of 2019 and the recent publication of the White Paper on Artificial Intelligence. From a legal standpoint, there is a clear need to identify tools capable of verifying and auditing how algorithms function in the public interest, while respecting trade secrets and intellectual property rights.