
9. CONFLICTS BETWEEN AI AND FUNDAMENTAL RIGHTS

● The White Paper refers to the defense of human rights, but develops its arguments only with regard to civil and political rights (privacy, political rights and freedoms), ignoring social, economic and cultural rights. This is closely related to concerns about the inequality that the massive use of AI could generate.

● As stated in the Zaragoza Declaration (2019): «Sometimes, when it comes to assigning responsibilities, it is easy to mistake values for means. That is, commitments are established not about what must be protected, nor about what is socially contested, but about the technological principles with which an artifact is designed (for example, transparency). By focusing on transparency as the guarantee of privacy, it is assumed that it is the user who should play the role of supervisor. Is this the best way to legislate such a complex sector? If we look at the food industry, for example, the consumer is not expected to inspect and investigate everything he or she eats. Control is delegated to regulators and inspectors, while the consumer is simply expected to know his or her rights, and how to access the appropriate channels in the event of a problem.

Taking one position or the other reflects antagonistic conceptions of how a supervisory system should be designed: user-centred supervision or transparency governance. There are values that will never appear in the first approach because they are collective claims. Once we stop viewing the end user as the only person affected by the use of technology, we can begin to consider the effects on society as a whole. This allows us to take plurality, cohesion, sustainability and cooperation into account in technological development. This will only come from public debate and shared reflection.

It is, now more than ever, necessary to create environments for public debate, where dialogue between researchers, developers and other members of society can take place. The promotion of spaces for continuous communication about the social and ethical implications of Artificial Intelligence systems is vital».

Assuming the need for transparent governance of AI, we believe it would be highly advisable to include research on verifying the fairness of algorithms among the research areas to be prioritized, with a view to integrating such verification mechanisms into a certification process carried out by an independent authority.
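As a purely illustrative sketch of what one element of such an algorithmic fairness check might look like, the following computes the demographic-parity gap of a model's binary decisions across groups. The metric choice, data and threshold are our own assumptions for illustration; they are not drawn from the White Paper, which does not prescribe any particular verification technique.

```python
# Illustrative sketch (hypothetical): a demographic-parity check, one of
# many possible fairness criteria an independent certification authority
# could apply to a decision-making system.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-decision rates
    between any two groups (0.0 means perfectly equal rates)."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (1 if pred == 1 else 0), total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical example: binary decisions for applicants in groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(round(gap, 2))  # group A rate 0.75, group B rate 0.25 → gap 0.5
```

An actual certification process would of course combine several such criteria (and contextual judgment), since no single statistical metric captures fairness on its own.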

Quintanilla coined the term «endearing technologies», as opposed to «alienating technologies»; this idea is consistent with the Responsible Research paradigm. We believe that such technologies are more respectful of fundamental rights. Some of their characteristics, as defined by Quintanilla, are: openness (the software is free); versatility (it allows alternative uses); docility (its operation and control depend on a human operator); no planned obsolescence (repair is promoted over replacement); and comprehensibility (basic but comprehensive documentation).