
6. ON THE NEED FOR ETHICAL REFLECTION AND FOR METHODOLOGIES AND TECHNIQUES TO SUPPORT APPLIED ETHICS

Although manuals of ethics and good business practices are necessary, there is also a need in the academic sphere for independent and scientifically rigorous research with an empirical dimension since, to date, this dimension has been mostly lacking. Among the lines of research and development that should be encouraged and subsidized, priority could be given to the following:

● Ethical-philosophical reflection at a theoretical level. There is a need to bring order to the current overabundance of ethical codes, guidelines and frameworks, many of which suffer from deficiencies such as lack of scientific rigor, subjectivity, incoherence, superficiality and redundancy, and thus generate confusion. Moreover, however many codes, guidelines and frameworks are produced, the debate is far from closed. New dilemmas arise at every step from the development and deployment of new applications and from accumulating experience with existing ones, compounded by the continuous evolution of society, driven in part by the very use of these technologies.

● The concept of applied ethics tools. There has been much debate and research into identifying risks, but very little on how to mitigate them. Ethical principles, codes of conduct and legislation are necessary, but applying them in practice requires tools. The solution to mitigating AI risks can also come from tools that themselves incorporate AI techniques. A coordinated multidisciplinary effort involving researchers, innovators, citizens, legislators, politicians, developers… is needed to create and evaluate these tools. Again, multidisciplinarity is essential: to give full meaning, from different perspectives, to concepts such as explicability and transparency; to understand the complexity of human behavior and the impacts that AI technologies can have on it; and to interpret algorithmic predictions… In addition, the plurality of values, not only of professionals and producers but also of society in general, needs to be protected. Many international initiatives conclude that ethics should be embedded in the process of designing, developing, deploying and using intelligent technology, and that ethical principles need to be translated into protocols for design, development, deployment and use. Specific methodological and technical tools to support the development and use of AI applications that meet ethical standards and comply with legislation are required for each development phase. These tools are not meant to replace legislation or ethics and good-practice manuals, but to support their implementation. Academic research, private-sector self-regulation and legislation are necessary and complementary actions.

● An approach of growing importance is to consider the ethical behaviour embedded in artificial intelligence systems as a form of control: the idea is to imbue the autonomous entity with values from its conception. As AI systems gain greater agency, the question of responsibility (a very important legal issue) arises.

● We stress the importance of avoiding a purely economic, technocratic approach. AI impact studies should be wide-ranging, which requires multidisciplinary teams (ethics scholars, technical experts, sociologists, psychologists, philosophers…), and should take into account the dynamic nature of AI impacts.
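
The idea, raised above, of imbuing an autonomous entity with values from its conception can be illustrated as a simple action filter: hard constraints built into the system reject candidate actions that violate them. This is a minimal, purely hypothetical sketch; the `Action` type, the harm scores and the 0.2 threshold are illustrative assumptions, not an established method.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    expected_harm: float  # illustrative harm estimate in [0, 1]

# A constraint is a predicate every permitted action must satisfy.
Constraint = Callable[[Action], bool]

def ethical_filter(actions: List[Action], constraints: List[Constraint]) -> List[Action]:
    """Keep only the actions that satisfy every embedded constraint."""
    return [a for a in actions if all(c(a) for c in constraints)]

# Example embedded value: reject actions whose estimated harm exceeds a threshold.
no_serious_harm: Constraint = lambda a: a.expected_harm <= 0.2

candidates = [Action("recommend", 0.05), Action("escalate", 0.5)]
allowed = ethical_filter(candidates, [no_serious_harm])
print([a.name for a in allowed])  # only low-harm candidates remain
```

In this framing the constraints act as the control layer: they are fixed at design time and cannot be overridden by the system's optimization process, which is one reading of "values from its conception".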

Wide-ranging impact-assessment frameworks necessarily involve gathering, sharing and processing large amounts of multidimensional data and may therefore require large-scale investments in infrastructure. A data-rich ecosystem would allow the SDGs to be used as the impact-assessment framework, by means of indicators that cover multiple perspectives, including social costs and benefits (in terms of schooling, life expectancy, access to basic services and the environment, among others), and taking into consideration the cultural values of the communities concerned. Measurement of the corresponding impact indicators should not be too localized or company-specific: firstly, because a significant part of the overall impact may take the form of indirect, and possibly unexpected, effects occurring outside the area of measurement; and secondly, to facilitate comparisons. Common, large-scale impact-measurement frameworks would therefore be more appropriate, in which context AI ethical considerations acquire a social and political dimension, i.e. macro-ethics rather than just micro-ethics.
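The multidimensional, SDG-style measurement described above can be sketched as a weighted composite index. The indicator names, scores and equal weights below are illustrative assumptions only; in practice the weights could reflect the cultural values of the community concerned, and the indicators would come from an agreed common framework.

```python
# Hypothetical SDG-style indicator scores for one community (0-100 scale).
indicators = {
    "schooling": 72.0,
    "life_expectancy": 81.0,
    "basic_services": 65.0,
    "environment": 58.0,
}

# Equal weights here, purely for illustration; a real framework would
# negotiate weights with the communities concerned.
weights = {k: 1 / len(indicators) for k in indicators}

def composite_index(scores: dict, weights: dict) -> float:
    """Weighted average of indicator scores, normalized to the 0-1 range."""
    return sum(weights[k] * scores[k] / 100.0 for k in scores)

print(round(composite_index(indicators, weights), 3))
```

Using the same indicator set and weighting scheme across communities and companies is what makes the resulting indices comparable, which is precisely the argument for common, large-scale frameworks over locally implemented or company-specific measurement.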