8. IMPORTANT RISKS NOT MENTIONED IN WHITE PAPER
● In all the forums where the ethical risks of AI are discussed, a common set of themes recurs: beneficence and non-maleficence, autonomy, justice and explicability. In our understanding, the White Paper does not cover all the dimensions of the principles of Autonomy and Justice.
● Although the optimistic view of technology predicts a generally positive contribution of AI to the ability to live a happy and fulfilling life, there are also important risks in this area for individuals, their values and their personal development. It is important to emphasize the risk of psychological harm: intelligent technology could negatively affect mental health, the development of personal potential, human fulfillment, and the ability to live a life with meaning and purpose.
No mention is made of important problems associated with the use of this technology, such as the alteration of the concept of identity and of the nature of human interactions, the difficulty of distinguishing between the real and the virtual, escapism towards virtual worlds, the replacement and deterioration of human bonds, cognitive overload, and the loss of meaning and purpose when people are replaced by intelligent machines. Nor is there any discussion of the potential erosion of human values: wisdom, creativity, empathy, affection, social skills. The White Paper does not warn about the possibilities of control, manipulation and attacks on autonomy that the field of affective computing opens up, given the susceptibility of humans to emotional influence. Nor does it warn of the significant effects on mental and physical health that interactive immersive virtual-reality applications (with intensive use of Artificial Intelligence techniques) could have. Experimental work with social science experts should be promoted with a view to assessing these risks.
● Ensuring that AI really contributes to «human welfare» requires a prudent, multidisciplinary and eco-centric approach to AI research (in harmony with an authentic anthropocentrism that recognizes the essence of the human being and his or her interests), as opposed to an excessively techno-optimistic and technocratic one. Given its disruptive power, we cannot assume out of hand that AI will lead to a more efficient use of resources; to assume that the market alone will regulate the good uses of AI is to abdicate responsibility.
● While implementing intelligent technologies from a purely economic or technocratic perspective could contribute to economic growth, it could also carry environmental and inequality costs. Some studies estimate that, of the sub-targets into which the SDGs are decomposed, AI could contribute positively to 134 (79%) and act as an inhibitor on 59 (35%). The potential of AI to increase productivity could, in turn, increase the overexploitation of resources if economic, social and environmental variables are not integrated into the analysis, which is not always the case in private-sector studies. In addition, while AI can increase efficiency in energy production, advanced AI technology requires massive computing resources, available only in large data centres that consume a great deal of energy and have a high ecological and carbon footprint (this aspect is addressed in the White Paper).
● The recently published UN report by the Special Rapporteur on extreme poverty and human rights warns of the risk of «stumbling, zombie-like, into a digital welfare dystopia», in which «Big Tech has been a driver of growing inequality and has facilitated the creation of a vast digital underclass». The report provides many well-documented examples, from different countries, of how dehumanized smart technologies are creating barriers to access to a range of social rights for those without Internet access or digital skills.
● One of the most important ethical requirements for AI is explainability and transparency, since algorithmic decisions can affect the most sensitive areas of people’s lives (health, civil and social rights, criminal law, credit). With machine learning, and particularly with deep learning applications, however, explaining the decision process is very difficult. As an example of the effect of such lack of transparency, we cite a real-life case in which a claimant was informed that he or she did not qualify for a government subsidy. When an explanation was demanded, the only one given was that an algorithm had made the decision. A request to see the algorithm was refused on intellectual-property grounds, since the decision-making task had been subcontracted to a private company.
● Other relevant risks are commercial and political manipulation, and intensive coercion and surveillance by governments and large corporations, which can damage social cohesion and contravene democratic principles and human rights.
● Privacy is another well-known risk and, although it is treated in the White Paper, in our view important aspects are not addressed. Privacy regulations must protect people, but they should also offer solutions for the social and public use of data, since the «non-use» of data in circumstances in which there is a clear public interest in its use is a social disadvantage. So far, privacy regulation has mainly served to protect the interests of private companies, whereas what is needed is a move towards models of responsible data sharing. There is a need for a clear public policy on the use of data oriented towards the common good, especially data generated by public institutions (for example, open access to the results of publicly funded research).
● The list of high-risk areas mentioned in the White Paper (health, transport, energy and parts of the public sector such as asylum, migration, border control and justice, social security and employment services) is far from complete. Applications that may seem harmless a priori (in marketing, financial or insurance services, or in social assistance provided by NGOs) may threaten rights if they produce discriminatory or biased results. It seems risky to leave this list open to future revisions and amendments depending on relevant developments in practice, instead of carrying out a deeper analysis now.
● In general, AI requires a proactive approach to risk management, involving continual risk identification and handling.