For an ethical use of artificial intelligence
August 12, 2024

Artificial intelligence, to put it mildly, is now present in almost every area of our lives. Whether in the form of tools or of content produced by generative AI, we encounter it in our entertainment, our transportation, our homes and even our medical care. We could debate at length the merits of these interventions, and the pros and cons of this intrusion of AI into our daily lives. Digital content creators, who are at the heart of the virtual experience production industry, have quite legitimately questioned, criticized and worried about the impact generative AI could have on their professional lives. Others, on the contrary, have praised the technology's potential to open up their creative horizons, or to let them focus on the conceptual essence of their art rather than on its execution.
However, what most agree on, in digital creation as elsewhere, is the importance of an ethical framework for AI. A number of organizations, governments and research groups have been examining what the guiding principles of such a framework might be. By way of example, UNESCO adopted its “Recommendation on the Ethics of Artificial Intelligence” at its General Conference on November 24, 2021, following closely on the heels of the OECD, which adopted its “Council Recommendation on Artificial Intelligence” in 2019. Closer to home, in 2018 the Université de Montréal proposed its “Déclaration de Montréal pour un développement responsable de l’intelligence artificielle”, addressed to “political leaders as well as all individuals, civil society organizations and companies wishing to participate in the development of AI in a responsible manner”. At the time of writing, the declaration counts 2,830 citizens and 277 organizations and companies among its signatories. The latter include the City of Montreal, the Autorité des marchés financiers, and the Axel Center – Research Center of the University Institute of Mental Health of Montreal, demonstrating, if proof were needed, the great diversity of AI applications.
Transparency and respect
If we look more closely at the Montreal Declaration, which has the advantage of being both evolving and participatory, and of broadly encompassing the industries and fields concerned, we find ten principles, presented here in no particular order.
The first is the principle of well-being, which states the importance of developing AI tools and systems that promote and improve the “living, health and working conditions” of individuals, and proscribes AI that would contribute to “increasing stress, anxiety and feelings of harassment linked to the digital environment”. It is followed by the principle of respect for autonomy, which enables “individuals to increase their control over their lives and their environment”. It defends the need to “empower citizens in the face of digital technologies by ensuring access to different types of relevant knowledge, the development of structuring skills (digital and media literacy) and the formation of critical thinking skills”. Thus, “AI must not be developed to propagate unreliable information, lies and propaganda, and should be designed with the aim of reducing their spread”. This principle points not only to the need to equip ourselves with tools that help dismantle false information, but also, of course, to the need to develop programs and strategies that train audiences and raise their awareness of the issues and realities of AI. In this respect, there are a number of initiatives in Quebec, such as those run by the Mila research institute or, for young audiences, by the non-profit organization Génielab, which offers an integrated discovery course to be experienced in schools or within families.
This is followed by the fundamental principle of the protection of intimacy and privacy, which guarantees people a defense against “surveillance or digital evaluation”, the possibility of always having “the choice of digital disconnection”, and even “extensive control over information relating to their preferences”. It is easy to see how failure to respect this principle can lead to serious societal harms, particularly in the context of public administration or relations between citizens and private organizations. This principle goes hand in hand with that of democratic participation set out in the Montreal Declaration, which emphasizes that the operation of life-affecting AI must be “comprehensible to the people who use it or who suffer the consequences of its use”. It also underscores the importance of knowing when a decision affecting us has been made by an AI.
Photo by Morgan Petroski from Unsplash
Solidarity, equity, inclusiveness
While Asimov placed at the top of his laws of robotics the rule that “a robot may not injure a human being or, through inaction, allow a human being to come to harm”, the importance of the human being features prominently in reflections on the development of responsible AI. Thus, the principle of solidarity covers the importance for AI to “not harm the maintenance of fulfilling emotional and moral human relationships” and to be developed “with the aim of fostering these relationships and reducing people’s vulnerability and isolation”. AIs “must be developed with the aim of collaborating with humans on complex tasks, and should foster collaborative work between humans”. Human-machine collaboration, in other words, at the service of humans.
Here we come to one of the most important elements of the Montreal Declaration, that of equity, i.e. the importance of “not creating, reinforcing or reproducing discrimination based on social, sexual, ethnic, cultural and religious differences” and of “contributing to the elimination of relations of domination between individuals and groups based on differences in power, wealth or knowledge”. AIs feed on our biases and preconceptions, but also on the realities and knowledge of the people who design them. They possess a “predefined representation of the world” (Silvestre Pinheiro, 2022) and often propagate standardized representations of it, contributing to the erasure of many communities. This is where the principle of inclusion of diversity comes into play, which advocates that the development of AI should “take into account the multiple expressions of social and cultural diversities, right from the design of algorithms”. This means, of course, not only being vigilant about AI developments, but also assembling design and development teams that are themselves diverse, so that, as far as possible, algorithmic biases can be countered.
Prudence, responsibility and sustainable development
All the principles described above could not be upheld without respect for another fundamental principle, that of prudence. This covers the importance for anyone involved in the development of AI to maintain a critical and attentive stance in order to avoid the possible harmful consequences of this AI. The Montreal Declaration speaks of taking into account the “potential for dual use (beneficial and harmful) of AI research”, and the need to be vigilant at all times about the dangers of misusing AI.
In the same vein, the principle of responsibility reminds us of the importance of AI not contributing to a “disempowerment of human beings when a decision has to be made”. Thus, “in all areas where a decision affecting a person’s life, quality of life or reputation has to be made, the final decision should rest with a human being, and that decision should be free and informed”. Here again, we readily recognize the dramatic human consequences that can result from leaving decisions entirely to machines, in fields as varied as public administration, medical care and beyond.
Last but not least, the principle of sustainable development requires that AIs be designed and managed with their impact on energy consumption, biodiversity and ecosystems in mind, both in their systems and in the connected objects to which they are often linked. AI, yes, but “ecologically responsible”!
The years to come will continue to be disrupted, probably at an accelerating pace, by the intrusion of AI into all spheres of our lives. It is up to us, by holding accountable those who have the power to act, to remain vigilant in upholding these principles essential to the pursuit of ethical technological development!
Photo credit: Solen Feyissa on Unsplash