AI for Information Accessibility: From the Grassroots to Policy Action

September 16, 2024


Ahead of the 2024 AI for Information Accessibility Conference and the launch of the Caribbean AI Roadmap, Carnegie Council and the UNESCO Information for All Programme Working Group on Information Accessibility hosted a diverse panel of speakers on AI ethics and policymaking in the digital age.

From Jamaica to Canada to Ukraine and beyond, how can citizens, civic institutions, and industry professionals work together to ensure that emerging technologies are accessible to all? What are the most common roadblocks that policymakers must overcome? And what principles should we all keep in mind as we consider how to use AI and other emerging technological systems responsibly?

To register for the AI for Information Accessibility Conference, please visit: https://ai4iaconference.com/register-now/

CORDEL GREEN: Good morning, good afternoon, and good evening, everyone. Warm greetings from Jamaica to all of you in your various capacities. We gather at a moment when artificial intelligence (AI) is joining the pantheon of great human inventions that lastingly reshape society. The inventions of painting, photography, film, and so on were human attempts to reproduce ourselves and what we observe and experience. Generative artificial intelligence helps us do the same. In that sense it is like “old wine in a new bottle,” except that the scale and scope of its impact are unprecedented. Anyone can now fabricate fake images, videos, and sounds at the speed of thought, so real that we default to positions of disbelief and distrust toward what our eyes have seen and our ears have heard.

What does it portend for society when we begin to doubt the veracity of a palpable, visceral experience with our parent, our child’s teacher, our doctor, our lawyer, or our spouse; when critical information is received from emergency services, such as an order to evacuate immediately; when a decisive statement is made by an aspiring politician on the eve of an election; or when breaking news comes from a presumably reliable media source? If falsehood and disbelief reach such a level that we normalize fiction, fraud, and distrust, society becomes dysfunctional and is nothing more than a hollow carcass.

Hence exhortations such as that of the late Professor Stephen Hawking that AI represents an existential threat to humanity, but a threat does not equal an inevitable outcome. Nor is our choice binary: AI utopia or paralyzing AI dystopia. The more we know about AI, the more we can actively shape its development and use it for the good of society, but that requires openness, inclusivity, and not just access but enlightened access. People from all walks of life who will be or are affected by AI should have a say in it.

That is the purpose of today’s discussion. It is one of eight panels being held around the world, culminating on September 28, when we will gather on the Gather Town platform for the flagship virtual AI4IA Conference, which will bring together more than 68 speakers from around the world.

All of this information—from today’s panel, the others to come, and on September 28—is being made available as part of our collective task to eliminate information asymmetry and instead, hopefully, write a story about human flourishing. I conclude by thanking the members of the United Nations Educational, Scientific, and Cultural Organization (UNESCO) Information for All Programme (IFAP) Working Group on Information Accessibility, which I have the pleasure of chairing; the small band of AI4IA conference organizers from India, Jamaica, the United Kingdom, Canada, the United States, Chile, Ukraine, Germany, China, South Africa, and Botswana, supported by the UNESCO IFAP Secretariat in Paris, the UNESCO Regional Office for Southern Africa, the UNESCO Cluster Office for the Caribbean, the Broadcasting Commission of Jamaica, the University of Alberta in Canada, and the Alberta Machine Intelligence Institute (AMII). Most of all, I extend gratitude to Carnegie Council, our partner for today’s panel discussion.

It is now my pleasure and a great delight to hand over to the moderator, Ayushi Khemka. Ayushi is a Ph.D. student and Killian Laureate at the Department of Philosophy at the University of Alberta.

AYUSHI KHEMKA: Thank you, Cordel. Good morning, good afternoon, good evening, and thank you so much for joining us virtually at this panel discussion today. As Cordel said, I am Ayushi Khemka, and I am a Ph.D. student at the Department of Philosophy here at the University of Alberta.

Our discussion today is going to be on AI and information accessibility, where we will try to touch upon different parts of grassroots reality as well as policy action, and, I hope, quite a lot in between, given the diverse expertise that we have with us today.

With that let me quickly introduce our panelists. First we have with us Dr. Dariia Opryshko. She is a media law consultant and Philipp-Schwartz Fellow at the University of Münster in Germany. She is also a member of the Working Group on Information Accessibility with the UNESCO Information for All Programme.

Next we have with us Dr. Dibyadyuti Roy. Dr. Roy is assistant professor of cultural studies, media studies, and digital humanities, as well as the program director of the BA in cultural and media studies at the University of Leeds in the United Kingdom.

Up next we have Professor Geoffrey Rockwell. Professor Rockwell is a Canadian Institute for Advanced Research AI Chair and professor of philosophy and digital humanities at the University of Alberta. He is also a fellow at the Alberta Machine Intelligence Institute, commonly known as AMII, in Canada.

Last but not least we have Mr. Stuart Hylton with us. Mr. Hylton is the director of assurance and compliance services at Symptai Consulting Limited in Jamaica.

Thank you so much, dear panelists, for taking the time and being with us here today. We really appreciate it. I am quite excited for this.

Before we get into the questions I was thinking we could discuss a bit of how we want to structure this conversation so it is easier for our viewers to follow. I thought I might start by asking a couple of questions and having each panelist respond one by one, and continue that for a bit. Once we are done with the questions I have for you, toward the end we can open up the conversation to our audience so they have the opportunity to ask their questions. I hope that sounds good.

With that, let me jump right into my first question. There has been a lot of conversation going on around two major things across the world. Irrespective of where you are from, two particular strands of conversation have been pretty common or consistent throughout: AI and, secondly, climate action. What would be your personal response to the idea that AI as a general-purpose technology should be treated as significantly as we treat the four classical elements: air, water, fire, and earth? As an additional question, I would want to ask: How, if at all, can we synthesize action around the climate crisis along with human centricity in AI?

GEOFFREY ROCKWELL: I am going to begin by saying that I don’t think AI is a necessity the way air, water, fire, and earth are. I do not see it as something that is necessary for life. It may seem that way, given how much I use ChatGPT, but I can still remember when I did not have access to it.

The second thing, and I think everyone knows this, is that we have to acknowledge the environmental impact of AI, both the incredible amount of energy that is needed to train at least the large general-purpose models and the impact of smaller projects. I think the International Energy Agency estimates that the amount of electricity used by the AI industry is going to double by 2026. So this is going to have a significant impact.

The question then becomes, is there going to be a good, if you will, environmental return on the investment, or are these systems being developed to entertain us, to produce pornography, to harass people, and to spread misinformation? I think that is where policies, regulations, and, if you will, ethical direction come in: we can use this technology in all sorts of ways to assist people who are working on environmental impacts, or we can use it just to entertain ourselves, as a new form of television or something to replace the decision making we are tired of doing.

One example would be cryptocurrency mining. There is potential there for cryptocurrencies, but it also seems that it is being used in all sorts of inappropriate ways and consuming a lot of power. The experience I have working with my colleagues—I am not a computer scientist myself—is that the environmental impact is going to happen at small scale. I have two colleagues who are working on AIs to make water processing more efficient, to reduce the amount of chemicals that are used in water processing here in Edmonton and then in Alberta. I think we are going to see lots of smaller projects where in effect AI helps with the automation of things that have an impact on our lived environment and so on. I think that is where we want to direct our energy and support initiatives.

I am going to stop there and pass it back to you, Ayushi.

AYUSHI KHEMKA: Thanks, Geoffrey. I think that is super-important. In a lot of our conversations around the digital we tend to forget sometimes what is happening with the environment and what the environmental impact is.

DARIIA OPRYSHKO: Thanks a lot for the question. First of all, let me express my gratitude for having me here. It is a great honor to be here in such great company today. Coming back to the question, in my view we cannot equate air, water, fire, earth, and AI. I absolutely agree with Geoffrey that without air, without water, and without earth humanity cannot live. They are essential conditions for human life on our planet.

However, we have been living for some time without AI technologies. They will of course be of great importance for the development of humanity. Refusal to use these technologies will probably come to be seen as a sign of backwardness, and those who use them will undoubtedly have a competitive advantage in many public and social spheres, including the economy, the military, and so on.

However, again I completely agree with Geoffrey that there is another side to using AI technologies: they have a significant impact on our environment. It is very well known that the production and operation of all of the equipment in data centers, as well as the use of artificial intelligence systems themselves, contribute significantly to global carbon dioxide emissions and consume enormous amounts of energy and water.

Therefore, on one hand AI technologies contribute to the development of mankind. On the other hand, they have quite a negative impact on the environment. For these reasons, the legal community is closely monitoring scientific research in the field of AI and its impact on human life and the environment, and it is beginning to introduce rules and standards aimed at enabling the technology to develop in the best interests of society and humanity while minimizing the risks of introducing this technology into everyday life.

For example, at the beginning of this year, the European Union adopted the so-called “AI Act,” and recently a number of states signed the first international treaty in the field of AI. Here it is necessary to mention that in the AI Act the environment is explicitly referred to as one of the “legal interests to be protected.” However, I think these are just the first steps in this area and it is obvious that in the future the legal regulation of AI will become increasingly developed and even more complex.

AYUSHI KHEMKA: It is interesting to know how legal actions and legal policies can help or not help in these kinds of conversations.

DIBYADYUTI ROY: First I want to express my thanks and gratitude to you and the entire panel. It is a great honor to be here, and I thank Carnegie Council and UNESCO for organizing this very important conversation.

I am going to start by pushing back a little against the question itself. I don’t think AI is a general-purpose technology in its current formations, and I say that because this is not the first time AI has been hyped up. This is at least the third AI hype cycle.

Let’s go back to what led to the formation of the term “AI.” It goes back to 1956 when John McCarthy was trying to start the Dartmouth conference and he needed an academic term to get a grant, and that is how the term AI was born.

We have moved from symbolic, deductive logic to inductive neural networks, and therefore I want to go back to the document that all of us have looked at, the “Recommendation on the Ethics of Artificial Intelligence” that UNESCO came out with. It makes a wonderful statement in the beginning part: “We don’t want to define AI because we understand it is a very fluid term.”

I do want to come back to this fluidity to answer your question about air, water, fire, and earth. I am a cultural analyst of dominant and emergent technologies. The word “technology” comes from tekhnē, which is “making.” Air, water, earth, and fire are natural elements; AI is not. Therefore air, water, earth, fire, and AI can never be equal.

For me the key question here is not the technology but the human. To go back to the ontologies of AI, or the act of AI itself: what do we understand by AI beyond the Dartmouth hype cycle and beyond what is understood to be the Big Tech versions of AI? I do think it is time to think about what kinds of humans we are privileging in these conversations. Who gets to define what being human means in conversations around artificial intelligence?

That is the key point that I want to bring back, which goes back to some of the conversations Kate Crawford has raised in the Atlas of AI about the planetary costs of artificial intelligence. Look around. Dariia and Geoffrey mentioned the data centers all across the world—and this is true for every dominant technology; my Ph.D. was on nuclear cultures. All across the world you will see nuclear testing happening on minoritized lands whose inhabitants have always borne the costs of those dominant technologies.

So in many ways I end with a very popular cultural reference: Who watches the Watchmen? Who watches the people who are making the AI? Who gets to ask those questions? I am so glad that the six of us here, with your moderation, Ayushi, are able to ask that question. I pass it back.

STUART HYLTON: Good morning, good afternoon, and good evening, everyone. I am in complete agreement with the panelists. AI of course is not essential like water and air, all the essential stuff we need to live.

I do want to add a point around accessibility, because while it may not be essential, I think one of the important things is: Are we making it accessible to the persons who want to access it, the persons who may be able to take advantage of what the models can produce? And there is a very important point, added I believe by Geoffrey first, around the sustainability of using these AI models: Are we tackling the challenge of making sure that whatever models are being generated, the energy that is needed and the carbon footprint mentioned by Dariia, that we are addressing those things that are impacting our environment?

Key to me is making sure that we keep AI technology accessible. We in Jamaica are still trying to make sure that the internet is accessible to more persons, just the raw infrastructure that allows people to pursue their own development in their own space. Once we tackle those things, making sure that the internet is accessible and that all essential services such as water and electricity are accessible, we are also providing a means for persons who do want to take advantage of these new models and technologies to access and use them in a responsible way.

AYUSHI KHEMKA: We were just discussing how AI is not an element like fire or water, and now I want to unpack more of the fundamentals of AI and the principles governing it. We were talking a lot about sustainability and accessibility and the ethics of it all as well.

If you, as individuals in your own fields and whatever work you do, were to choose one particular guiding principle to govern AI and use it responsibly, which one would it be and why? I want to make it a bit more fun, which might seem a bit of an imposition right now, but we are going to do it: If one speaker chooses a particular principle, then the other speakers cannot choose that one. The idea is that we will have a good range of principles.

GEOFFREY ROCKWELL: I want to begin by saying a few things about principles. I want to say first of all that principles are not enough to guarantee ethical uses of AI. There has been an explosion of principles. There is a paper from 2019 with the title, “Principles Alone Cannot Guarantee Ethical AI,” which talks about 84 different sets of principles. So we have to ask, what can principles do? In some sense it is in the etymology of the word, principia, the start; principles are not the end.

Partly drawing on Dr. Roy, I am going to pick meaningful human control, which I think in the UNESCO principles is articulated as “human oversight and determination,” but I like “meaningful human control” as the starting point.

DARIIA OPRYSHKO: A good question, because of course all principles are very important, although they are definitely not enough to guarantee that AI or the use of AI is going to be ethical.

For me, I believe that the development, training, and fine-tuning of AI systems and models should be first and foremost transparent and explainable. Currently it is acknowledged by many academics and practitioners that these processes lack transparency, which is essential to ensure respect for human rights, including the right to privacy and data protection. Of course, the development and use of artificial intelligence should be responsible, and this means adhering to ethical standards and ensuring that the generated results do not reproduce gender stereotypes or discrimination, or, where they do, proactively addressing them.

Another important thing, which also goes to questions of responsibility and transparency, is that the use of AI technologies should not undermine trust in science, education, and knowledge itself. There is currently a very active debate regarding the potential use of AI in scientific research. In March, Andrew Gray of University College London published a very interesting article suggesting that at least 1 percent of all papers published last year, in 2023, which is 60,000 or more papers, were large language model-assisted. The number of these papers could be even greater. The results of his research sparked discussions about the use of large language models in writing scientific papers.

One argument is that the general expectation is that scientific articles should be written by humans and not by machines. Another argument is that using AI models can be quite risky because they often produce very inaccurate texts, mixing words up and putting them, let’s say, in the wrong order. In science this is unacceptable.

Therefore, I agree that, yes, we can have some principles, but everything depends on how a particular human, a particular scientist or whoever, uses AI technologies. The use of AI should be responsible and should not, in science especially, undermine public trust in science and knowledge. Therefore, I think that transparency and the responsible use of AI itself is the most important of these principles.

AYUSHI KHEMKA: So we have meaningful human control and we have transparency and responsible use of AI in that context.

DIBYADYUTI ROY: I had transparency and Dariia took that, but that is fine. I will start off with a story.

A few months ago, I think in April of this year, there was news about these Amazon shops in North America that use a technology called Just Walk Out. The understanding is that in these shops you go in, there is no human being, you collect your items, and then at the billing counter a bill is magically generated. It turned out that this magical generation of bills happens because there are thousands of workers sitting in India who look at the surveillance tapes and the itemized lists and actually generate that bill.

Why do I bring up that story? I bring it up to say that in what we call an AI system right now—which consists of data work and model work—80 percent of the data work that runs the model is done from the Global South, in countries like India, the Philippines, and Kenya, where the manual data annotation that runs Big Tech in the Global North is concentrated. Mary Gray and Siddharth Suri have termed this “ghost work”—they have a fabulous book on this. There is the example of ChatGPT’s data set being cleaned by workers in Kenya who work for less than two dollars an hour; they see enormously toxic content by which they are traumatized, and there is no form of medical insurance or mental health support that actually addresses these issues.

Of course Professor Rockwell and Dariia also pointed out the idea of sustainability. For example, one normal Google search, by which I mean just an algorithmic Google search without AI, takes one-tenth of the energy that an AI-powered search does. For me, the word that brings it all together is “accountability”: Who is accountable? At the end of the day, who am I able to put my questions to? Algorithms and AI systems are not value-neutral objects. Let’s stop and first break that Wizard of Oz myth. There is someone behind the curtain, and usually it is underpaid Global South workers who are exploited, not unlike how call centers or sweatshops worked in the analog and digital pasts.

I will stop there. Accountability is my word.

AYUSHI KHEMKA: That is a wonderful one because sometimes so many things are being packaged as AI. As Dibya was saying previously, there is the hype train but it is just humans working behind it, and that is kind of sad.

Stuart, what would be your principle?

STUART HYLTON: Given that I have worked in the privacy space for quite some time, a lot of persons would think I would go to that one first, but the primary principle I focus on is inclusivity. We are a smaller region, and sometimes data is hard to get. At times the Caribbean gets overlooked in considerations of sustainable development and inclusive policies at the global level just because we have a much smaller population in the Caribbean as a whole. I do think there is an opportunity for governments and for the technologies being built to consider smaller regions and how our diversity may actually enrich and improve the policies being developed globally.

I look at the ability of AI technologies to build these large models and so on and think to myself, Make sure they include our data as well. When persons get to see the benefit of artificial intelligence and what it produces, it will build greater trust and allow us to tackle more problems, because then we will be able to say, “Well, let’s make more data available, given that we can benefit from these things on a global scale.”

AYUSHI KHEMKA: Thank you. What I could gather from everyone’s principles centers on this: even if we have principles, how are we actually using them? On paper we may have a lot of big words, ideas, and concepts, but much depends on how we actually use them as human beings and on what our interventions are going to be, whether in terms of injustices across the world or in terms of how we make sense of the particular principles we are heading for.

This brings me to my last question for today, and then we can open it up for our audience question-and-answer round. My question is somewhat related to this: Given that our panel is called “From Grassroots Reality to Policy Action,” I would like to talk a bit about policy action now.

What would you consider the most common roadblocks when it comes to policy action? I understand that not all of us on the panel work in policymaking per se, but speaking from your perspective, your discipline, or your experience thus far in whichever field, how does policy action translate into local reality? We were talking about how there are issues around even internet accessibility and the digital divide, and how what is being sold as AI in the Global North is not actually AI but people being underpaid and exploited in the Global South to build that façade of, “Okay, this is actually AI.”

In that setup, given the kinds of inequalities layered into the world, not only Global North versus Global South but also, let’s say, South-South inequalities within the South, where does the average citizen fit into this discussion? Do policy roadblocks affect the average citizen? How can we weave these three components together? Let’s start with Geoffrey.

GEOFFREY ROCKWELL: Thank you again for some great questions. I am going to bring up two issues that I think are important in policy. The first one is, I think we have to figure out at what level do we want policies. For example, San Francisco passes a regulation banning face recognition in the city. Is that the right granularity? Is it something that is important to a city, or did they pass that because they did not trust the state of California or the United States, the federal government, to pass regulations around face recognition? Are they to some extent trying to provoke a discussion, if you will, because regulations are themselves communicative acts?

So there is this question about the level and the granularity. I am not a political scientist, so I do not know what the answer is, but I think this is something we are going to have to struggle with especially since AI systems developed in other jurisdictions could end up having an impact on citizens in Canada. We are seeing right now that Brazil is struggling with X, how to ban it, and how they can regulate it in various ways. That is the first issue I want to talk about.

The second thing I want to talk about is agility. One of the emerging paradigms for how AI should be regulated is a response, if you will, to the problem of AI as a shifting target. You cannot exactly say, “Well, here are regulations that have to do with X, Y, and Z.” We saw in the European Union, and we are seeing in Bill C-27, which contains the Canadian Artificial Intelligence and Data Act, regulations that define the risk. They don’t define what AI is—well, they do have definitions, but they are fairly broad. The Canadian act, which I should say is not yet law, talks about “high-impact” systems; AIs that are potentially high-impact will be subject to different types of regulations. This is an attempt to have an agile policy which can adapt as new uses or misuses of AI emerge.

The problem with agility is that you have to have a knowledgeable civil service. You have to have staff in whatever ministry ends up doing the regulating who have the learning and the ongoing capacity to stay on top of what is happening in industry. I think this is going to be a major problem for all jurisdictions. It is a moving target, and you have to have educated civil servants who have the freedom and capacity to experiment with these things, learn about them, and then speak honestly about them. Thank you very much.

AYUSHI KHEMKA: Thanks, Geoffrey. Dariia, what are your thoughts on this?

DARIIA OPRYSHKO: A good question. I will also bring up two main aspects, which may differ a bit from Geoffrey’s. We began this discussion with the fact that the development of artificial intelligence significantly affects the environment. As we live in a world with limited resources, there is always the question of how to preserve those resources and how to find a balance between society’s interest in preserving the environment and its interest in developing technologies.

We are only at the beginning of the development of policy. Geoffrey also raised the question of jurisdictions. It is a huge question, and currently there is no international treaty that would be binding on all states and would solve this problem. I think we will see a lot of things develop in the future. It also connects with climate change and environmental issues.

However, another no less important problem is disinformation, which divides society. Many examples can be cited in this regard. The example of Ukraine shows how destructive narratives are used for military purposes, and AI in this case unfortunately greatly facilitates the creation and dissemination of disinformation. For instance, many of you might know about the deepfake that appeared at the beginning of the Russian full-scale invasion showing President Volodymyr Zelenskyy of Ukraine allegedly calling on Ukrainian troops to lay down their arms.

There are many other examples, because this is a year of elections throughout the world, and many researchers note a huge increase in disinformation and deepfakes aimed at distorting election results. According to a recent UNESCO global survey, around 87 percent of people are worried about the impact of disinformation on elections in their countries and believe it has already harmed their country’s political life. We know that AI technologies are often used to create and disseminate such disinformation.

Here is the challenge for policymakers: What shall we do with that? One example of such an intervention comes from Slovakia, where just two days before the parliamentary elections in 2023 thousands of social network users shared quite a suspicious audio file presenting a conversation between a journalist and the chairman of a political party taking part in the election. The audio was of such bad quality that experts were able to conclude that there was a high degree of probability that it was AI generated.

The problem here lies in the fact that this audio was disseminated during a so-called “election moratorium,” and it was very difficult for the affected parties to respond. This is a huge challenge for policymakers and for the entire democratic world.

In the European Union, policymakers tried their best, and this year the Digital Services Act, which is aimed inter alia at tackling disinformation online, became fully applicable. In addition, in Europe there are institutional measures, such as the European Digital Media Observatory (EDMO), which brings together fact checkers, media literacy experts, and academic researchers. They collaborate with online platforms and media organizations, and we can see how they analyze and assess disinformation narratives, how those narratives develop, and how they involve AI technologies. Even considering that AI-generated content is very difficult to detect and label as such, EDMO created a pool of AI experts who could potentially be involved in addressing AI-generated disinformation, especially during election campaigns or similar critical periods.

I think policymakers have, of course, to follow the results of the implementation of the Digital Services Act in Europe, as well as activity in the sphere of research and in the sphere of media literacy.

AYUSHI KHEMKA: Dibya, you are up next.

DIBYADYUTI ROY: I will keep it short because we have limited time and I want to hear from the audience as well.

I think those are all great points, but I do want to reflect on UNESCO’s document with its recommendations on the ethics of artificial intelligence. It has this beautiful term, “systematic normative reflection.” That is what they want AI ethics to be. I think that is crucial because systematic normative reflection is part of policymaking, and, as Professor Rockwell was pointing out, there is the great question of at what level: Is it policies at the city level, state level, or national level?

The word for me, if I want to bring it down to one word, is “techno-positivism.” I think that is the greatest block for policymakers. By techno-positivism I mean the inherent belief that technology will take us to something better, that we are all moving toward technology, and that technology by its nature has some essential aspect about it that always makes it better. I also reflect on my former and current training as a digital humanist to say that we do not use digital humanities tools just because we want to; it is because the research question demands it.

I do want to come back and end by saying that I was hearing a talk by Professor Abhijit Banerjee, the Nobel Laureate, who was having a conversation with another economics Nobel Laureate, Paul Romer, and they said that the ratio of inequality in the world, which compares the income of the top 10 percent with that of the bottom 50 percent, has risen beyond comparison in the current time. If AI has the capability to exacerbate that inequality, which it clearly does, as we see all around us and in all of the examples that Dariia mentioned and the points Geoffrey was making, there needs to be an understanding that this is an entirely new context. We cannot apply rules from 50 years ago to the current context.

I think if you take the low-middle income countries and the least developed countries, we have to think about the appropriate application of AI. It is not a “one size fits all.” That is very important.

AYUSHI KHEMKA: Stuart, your quick thoughts on this, and then we can move on to our audience interaction as well.

STUART HYLTON: I will pick right up from Dibya talking about the uses of AI. I understand the policy question. As we think about policy in developing countries, like those throughout the Caribbean, we must ensure as best as possible that policy delivers the inclusivity I talked about before, along with accessibility.

For example, are we using the technology in a way that benefits society and can enforce existing policy? Take the Disabilities Act here in Jamaica, where we want to ensure that everybody is included regardless of their disability. Can AI be used for the benefit of disabled groups who, for example, may need to complete simple services like taxes and are not able to read the forms? Can we use text-to-speech technology to allow them to hear what is there? Can we allow persons to build models that interpret dialects so more persons can hear and understand the information available in these services? Then, as policies are developed to enforce the responsible use of AI, we will be able to use these technologies in ways that benefit more people and that become easily accessible throughout society.

AYUSHI KHEMKA: Thank you so much, Stuart. This brings us to the end of the discussion component for our panel discussion today. We have quite a lot of audience questions. I am not sure if we will be able to cover all of them, but we are going to try.

Our first question asks about possible actions toward accessibility and what our panelists envision in these areas for the near future. If anyone in particular is feeling quite excited about this question, please jump in, but let’s keep it really short so we can take more questions as well. We have so many visionaries here; I am sure we have a vision here.

GEOFFREY ROCKWELL: I think there are two types of accessibility. There is the accessibility of users to be able to use some of these tools. I think part of the success of ChatGPT was the fact that it had an incredibly simple interface. It was like one field. You had to type something in natural language, hit return, and there was some sort of response.

I think the accessibility that is going to be important is accessibility to the technologies of development, of training. To that end, a number of people have pointed out that especially for these large platform or foundation models you need massive amounts of compute, power, and things like that, and these are only available through a small number of companies that have the billions of dollars to do it. To that end, Justin Trudeau, the prime minister of Canada, has in fact announced $2 billion to go to compute so that we can have a made-in-Canada, accessible compute platform for people in industry and researchers who do not want to be doing this using Amazon, Microsoft, or one of the big compute platforms.

DIBYADYUTI ROY: I want to add to Professor Rockwell’s point. There is a difference between the digital and digitality, which I think is very crucial to accessibility. The digital might be more global, and digitality is far more local. This goes to Stuart’s point that we need those questions to be asked in terms of a very local context. Again, accessibility is not a transcendental signifier here. Accessibility is always a local question.

AYUSHI KHEMKA: We have another question: “It can be argued that the internet is becoming more oligarchical every day. Can this be prevented from happening or continuing with AI realistically? If yes, then how, and how can the average person contribute to this effort?”

I saw quite a lot of nodding heads and smiles at this question.

DIBYADYUTI ROY: I think the internet was always a place of control. I love that essay by Tim Berners-Lee called “Long Live the Web: A Call for Continued Open Standards and Neutrality,” in which he talks about his original conversations around not the internet but the World Wide Web, because he makes that distinction.

I would reflect back on a scholar many of us are quite fond of, Professor Alexander Galloway, who wrote Protocol: How Control Exists after Decentralization. It makes quite clear that the infrastructure of the World Wide Web, if you take the distributed computing protocols, the internet protocol, and the domain name system, consists of sites of control, and the W3C, the World Wide Web Consortium, is a private organization. Let’s not forget that.

Let’s not imagine that the web was always a great, autonomous space that only became cluttered with control with the arrival of AI. I would like to believe things are getting better, but I am not entirely sure it has ever been entirely good.

AYUSHI KHEMKA: That is fair. I think it is always good to remember the history and trajectory of how things have shaped up.

I guess we have time for maybe one more question, and then I can hand it over to Cordel: “With the discussion on AI and information accessibility, I would like to hear our panelists’ thoughts on how AI may impact the traditional educational system and which policies should be adopted, or why not, to ensure the educational level is kept to a standard or even improved.”

I think they are asking how AI may impact traditional educational systems and what sorts of policy interventions can be made to address AI entering the traditional education system. It could be for good or bad, and what can we do about it?

DARIIA OPRYSHKO: For education it is of course a challenge. The approaches to how we wish to educate our children or students are changing. I once read that at a university in Lithuania, I think, they came up with an experiment in which two professors of the law faculty made an AI-generated clone of themselves and trained this clone on their materials, articles, books, and so on. Afterward, this clone was used by the students in the tasks the professors gave them.

It was quite an interesting approach, and I wonder what their results were. This case shows that the system of education is beginning to change and will definitely change. As I have mentioned, and as all of us know, the results generated by AI systems are often inaccurate. They may put correct words in the correct order, but this does not mean that in the end the answer will also be correct.

I think we have to pay attention to this aspect and follow the results of such experiments to see how we can modify our system of education to face these new challenges and be more effective, so that the new generation will be more effective as well.

AYUSHI KHEMKA: I think scientific inquiry and curiosity are always a good note to end a discussion on. We have come to the end of our discussion. Thank you so much, panelists, for taking the time and chatting with us. I hope the audience liked it, and I am sorry to the audience members whose questions we were not able to pick up.

I will hand it over to you now, Cordel.

CORDEL GREEN: Thank you very much, Ayushi. Thank you very much to the panelists. It is very clear to me, in listening to all of you on this question of moving from grassroots to policy, that a point of departure was Geoffrey’s intervention that principles are not enough to deliver ethical AI. There is another critical P, which is people, to Dibya’s point. He did not say “human centricity,” but the question around AI and AI governance is also a question about the quality of being human, and we must keep that at the center of our interrogation. That includes ensuring that people in the Global South are not mere consumers in this new phase of development, and it means addressing the problem of people being “digital scavengers,” which is certainly not going to yield a better world.

I understand from the conversation that the challenge to AI being a general-purpose technology is not so much, in my view, a debate about whether this is a general-purpose technology but a question of what lens we are using: hype or aspiration? If we are really to be serious about AI as a “rising tide lifting all ships,” then let us remember the critical P, which is that this is about people, and in this regard people of the Global South should be a priority because accessibility is also about enlightened access. So let us all put our shoulders to the wheel and start to address critical needs such as digital media and information literacy.

Thank you so much for your invaluable insights. We look forward to the other conversations across the world—in China, India, Canada, Jamaica, and Colombia, which will all happen between now and September 27, and then we invite you to join us on September 28. It is a Saturday, but put aside some time to come on Gather Town for the flagship Artificial Intelligence for Information Accessibility Conference.

From Jamaica, big up on yourself. Thank you.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this panel are those of the speakers and do not necessarily reflect the position of Carnegie Council.
