The adoption of AI agents is expanding rapidly, particularly in multilateral and diplomatic spaces. This integration raises a host of new ethical questions and challenges for today's practitioners and tomorrow's leaders.
In this Ethics Empowered: Leadership in Practice event, Eleonore Fournier-Tombs, chief AI officer for New York State, moderates a discussion on this pressing question.
BRIAN MATEO: Hello, everyone. My name is Brian Mateo, and I serve as director of education and special projects at Carnegie Council.
To begin, I would like to welcome you all to our sixth event in the Council’s Ethics Empowered: Leadership in Practice series, which convenes scholars and practitioners to discuss pressing moral issues, reflect on their careers, and offer insights to young leaders. In today’s panel we will be exploring the ethical questions and challenges relating to the use of artificial intelligence (AI) agents within multilateral and diplomatic spaces.
It is my pleasure to now introduce you to our moderator for this event, Eleonore Fournier-Tombs, chief AI officer for New York State. We are also honored to welcome our panelists, Alexis Crews, senior fellow at All Tech is Human, alongside Eduardo Albrecht, CEO and co-founder of Dublr.ai and professor at Mercy University. As we convene in Carnegie Council’s Global Ethics Hub in New York City, I want to welcome you again and will now pass the program over to Eleonore.
ELEONORE FOURNIER-TOMBS: Thank you so much. It is great to be here today. Before I start, I want to ask a question: Who here has used a generative AI tool? When you used that tool, how many of you used it as a thought partner—you ask a question, you have a creative idea, and you want a how-to list for implementing your idea, or you want further discussion of the idea? That is my personal favorite use of generative AI.
How about for drafting? You have an assignment due, you maybe have it write a few things for you, you have a report due, you get an outline, a few ideas, some speaking points if you are giving a talk? Many of us.
The way that everyone here uses AI is the same way that policymakers and diplomats also use AI, so you can think about them using it to draft speaking points for the UN General Assembly, to summarize a report on the Sustainable Development Goals, or to edit a text because deliberations are happening in their non-native language. I have been teaching a lot in the last few years, and when students submit assignments that are written with generative AI you definitely notice signs in individual assignments, but I think Eduardo will agree with me that you actually notice a pattern where the same assignments appear over and over again: you get the same assignment ten times, with the same arguments written in different ways, the same examples, and the same recommendations at the end.
That experience has made me think about what that means in multilateralism. The risk of having people use AI in an implicit, personal, intimate way without being intentional about it means that you have various countries now having the same position or having the same arguments. You also lose the cognitive engagement of policymakers and diplomats, who do not need to engage in the same way in political deliberations because AI has written their speech, it has written everybody else’s speech, and AI also summarizes the outcomes of those speeches.
That risk and that position making have made me think a lot. I have an article coming out with the Carnegie Council, and a lot of my thinking has been about what the potential impacts of this could be on multilateral affairs.
I want to talk about a couple of principles that are embedded in multilateralism. One of them is state sovereignty. State sovereignty is the right of each country to participate freely in the international community and, as long as they comply with international law, to be responsible for what happens within the boundaries of their own country.
The next thing is self-determination. Self-determination is when the people of that country are responsible, hopefully or in principle, for what happens within the boundaries of that country. In a democratic process that would be the case.
The last thing is authenticity. Authenticity is the degree to which a political representative or a diplomat truly is genuine when they are representing the values, positions, and goals of their country. I will leave you with those thoughts.
AI tools, as I mentioned, can be used implicitly, but they can also be used explicitly in global deliberations. Explicitly means: Is there a way for AI tools, instead of being used to replace our cognitive functions and replace our representation, to be used for moderation and to call on people who may not usually participate? Can it be used to increase the quality of global deliberation?
Today we have two very different perspectives on this, almost a debate—no pressure—divergent perspectives, hopefully complementary but interesting sides of the picture. I am very interested in speaking to our two very interesting thought leaders today because in New York State I am in the process of deploying a generative AI tool across 51 executive agencies and 130,000 people. It is very complicated, and I feel like having the opportunity to speak about what this might mean and what deploying AI in government and in international organizations might mean from a deeper, more nuanced perspective is needed. Thank you so much to Alexis and Eduardo for joining me today.
I will first turn to you, Alexis. You have argued notably in your recent TED Talk, which I recommend everyone watch, that tech companies are accelerating the erosion of democracy through the capture of minds and ideas. Could this be happening specifically when generative AI tools are used to draft political speeches and decisions, and if so, is there anything that can be done by multilateral organizations and governments?
ALEXIS CREWS: Thank you for that question, and thank you to Carnegie Council for hosting this wonderful event. I am excited to spar with you.
That is a great question. I think there is a difference between politicians and political figures using generative AI to create political speech and then using that same tool to create governance documents. The United Arab Emirates, for example, is employing AI to speed up deliberation, legislative reviews, and make processes faster. The same thing is happening in Albania, where they are using AI to make sure their national laws agree with EU standards. In some states we are seeing legislators speak openly about using AI to review documents.
That is a great question. I think there is a difference between politicians and political figures using generative AI to create political speech and then using that same tool to create governance documents. The United Arab Emirates, for example, is employing AI to speed up deliberation and legislative reviews and make processes faster. The same thing is happening in Albania, where they are using AI to make sure their national laws align with EU standards. In some states we are seeing legislators speak openly about using AI to review documents.
Some of the dangers here arise, especially when we are thinking about political speech, when a politician or political candidate is writing something that could be filled with misinformation, something that is used to persuade a specific audience, and then you use that same tool, which has been tracking your patterns, habits, language, and the data points you are using, to create different articles of governance. We are seeing some instances of that today, and that is because there are no true unified regulations around this, and on the global level you have so many different systems that people are using. There is no standard and no checklist to ensure that these entities are actually using ethical and responsible AI.
When I think about risks, I am thinking about invisible biases, hallucinations, legally unsound language that is delivered and smoothed out to make it seem like it is actually possible, and then political flattening, which is something we have seen more often with our more polarizing figures, but multilateral organizations can do a few things. I think they can require disclosure whenever AI is being used for drafting any type of political speech.
This is something that multilateral organizations can do. This is a requirement they can set. This should not fall on the companies to do. They can mandate human review for legal integrity and bias at every single step of the way.
We understand the political climates and the messages and nuances that these systems do not understand. It is beyond “human in the loop”; it is mandating that there is a group of people making sure that everything is reviewed before it is shared publicly, and then building an external audit capacity. We have external audits for different sectors, but we do not actually have one for government use of AI tooling that works globally and matches what we are seeing in the West and the Global South. If we had some type of alignment there, we would be able to focus on transparency and on making sure that these tools, and how people use them, will not be used to influence particular agendas. I think that is where we are heading.
ELEONORE FOURNIER-TOMBS: It is so interesting. I have a quick follow-up question: You mentioned political “flattening.” Can you expand on that?
ALEXIS CREWS: I wrote a definition down because I have a few different views of political flattening, and I think you will ask me a question where I use a different frame, but for me political flattening means “a distinctive judgment of elected officials that is compressed to be more digestible to broader audiences.”
Think of your most extreme politician on either side. We are thinking globally. Their message gets distorted in a way that makes it more powerful. Red flags that you would normally hear when you hear speeches are no longer there because the speech has been changed. That is how I would think about and define political flattening in this instance.
ELEONORE FOURNIER-TOMBS: And you think AI is used to do that on purpose?
ALEXIS CREWS: I think even if you are a low-skilled user, you can use AI to do that, of course.
ELEONORE FOURNIER-TOMBS: Eduardo, in your book Political Automation: An Introduction to AI in Government and Its Impact on Citizens, recently published by Oxford University Press and which I have at home, you argue that governments increasingly use AI in decision making toward citizens and as a democratic counterweight: “AI twins of individuals could be used to model policy preferences and impact and even increase representation in democratic governance,” which is very interesting and a little controversial. Do you see a similar model being possible in global governance, for example, in a general debate on the next round of the Sustainable Development Goals?
EDUARDO ALBRECHT: That is a great question, and I will work toward it, but first I want to say thank you to the Council for having me, and it is a pleasure to be on stage with both of you.
While I was preparing for tonight’s conversation, I obviously logged onto Claude.ai and put in the prompts, then I said to myself: “No, wait. Maybe this is the time to not do it.” I had this image of organic food. I don’t know if that is a metaphor that can work, but I said, “No, I want this to be organic.” So, just to let you know, I did not prepare through AI. This is organic.
ELEONORE FOURNIER-TOMBS: This is a non-AI panel.
EDUARDO ALBRECHT: Which means it might be a bit rough around the edges.
Alexis, I like how you talked about political flattening, and I believe that is in fact the risk, but there is so much more behind that term that we need to unpack and understand. One danger is that there is a kind of cognitive convergence. As we ally ourselves with these machines there is an overlap in our thinking and their thinking, and this means that everybody’s thinking is now becoming convergent, in a way like my thinking is similar to your thinking, et cetera. That is a real risk obviously for “Where does thought lie?”
But there is also the issue of hidden biases because those machines are full of biases. You just cannot get around that. All thought is biased essentially, so not only are we converging, but we are converging around the same biases.
On top of that, especially when you look at the way these machines are used in politics—and I call them political “machines” because I hope it gives you the right mental image—is that they are making decisions. At the end of the day, it is not just writing speeches; it is making value decisions around what is right and wrong, and those decisions have consequences on our real lives. The risk is real.
When it comes to the erosion of state sovereignty, as you correctly described, Eleonore, the real question is: How accountable are the political systems now to citizens? If in fact politicians or bureaucrats are making decisions in this convergent and biased way, do they lose their function of accountability to citizens? Are they no longer accountable to citizens’ real preferences and views? That is a crisis that we are seeing at different levels of governance.
My concern is, what is happening to citizen voices? I am an anthropologist, so at heart I am curious about people’s opinions. I believe the beauty of opinions is that they are different, sloppy, and messy, and that the deliberative process includes a lot of back-and-forth and different ideas, so what is happening to citizens’ voices with the unleashing of these technologies?
But I am also an activist at heart. I believe that citizens must mobilize, otherwise political systems will atrophy and our rights will be eroded. That is why the book lands on the slightly provocative idea of finding ways to include citizen voices in this new reality of political automation.
This is where it gets provocative, so please bear with me. If in fact politics is being automated, does it not make sense that we automate deliberation in one way or another? Maybe even that process of discussing and deliberating as a society or a community can also leverage AI. This is where I land on the idea of “digital representatives” of ourselves in the political debate that can somehow represent our preferences as individuals.
The company that I have co-founded, which is called Dublr, is working to do just that. We are creating synthetic samples that represent citizen voices, hopefully in a faithful and representative way, and the key idea here is that citizens should be included in that process of calibration of these synthetic samples, so that it is not only some company that is building synthetic samples to represent us in this deliberation but there is also an institutional conduit through which we as citizens, as a community, can in fact own those synthetic samples and what they say and don’t say.
ELEONORE FOURNIER-TOMBS: Do you see that also being possible at an international level, and what would that look like? Would it be the preferences of a country or the preferences of big groups of people, for example?
EDUARDO ALBRECHT: The book breaks it down by levels of governance. Governance occurs at the local, state, federal, national, regional—European Union—international, and multilateral levels, like the United Nations. All these levels link up, and there are many different ways in which they link up.
The fascinating things we are seeing are happening mostly at the local, municipal level. There is quite a lot of experimenting with AI deliberation and leveraging these technologies to better amplify citizen voices and using those voices in decision making at the local level.
I think these types of institutional transformations occur over years if not decades and that we will see a linking up of different levels, especially since the innovation is happening at the bottom, at the local level, and at the top. The United Nations is actually quite innovative when it comes to considering how these technologies can be useful, so I imagine the innovation might come from the top and bottom to eventually permeate the entire system one way or the other.
ELEONORE FOURNIER-TOMBS: Do you want to respond?
ALEXIS CREWS: I would love to. We are seeing examples of AI being used in deliberation in I believe Chile and Taiwan right now on a national level. I think in Chile recently over 500 participants were able to share their views and do some type of voting using AI tooling and summarization, et cetera, around pensions and healthcare.
In Taiwan they were able to get thousands of people to share their inputs on upcoming regulations. Those are just two examples, and I am sure there are many more examples where it could rise up to something that a multilateral organization could use in a few years but maybe not now, but the idea of digital “twins” is very interesting.
ELEONORE FOURNIER-TOMBS: Not to get too sidetracked here, but one of my former colleagues at the United Nations University Centre for Policy Research (UNU-CPR) wrote an interesting report that I love about “hidden voices” in Sustainable Development Goals, and he categorized the kinds of people who most benefit from things like global goals and never participate. I find that framing very interesting in terms of how to get those voices to participate or be heard in global deliberations.
Let me ask one more question to Alexis: The foundational principle of global deliberation, particularly since the United Nations was created, is enabling peaceful relationships between countries. In your most recent Substack, you argue that we are entering a phase in international warfare where the systems of daily life, such as power, water, hospitals, payments, and logistics, are now part of the battlefield, and AI agents specifically are increasingly assisting attacks on critical systems such as hospitals and energy infrastructure.
The United Nations has put in place some “light-touch” global AI governance frameworks, mainly because of the relationships between countries and what is possible for them to agree on right now, including a newly launched international scientific panel on AI, which will share its report in July at the AI for Good conference in Geneva. How do countries build accountability frameworks for a technology that changes faster than any treaty process was designed to handle?
ALEXIS CREWS: That is such a good question, and a hard one, because the technology is rapidly evolving. Any frameworks we have seen, by the time we get there, are outdated, and I think that is a known known.
When I think about potential attacks on critical infrastructure, which is a global issue, I think about bad actors, irrational actors, and actors who have access to a ton of resources and do not play within the normal political sphere. They are not bound by the same ethics and principles that you would think nation-states are traditionally bound by, and that poses a greater threat because you have certain countries targeting critical infrastructure. It is what we are seeing in certain countries in active warfare.
I think setting up frameworks that think about verification architecture is important, and we have already seen some of this verification architecture in arms control. That is logic built on observability, auditability, and traceability, and it is something we currently do not have when it comes to chip imports and exports. It is also something we currently do not have when it comes to the use of AI agents and generative AI by nation-states.
The other thing is mandatory incident reporting for agentic systems. Right now any type of reporting is voluntary, and we have certain companies that publish model cards and talk about some of the instances where bad actors have tried to use their systems to carry out X, Y, and Z. Anthropic recently published a report that went through some exploitative use by hackers as a good example, but that is all voluntary. How do we bake that into a framework that can be used across the West and Global South and honestly just make sure it underpins all our governance systems when we are thinking about this specific area?
Then, there are red lines around human oversight for agentic and gen AI use. I work in responsible AI, so a lot of this comes from responsible AI frameworks but also from their implementation, because it is one thing to have frameworks and another thing to implement them so that we are actually preventing attacks. These are small things. They sound big, but if we are thinking about traceability, we know how to do that; if we are thinking about auditing, we know how to do that. This is industry standard for financial institutions. If we take playbooks from other industries and bring them over to how we are thinking about governance for AI, we can build out the frameworks to prevent critical infrastructure from being attacked by bad actors using generative AI and agentic AI.
ELEONORE FOURNIER-TOMBS: From your perspective it is also making sure that when AI is used in critical infrastructure that it is done in a safe and responsible way, but on the other side we cannot control as much how AI is used by bad actors.
ALEXIS CREWS: Exactly, and because we cannot control that x factor, then what do we do to ensure that we have everything in place to make sure our critical infrastructure technology is up to date? A lot of critical infrastructure in the United States, for instance, is not updated in terms of the technology, so there are a lot of gaps there. How do we make sure that they are operating at a level where we can actually offer a defense?
I think these are frameworks that any nation-state could benefit from. Some nation-states are bad actors themselves as well. Sometimes we can predict when certain countries will target infrastructure based off of what is happening globally, but I do think we can do all of this in coordination.
ELEONORE FOURNIER-TOMBS: You mention also the question of implementation, and I think that is interesting because we are at a time now when we are moving from principles to implementation. We have had several of these conversations over the last two years, and I know the first one was about ethics and principles and saying, “We do not yet have policies in place the way we would like to have them.”
Moving from the principles perspective to real implementation is very challenging. I will give an example for New York State: There is an acceptable-use-of-AI policy for the government that sets out several principles: Human oversight is one of them, so no decisions can be made autonomously by any agency without human oversight; there has to be transparency; there has to be equity and fairness; and there has to be cybersecurity and intellectual property protection. That is great, and that has been in place since 2024.
What we have to do this year is say, “Okay, your agency”—say, the Department of Motor Vehicles—“wants to deploy an AI tool. How do they get from these principles to actually doing the oversight?”
In reality it is not very glamorous; it is just a bunch of checklists, questionnaires, going through, getting proofs, checking your models, and making sure it is very systematic. Then we have to figure out a platform so that every agency can report back every single check and balance that they did, put it in a dashboard, and hopefully the chief AI officer can see where high-risk deployments are of AI in the state. This is ideally what would happen. It is a lot of work actually.
ALEXIS CREWS: It is so much work. One of the things I would love to see is a “center of excellence” (COE), where people who went through AI safety training, like AI ethics and responsible AI training, are appointed to the COE, and they are the ones in charge of ensuring that everything is compliant and everything matches the value proposition. There are a lot of checks and balances. It is a lot of checklists.
All of that ensures that nothing bad happens on the back end. If you start off with a vendor checklist—let’s say you are interested in onboarding Anthropic, as an example: What does your checklist look like before you start integrating the system into your company or government? That takes a lot of work, but once you have it in place it works wonders. It does take time, but not six months; it takes two or three weeks, and once you get that done you are in a better space. That is how we think about cybersecurity. That is how we think about protecting data, because data leakage is also a big issue, and data protection is something we are not talking about but that should play a role in how we think about AI governance.
ELEONORE FOURNIER-TOMBS: I find it very interesting that some of the solutions to these big, big questions of the erosion of multilateralism are actually very detail oriented and very intentional, time-consuming, and specific, and the more we pay attention to what is happening with tools the more successful we could be.
Eduardo, you mentioned earlier that you recently created a company that seeks to provide governments with decision support through AI modeling. Alexis has concerns about Big Tech capture of decision making and democracy. How would you address those concerns so that the AI tools can be used to enhance democracy and multilateralism rather than the other way around? I know you touched on it a little bit, but I think we can go in more depth in terms of how these AI tools can be used to enhance democracy.
EDUARDO ALBRECHT: That is a tough question. The company, by the way, has just started, but the impetus behind it is that governments are going to be using these technologies; in fact, they already are, because these tools are so efficient and available. As governments increasingly make use of, for example, synthetic samples or digital twins of populations to better—hopefully—provide services to those populations, the question is, as I mentioned earlier, what kind of input do citizens have?
What you two were describing earlier is fascinating to me because it is the nitty-gritty of how it works. I know that about the abstract level, but it is fascinating to hear what is happening in reality.
It feels like we are moving from principles to policies, and that is very encouraging. You can see how people are starting to adapt beyond just ethics and actually start doing stuff, whether it is checklists or the different ways you can audit a tool before you use it.
What I am looking at is, what comes after policies? My idea is that institutions come after policies. The trajectory is first ethics, then policies, and then eventually institutions emerge. That is the way it always works.
Those institutions are important for us because institutions represent us as citizens, so what new institutions might emerge out of the dynamics that you have been describing over the next several years? One of the institutions that might emerge is some way of reconnecting these tools to populations. That is where these synthetic samples come in.
If these synthetic samples are representing populations to governments, where do citizens fit in? The danger is that they are cut out, and I think that is the danger that Alexis is worried about. Institutions will be the only way you can reintroduce citizens into that conversation.
Beyond principles and policies you need institutions. I like to describe them as a “conduit.” Think of it as a way we can tether our individual preferences and ethics as individuals, our politics, to this new state apparatus that is being created using AI.
What those institutions look like I think is a fascinating question, and we will see, but this company is trying to do that by providing an option to governments that not only creates the synthetic samples but also connects the synthetic samples to populations.
ELEONORE FOURNIER-TOMBS: Can you explain a bit more what you mean? Would it be an additional agency, for example, in a government at all levels that would be responsible for digital deliberation or AI-driven deliberation and would inform decision makers based on the outcomes of those deliberations?
EDUARDO ALBRECHT: Exactly, obviously leveraging all sorts of AI technologies and virtual online environments. Imagine it quite simply as you as an individual being able to know what is happening and being able to know how you are represented: What kind of data does the government have about you? How is this data being used? What kinds of decisions is the government using AI to make toward you?
In the book I argue that we need to have a “right of access” and a transparency that will allow us to know what is happening. That is a first step. This could be an app on your phone. I have no idea; it could be many different ways.
The second and more important right is: Who is doing the thinking? Earlier we talked a little about the cognitive overlap between ourselves and machines, whether it is politicians using these machines to make decisions at the multilateral, state, or local level, and being able to clearly say, “That is not what I am thinking,” a kind of right of thought.
We have freedom of speech; imagine a freedom of thinking. That is going to be an important issue that we figure out from a legal perspective. You have to be able to say, “That’s not my thought.” Those are some abstract ideas of what it might look like, but I think it is up to all of us and history to see how it pans out.
ELEONORE FOURNIER-TOMBS: In a way it is almost like a semi-participatory democracy, where people are represented or modeled on an individual basis and then policymakers are testing policies with these twins, but citizens are actually accessing the outcomes of those data too and can call out if the models are not truly representing their perspectives.
EDUARDO ALBRECHT: Absolutely.
ELEONORE FOURNIER-TOMBS: What do you think, Alexis?
ALEXIS CREWS: I am not quite sold. It is very interesting.
Two things come to mind. I know I talked about flattening earlier with respect to extreme politicians and their views being flattened, etc.
Now I am thinking about marginalized voices: How do you make sure that communities that do not speak one of the 50 languages that these models are trained on are represented? How are we accounting for that?
I am also thinking about government access to data. Are we sure that citizens want their data used by the government in this way, or is this something where citizens are opting into a different platform? That is where my mind goes.
EDUARDO ALBRECHT: Very valid concerns. On the language issue, research has demonstrated that these experiments actually help include individuals who do not speak majority languages because AI tools can in fact quickly translate from one language to another. Let’s say in a community in which a deliberation is happening in English people who speak Spanish are cut out of those conversations. This would potentially help in that sense.
About the issue of the government having too much data, I argue that it already has too much data. I wish we could stop governments having too much data, but that is a different conversation around surveillance and data rights, but this would not add more data for the government; this would just be a conduit, a way, through which the individual can know what data they have and be able to influence that data, whether it is used or not, and what kinds of decisions can be made toward an individual.
ALEXIS CREWS: That is helpful. The Mozilla Foundation is creating a data set of almost-forgotten languages to continue to expand on language capabilities for models.
ELEONORE FOURNIER-TOMBS: We are nearing the end of this part. It went very fast.
When I think about what you both have said, going back to the original question of the potential erosion of multilateralism and democracy, the uses of AI that merge everybody’s perspectives together, the marginalized groups that are no longer included or are even less included, and this weird space of whether human beings are really debating, participating, and being represented, I think both of your perspectives are about leaning in.
One is the importance of risk assessment, disclosure, and being very intentional: something as simple as acknowledging who uses AI, making sure it has been thought through, that the use is disclosed to at least the political group or the Permanent Mission and discussed in depth, that people are well trained, and that thorough risk assessments have been done. That is one perspective.
Then, Eduardo, what you are saying is to go even further and experiment, to see whether AI could give us capabilities that are not possible otherwise. Instead of thinking only about AI replacing our cognitive functions, or the cognitive functions of policymakers and the nuance they might bring to a discussion or decision, you are asking: could we look at what is not possible now in a deliberation, such as the inclusion of every citizen or resident in a country or every interest group in an international organization, and think about it that way in order to potentially have better outcomes?
I think those are topical and important perspectives. Eduardo and I have had so many discussions on this that I always find extremely interesting because it is so different.
Alexis, I like your perspective also because you bring so much experience and so many different examples in the importance of responsible AI and how massive a task that is for people to put into action.
Before turning to the audience for questions, I will ask one last question: Do you have one or two resources that you would recommend for the audience or anyone who wants to explore this topic further?
ALEXIS CREWS: The Organisation for Economic Co-operation and Development put out a report in 2025 about generative AI in democratic spaces. I think everyone should read it. It is not West-looking, which I found helpful. It looked at Latin America and parts of the Middle East and North America in terms of how they are using AI, which I think we should start thinking about as we are moving in a space where a lot of innovation is happening outside of the United States and we have more innovation and great use cases that we are seeing outside of the West.
Also, there are many podcasts that I listen to, so you can find me later and I can give you a list so you can stay up to date.
EDUARDO ALBRECHT: I would echo Alexis in saying that there is so much happening outside of the West and in the Global South. I recently joined the Africa Research & Innovation Centre as a board member, and the things happening in Kenya, to take a narrow example, and in Africa more broadly are quite fascinating. I found while researching the book that some of the most innovative thinking is actually coming from the Global South, in particular from Africa. Some call it “leapfrogging” when you skip a certain technology and then take a fresher look at what is happening now, but from a creative perspective too, around institutions and how to govern AI, the work coming out of Africa is quite interesting and innovative.
ELEONORE FOURNIER-TOMBS: Wonderful. Thank you both. We are going to turn now to questions. We will start with online questions.
ALEX WOODSON: Thank you. I am Alex Woodson from Carnegie Council. There are lots of great questions in the chat. Here is one to get started: “What groups might need to work together to build a comprehensive standard, maybe one that works across the board, or does it need to be per sector?”
ALEXIS CREWS: I think it needs to be across the board. If you are looking at the pharmaceutical industry and the financial industry, they are already heavily regulated globally, so they have a lot of standards in place, but what is missing is something across all sectors, and I think that would actually be the most helpful way, something that supersedes anything that nation-states pull together.
ELEONORE FOURNIER-TOMBS: We will take some questions here.
QUESTION: I have a question about what you were talking about with AI being representative of citizens. I want to get your thoughts on how that would address the injustices that are already happening with representation, for minority groups, people without internet access, or people who do not even have a computer or phone. How would you bridge the gap between people who are already not being seen enough and the voices that are already being heard?
EDUARDO ALBRECHT: Thank you for that excellent question. The real danger of using these machines and tools, whether at the multilateral or local level, is that decisions are made in a way that is further and further away from people’s real problems and interests, almost as if you are creating a kind of fantasy world of what people think and do, rather than what is actually happening.
The risk is that governance as a practice becomes untethered from the populations, and we see that risk is now increasing. Most of my book actually looks at the dangers of this decoupling.
To bring it back to global ethics and the multilateral system, that is a real risk. Imagine diplomats globally discussing and debating in a way that is untethered from the population’s real needs. It becomes an issue of accountability and authenticity, and it can even be dangerous because maybe decisions are made that are completely wrong and harmful while the politicians think they are the best decisions.
That is what I want to address, and I want to address it by leaning in, as Eleonore pointed out. The best way I can think of leaning in is to say, “All right. What machines are being used and how, and what are the ways in which different pockets of the population are being excluded or not being excluded?”
It turns out that there are technologies and tools out there that can amplify those marginalized voices. The key again is to have some form of inclusion, either by representatives of those populations or by those populations themselves. That form of inclusion, I argue, has to be guaranteed by an institution. What that institution looks like is, again, not my job to say, but we must ask for institutions.
Technology is always harmful, whether it is the Industrial Revolution or this AI revolution. That is the way we operate as humans: We unleash the technology and see what happens; we pollute the rivers and then clean them up later, but we always clean them up by creating an institution that cleans them up. That is why I am arguing for institutional reform.
QUESTION: Thank you for a great talk. My name is Eli Karetny. I am acting director of the Ralph Bunche Institute. I teach at Baruch College, and the papers I get are some of the best undergraduate papers I have had in many, many years, and yet they all sound the same. Eduardo, you talked about cognitive convergence, and I am seeing that in the classroom.
When students are pressed on this, they respond with an adamance: “Yes, that’s what I thought. It was said differently by the AI. It helped me rhetorically enhance what was my thought. This was my thought.”
Alexis, you talked about political flattening. I wonder if there is something like the opposite happening at least at some levels, where there is a rhetorical enhancement going on that corresponds to a loss of critical and independent thinking. We talk about Big Tech “capture”: Is there a machine capture of our thinking happening across the board at all levels?
Eduardo, when you talk about the right of thought, that is a great idea, but what does it look like?
EDUARDO ALBRECHT: Actually the United Nations has some thinking around that. Our legislative systems and constitutions all enshrine to different degrees freedom of speech because it was assumed that you could not get into people’s heads. Those laws were written when it was not possible to think for someone, in a way. That is why I argue that we need to look at that as a possible law we want to enshrine somehow in an updated constitution.
ALEXIS CREWS: Great question. Social media is a reflection of the world that you want to see, and the more you engage with AI the more it gets to learn your thought patterns, your habits, how you write, how you speak, and your greatest fears. It depends on how you are using it, but then it becomes an echo chamber. You end up relying only on AI to feed you information that double-clicks on your thoughts. It is very rare that it will push back unless your thoughts are suicidal or you want to pursue creating biochemical weapons, which are extreme cases. What we are losing is autonomy, agency, and our own way of thinking, and we are falling into groupthink that is being fostered by the different large language models (LLMs) that we are using.
I did not use a computer until I was in third or fourth grade, but now you have students who have only ever known this technology and this way of thinking. So the answer is yes, but there is also no reason for the technology companies to change that model, because that is how you get more engagement and how you buy more tokens. Eventually OpenAI will have ads on their platform, something that they are piloting. It will never change.
QUESTION: Hello. I am Jennifer Barnes-Balenciaga. I am actually graduating from Baruch College this spring. I went to the Borough of Manhattan Community College for my associate’s degree, a double major in political science and sociology with a minor in law and policy.
With all that being said, I have a question about honestly the opposite of what we are looking at in schools with the use of AI. Instead of running away from it, is there a possibility to embrace AI, perhaps giving teachers a way of utilizing the platform and the information being put into it, perhaps regulated, so that students can use it as a tool and learn to distinguish between what comes out of this system and what is being taught within these institutions and classrooms?
ELEONORE FOURNIER-TOMBS: I think so. Everything that you said, Alexis, I agree with. I think there is a lot of risk when it comes to AI in our thinking, but I also think it is important to learn how to use those tools.
I have kids in school right now, and they take digital literacy classes, and their level of understanding of what happens on the internet in sharing pictures, for example, and not believing everything that you read online is so much higher than mine was when I first started going on the internet. I think it is important to think about how we want to learn how to use those tools.
In education, going back to this idea of where we don’t have the capacity, I think we need to move away from anything where we are just replacing functions that we can do perfectly fine or much better without AI but thinking about what is missing. In education one thing that would be missing would be educational resources, and some of the things are not incredibly complex.
If I am a teacher, can I use AI to generate an activity or coloring or something for my students to bring to light a concept I am teaching them that would be difficult for me to find online? There are websites where you can get teaching materials for K-12. It might be hard, you might have to pay, and you might not have the resources, so you could use AI to do that, for example, or you could teach students to use AI tools to do some research or develop their critical thinking by going back and forth.
Some of the issues with the echo chamber can be resolved by using different kinds of tools to do more research and having tools trained on specific data sets and trained to reflect not what the user is telling them but actually what the curriculum is. I won’t go into too much depth, but I think there is definitely potential there if we continue to look at where the gaps are.
Always in my work what I focus on is AI for good, where there are so many gaps in our delivery of “good,” whether it is at the United Nations or at national or subnational levels, and where all those gaps are happening where people fall behind and are not getting access to humanitarian aid and social services, and then where can we potentially deploy AI tools to fill that gap.
QUESTION: My name is Juan Acevedo, also from the Ralph Bunche Institute. My question is about hierarchies and the hierarchies that AI can produce.
Currently if we look at global governance and multilateralism, the hierarchies are basically created out of procedural expertise. Those Permanent Missions that know the rules of the game are the same ones that produce more influential outcomes. Let’s assume that AI is an equalizer, giving small nations access to produce important global legislation.
How do you see the future? How do you see hierarchies emerging out of that? If it is possible that then, as everybody is doing the same thing through AI, maybe there are some other features that become more valued.
EDUARDO ALBRECHT: That is interesting thinking. I would like to start by putting AI in its place. We talk way too much about AI. We have to remember that AI is the latest in an arc of transformations that started with the digital revolution: information was digitized, then put online on the internet, which then became the cloud; you remember Big Data; and now the latest fad is AI, which will last as long as it has to.
But if you look at this arc over the past 50 years, what it has created is a kind of “datasphere” that lives alongside all of us. It is right here in this room, it is in the air, in our phones, everywhere, and this datasphere is pregnant with information. That information has a huge democratizing impulse that it can give, and this I believe will disrupt those hierarchies that you are talking about.
You talked about the classroom and the college student. Frankly my students are getting really smart. My kids are really smart, smarter than I was at that age, and it is because they have access to that datasphere. All that knowledge is in this plane, and it is alive. It is not just a library where you have to pick up a book. It moves, it does its own thinking, and it produces more knowledge. That datasphere is incredibly revolutionizing, and it will disrupt power hierarchies.
The question is this: What happens if, instead of being democratizing, certain corporate actors try to stake out portions of that datasphere territory, defend them, and make them theirs? They will obviously have the power to ascribe values to things, like, “Thing A is good; thing B is bad,” and even to shape how people access it so that they only see thing A represented as good or bad. We need to find ways for all of us to remain participants, whether as citizens at the individual or community level, or as companies, organizations, or academics, and to keep that datasphere free and alive somehow.
ALEXIS CREWS: In a perfect world the infrastructure has been built, everyone has access to the same types of models, and there is not a big disparity between the West and the Global South, a term I hate using for the different markets that do not necessarily have infrastructure because of X, Y, and Z.
I actually think there is more room to build, grow, and have consensus. If we are trying to get everyone onto the same playing ground, then that means we can address other issues. I am thinking of the global health crisis, the climate crisis, and using these tools—yes, data centers help create the climate crisis that we are moving into—to find solutions because we will all be operating from the same playbook. That gives us more space and hopefully time to be creative about the solutions that we need to protect the next generation.
That is my hope, that we all end up globally on the same playing field, and that takes a lot of different factors into account. It means that middle powers, which maybe have played an outsized role in development in certain regions, could level up. It also means that technology companies are no longer just exporting Western values to different regions as the only AI being used, so that it becomes collaboration rather than extraction.
It also means that we are ensuring that everyone is at the table and is being reflected in the AI that is being used because now it is being unevenly weighed. If we can get to that ideal state, I think we would have more room for solutions and we would move faster. Even with human review we could move faster and stop spending weeks on deliberations, for instance, and move faster with writing policy and getting things off the ground.
ELEONORE FOURNIER-TOMBS: One last question and then we will wrap up.
QUESTION: My name is Carolina. Thank you so much for the talk. I know you both have different views on how AI should be employed. Do you see a legitimate role for using AI to prioritize cases, especially in reporting mechanisms, or just for categorizing? I know we say, “Oh, let’s keep humans in the loop,” but sometimes a decision during a crisis needs to be fast, and AI could potentially help, especially in places where there is not enough infrastructure or personnel to address it. How do you feel about AI categorizing and prioritizing topics and issues?
ALEXIS CREWS: It is already being used. I feel that if you train it with the right classifiers, then I am okay with that because decisions do have to be made, but it is already happening. I think it is based off the training data and the safeguards that a company has put in place to ensure that the system is focusing on the right types of priorities in terms of harms and different things. That might not be satisfactory.
EDUARDO ALBRECHT: I am happy that you ended with this question because while I was walking over here, preparing my organic talk for you all, I said, “Let me have an original idea emerge,” and I had one. It is this: Imagine if you had an LLM that did not have just a human in the loop but had a community in the loop. Imagine if everybody had access to this LLM and how it was trained, with very simple mechanisms where you could say, “These are my political preferences,” such as, “I love gun rights” or “I love Mamdani,” whatever it is, you could conversationally give your whole worldview in a five-minute spiel, and then it took your ethics alongside everybody else’s ethics, and in some way that LLM became your community LLM, which you could rely on to more or less represent the collective preferences of that group when it has to make those quick decisions that do not include the human in the loop because they have to be made quickly.
ELEONORE FOURNIER-TOMBS: There is certainly a lot more to discuss. Thank you, everyone, for being here. It was a pleasure to speak to you both and hear your interesting questions. We will be around afterward for the reception if you have more questions. Thank you to Carnegie Council also.
Resources
“An Ethical Gray Zone: AI Agents in Political Deliberations,” Eleonore Fournier-Tombs, Carnegie Council, November 13, 2024
“Forecasting Scenarios on the Use of AI in Diplomacy,” Carnegie Ethics Accelerator, Carnegie Council, May 15, 2024
“Geopolitics in the Age of General AI,” Values & Interests, Carnegie Council, November 25, 2025
“The Trade-offs of AI in Diplomacy,” Carnegie Council, February 12, 2024
Discussion Questions
- What first steps should be taken to build a credible and inclusive framework for AI cooperation among governments, civil society, and the private sector?
- How should next-generation leaders make ethical use of AI tools that could strengthen transparency?
- Who should define the ethical principles governing AI in deliberative processes: the institutions that deploy it, the communities that participate in it, or another entity entirely?
- As the panelists emphasized, AI governance must place human agency at its center. What examples can we draw on, from arms control, internet governance, or climate diplomacy, to ensure that this remains the priority?
- How can we ensure that people using AI agents in international deliberations have the digital literacy needed to understand the benefits and drawbacks of these technologies?
- How should institutions balance the principles of privacy and security, and prioritize individuals' rights, when deciding whether or not to use AI tools?
Carnegie Council for Ethics in International Affairs is an independent, nonpartisan nonprofit. The views expressed in this panel are those of the speakers and do not necessarily reflect the position of Carnegie Council.
Archival footage courtesy of four_track.