AI and War: Are We in a New "Horse & Tank Moment"? with Kenneth Payne

November 17, 2021

Will AI systems transform the future battlefield so dramatically that they render existing paradigms and doctrines obsolete, fueling intense new security dilemmas? In this "Artificial Intelligence & Equality" podcast, Senior Fellow Anja Kaspersen is joined by Kenneth Payne, a reader at King's College London and author of "I, Warbot: The Dawn of Artificially Intelligent Conflict," to discuss the impact of AI systems on military affairs, the nature and character of war, strategic culture, and geopolitical affairs.

ANJA KASPERSEN: Today I am joined by Dr. Kenneth Payne, a King's College London reader, prolific writer, and political psychologist. Kenneth's most recent book is called I, Warbot: The Dawn of Artificially Intelligent Conflict. The book provides a timely overview of the development of artificial intelligence (AI) systems as applied to war, warfighting, and geopolitical defense posturing, and considers the multiple ways in which AI challenges our notions of military strategy.

Kenneth, thank you for being with us today. I am curious. What has been driving your interest in strategic studies and military doctrines and more recently the role and impact of technology and scientific breakthroughs?

KENNETH PAYNE: Thanks for having me. I am from a service family background. Both my parents were in the Royal Air Force, and that may have influenced things a little bit, but I never intended to be an academic particularly, and I never intended to be an academic thinking about war. I actually left academia at the tail end of my Ph.D. and went off and became a journalist, but it kept gnawing away at me, so I came back to it about ten years ago, and it was a good choice to come back to it.

The AI part of it is again a sort of serendipitous accident. I was thinking about strategy, which is what I was writing about, and, as you say, I was a psychologist interested in minds and strategy, how warfare and conflict have shaped human intelligence, and in turn how human intelligence shapes warfare.

ANJA KASPERSEN: Thank you for those introductory remarks, Kenneth. As you talk about in your book, AI systems, machine learning, and algorithmic technologies will likely transform the battlefield, including where that is and who is on it, so dramatically that it will render existing paradigms and doctrines obsolete. Yet for anyone who has been working in this field for a while, there is still a great deal of uncertainty about what the actual implications are and how revolutionary it will be, especially where AI systems lack functional maturity and operational competence. Are we really in a new "horse-and-tank moment" with AI, Kenneth?

KENNETH PAYNE: That is a famous phrase as well, isn't it? It points to the invention of the tank and the changes that it wrought on warfare in the first third of the 20th century.

There is always a revolution happening in strategic studies, isn't there? I guess people have always got a book to sell or a career to make, so if it is not one revolution it's another. So I am wary about getting caught up in the hype about AI, and there is a lot of hype about AI and a lot of snake oil as well. It is good to be wary, but by the same token I think there is scope for some fairly dramatic changes in warfare as a result of modern AI, but I also think there are some fairly profound continuities in the character and in the nature of warfare that will continue despite this technological change over the next period.

Is it revolutionary? If there is one sense in which it is revolutionary, it is in the psychological sense that first interested me, which is that decisions about war have always been made by human minds—or if you want to be more broad about that, by biological minds if you think that some animals can wage something like warfare—and now there is the prospect of decisions about killing and tactical activity in warfare being made by non-human minds. I think that is where things get a little bit revolutionary.

There is a famous line from Bernard Brodie, who was a nuclear strategist writing in the 1940s and 1950s. Apparently when he heard about the nuclear weapon after it was tested, he said that everything he had ever written about strategy was now redundant because this was such a revolutionary development.

It wasn't, of course. We still read Brodie profitably today. He has a lot of interesting things to tell us about warfare, and there are great continuities in warfare even with nuclear weapons, and I think that is probably still going to be the case today with the changes brought on by AI. So we should be a little bit wary about buying into the idea of everything changing abruptly. So there is that.

On national security and AI it is really interesting. The histories of AI and national security are deeply entwined from the very origins of computing in the 1940s through to the present day. There is a long and intense relationship between computer science and defense and national security organizations. Partly that is financial, but it is also because there is a mutual interest in similar sorts of challenges.

Right back in the origins of computer science two of the key tasks were decryption—could you break codes?—and fire control—could you direct artillery fire against fast-moving targets? That was a problem that occupied a lot of big names in the field. The first computer that really deserves the name was, I guess, instantly put to work on calculating artillery tables. So that connection goes right back to the start.

As you come forward through the Cold War period and through to the 1980s, it is an interesting story of frustrated ambition. First, the interests of computer science and the Pentagon diverged. The computer scientists are often interested in some fairly abstract things. They are interested in questions of memory, perception, and understanding how humans do it and how machines might do it, whereas the military has got a practical bent here. There is always a purpose for this technology. They need viable weapons systems and viable command-and-control systems, and those two interests don't always align. So the military is pumping in big money, and it is not always happy with the results that come out at the end.

In the 1970s it pulled a lot of that funding, and that's where the term that you mentioned, "AI winter," comes from. Abruptly there wasn't funding to the degree there had been previously for basic research, these sorts of abstract questions, but there was still funding for applied research and in particular applied research into autonomous weaponry, which made its debut in Vietnam.

It is an uneven story of spasmodic progress, I guess, right the way through to the present day. The story changes in a number of respects from the 1990s onwards, and one of the big ways it changes is that the long-established relationship between defense and AI weakens a little bit with the rise of the powerful and incredibly rich Internet-based corporations that we know and love today—Amazon, Facebook, Google, and so on. Suddenly the center of gravity shifts away from the Pentagon as the funder of cutting-edge AI towards another model altogether.

The sudden explosion of abundant data that this unleashes, and also developments in a new AI approach—or a revival of a very old AI approach, we should say more properly, which is connectionism, artificial neural networks, deep learning, and more broadly machine learning from the 1990s onwards—suddenly shift the center of gravity of who is funding this research. There is also a disciplinary shift within artificial intelligence research, away from some established techniques—and some longstanding careers had been built around those techniques—towards a revival of old and marginalized techniques for a new era.

The Pentagon funded some of that. It had what it called a "Strategic Computing Initiative" in the 1980s, and it made some interesting developments in autonomous vehicles and in battlefield systems management software. Another expert system it developed was voice recognition for pilots to use in a cockpit so that the aircraft could respond to their voice input commands.

It was a little bit of a cul-de-sac. There were still constraints imposed by the amount of computing that was available. Computer systems were less powerful than they are now, but more fundamentally, no matter how far you polished the expert systems themselves, there were limits to how flexible the resulting intelligence could be.

ANJA KASPERSEN: Building on that, in your book you speak about the need to push back equally on the narratives posited both by what you refer to as the "hyperbolic revolutionaries" and by the "jaded cynics" currently impacting the AI public discourse—in rather unhelpful ways, if I may add.

KENNETH PAYNE: Yes. Technological determinism is a big no-no in academic circles and, for those who don't know, it refers to getting caught up by the sexy new technology and focusing on that as the main or sole independent agent of change. It is almost as if the technology arrives out of nowhere and exerts great change in society.

Actually, a lot of the literature that I read, whether it is military writing or think tank writing, is guilty to some extent of that deterministic tendency. There is too much focus on the sexy science and not enough focus on the social science, on the cultures from where the technology emerges and the way in which values and norms and local practices shape how we imagine technologies and how we invent, innovate, and instrumentalize them as they come to fruition. So that is what I was referring to then.

On the other hand, the reactionaries are the people who say: "Ah, nothing is going to change. It is all just the same as it has always been. There is nothing new here to see."

I think there is a certain military tendency—maybe it is just the people I follow on Twitter—to say: "Well, nothing is changing. War is just the same as it has always been."

If you look at the most fundamental level—war is political violence between groups of humans—that is true. But I don't think that gets you very far. You have to look at the way in which technology changes and shapes society.

There is a phrase I like from Charles Tilly, much used, perhaps the only thing poor Charles Tilly is remembered for. Everybody knows it: "War made the state, and the state made war." I like that because it captures the idea that states are, originally and at a fundamental level, all about providing security in the anarchical international arena. Think about the Tudor navy and the state bureaucracy that it called for; the creation of modern England dates to that period.

If you jam technology into Tilly: war made the state make technology, thus conjuring up Bletchley Park, and then the technologically empowered state made war. That I think is a neat summation of where we are: culture, war, and technology are kind of linked in a three-way dynamic relationship, all acting on each other. The threat to your security inspires you to invest in technologies. The way in which you do that depends on your prevailing culture, but the very act of creating those technologies changes the culture. That's what I'm trying to say.

ANJA KASPERSEN: Indeed, and we will revisit some of the points made just now, but first, AI and deep learning problem-solving capabilities are being embedded into sensitive systems as we speak, carrying the potential to alter not just the character of war, which we have seen happen already, but also the nature of war as far as human decision-making is involved. What are your views?

KENNETH PAYNE: We can all think about ways in which AI will change the character of war, and naturally again we are drawn to the platform end, the new uninhabited drone or the uninhabited submersible or ship. That sort of sexy technology grabs all the attention.

There is more to the physical character of war than that. Of course, AI's effect is going to be on a lot of background processes, on less tangible things like data integration and processing or even on stuff that does not actually have that much to do with conflict. AI is a dual-use technology, and some of the civilian applications will be useful for the military as well—AI to manage your human resources, your recruitment, or to make sure that you are booking people in for their annual medical checkup, for example, these sorts of fairly boring and even banal processes. AI can have an impact on the physical character of warfare there.

Also on the character of warfare, AI will have an impact on other aspects too, for example: What does it take to be a tactical military leader when more fighting is done by uninhabited autonomous systems? There are fewer people on the battlefield to lead, and rather than the premium on physical courage there probably will be a premium on technological competence, so there are changing requirements for what sort of people you recruit and what sort of attributes you expect them to display in conflict, and perhaps also, more broadly than the military, a changing balance between armed forces and wider society.

Clausewitz talks about the connection between armed forces, government, and wider societies, one of his trinities, and that is certainly a trinity that AI has the potential to unbalance. With fewer people from the wider pool of civilians serving in the military, what does that tell us about wider society's attitude to the military, its attitude to risk, and so on?

Tactically I think there is a whole ton of changes that AI has the potential to bring about, but it is the change in the nature of conflict that I think is more controversial and is more hotly contested. Some people say it won't change the nature of warfare at all, but I think it will, and I think that is the really profound change.

For me, you can think about nature in two ways. You can think about it as thinking about warfare as a human phenomenon. It is violence between groups of humans. It involves emotion. It involves chance. Those are core Clausewitzian tenets. I don't think any of that is going to change. That aspect of the nature of warfare will remain the same even with AI for a long time to come. War is still going to be something that is done by humans and to humans, and that ain't gonna change.

But the aspect of the nature of warfare that I think AI will impact is this decision-making. At the moment, it is only going to change the decision making at a very tactical level—is this fighter aircraft piloted by a human or is it piloted by a machine? What are the implications for how it decides?—but gradually those sorts of really tactical activities pile up into something more significant.

The military talks about the "operational" level of war, so what happens at the operational level of war where these individual tactical contacts are knitted together? Is the operational commander going to continue to be a human, or is an autonomous system going to knit together these discrete tactical actions? Is the autonomous system going to pull forward resources from the reserve in order to overmatch its enemy? That is certainly one idea, and I think that is entirely plausible.

At the strategic level of war above that, it is different. I think there is still going to be scope for human involvement there for the duration of my lifetime because the sorts of questions that the strategic level of war raises are questions that AI systems find difficult to unpack. They are very human questions: "What are you thinking? What do you intend? How hard will you fight to hold what you already have? What do I want, even?" These are the sorts of questions that machines struggle with, and they are questions about creativity, imagination, and empathy, and there is still a role for humans to play there.

Somewhere between that strategic level and the operational level, where you have algorithms deciding how to deploy force in a particular theater, there has to be a firebreak, there has to be a point at which the humans come in and play a part in shaping warfare. It is not really clear where that is.

Overall, my answer to your question is tons of changes in the tactical aspects of warfare, in the character of warfare, how it looks in any given moment, and plenty of scope for local cultures to shape that. Every state, every group will use AI and develop AI in its own particular way. Then, on the nature of warfare those enduring tenets—it is a human activity involving violence and passion—will continue, but the decision-making side of warfare is where I think things start to get interesting.

ANJA KASPERSEN: If you are looking a few years ahead, especially given the limitations of current systems and the immaturity of these systems—and in some instances the incompetence as well—what do you see to be the real tangible impacts that may change the decision-making side of warfare?

KENNETH PAYNE: Here is where we run up against people who think that artificial general intelligence is coming sometime soon and that we are on the cusp of a "singularity" or whatever it is. I am a little bit more skeptical about that, and maybe that betrays my own inclination. I have spent much more time thinking about human intelligence, human cognition, neuroscience, and human evolution than I have about AI.

Who knows, but my own take is that we are a very long way from artificial general intelligence of the sort that could have motivations of its own or that could on our behalf wield superhuman cognitive powers that might help address some of these strategic challenges.

I think it is much more likely that AI at the strategic level in the next period, the next few years or a decade or so, will play a part in a human-machine team, and that is still really interesting. Human-machine teams are much discussed by the military, but for the most part they are thinking about tactical human-machine teams, the loyal wingman that flies alongside the human pilot.

My interest in human-machine teams is a little bit higher-level. It is about the strategic level of war. I think you can see some parallels there with the way humans and algorithms have interacted in creative activities. Centaur chess, where human chess players are allowed to bounce their thoughts about possible moves through a supercomputer before actually making the move, would be a good example of that.

There are a couple of examples that I mention in the book. One is a ballet choreographer who designs his ballets with input from an algorithm that studied hours of footage of human dancers and suggests moves of its own that he might not think of. Another example is a composer, a favorite musician of mine, who uses AI to improvise with. When he is playing along, the machine is improvising with him.

This sort of human-machine interaction I think is more the sort of thing that we should be thinking about at the strategic level, and it is an interesting and understudied area of strategic thinking. I think there is much more to be said on that in the future.

There are some early efforts. I included in the book a fascinating RAND war game from last year. It was a tabletop exercise, and it posited a fictitious escalation scenario in East Asia between the United States and its allies on one hand and China on the other. What was interesting about the war game was that—there were no computers involved so far as I know; it was humans playing the war game—the players were instructed to think of themselves and the enemy as possessing strategic-level autonomous systems. The blend of human and machine that each side in this dispute had was different, and there was some uncertainty over the capabilities of the enemy's human-machine team.

The result of that, if I remember rightly from the war game, was that there was quite an escalation spiral. There was pressure to move first up the ladder of escalation because you couldn't help but assume that the enemy's system was more autonomous, that they were more willing to leave things to the machine to escalate. So you had to get your retaliation in first, and you jumped rapidly up this cycle of escalation.

Whether that is the way things would actually play out or not, I don't know. There is clearly much more experimental research that needs to be done. But I do think that thinking about how humans and machines interact, both in filtering intelligence information and in judging what the best possible move is, is really exciting territory for strategic studies.

ANJA KASPERSEN: You have mentioned Clausewitz a few times. Would you mind for the benefit of our listeners saying a few words about who Clausewitz was and his relevance for modern-day strategic studies?

KENNETH PAYNE: Clausewitz is the granddaddy of strategic studies thinking for a lot of us. My particular attraction to Clausewitz is that he is kind of a proto-psychologist. Psychology didn't exist then as a discrete academic subject—that came nearly a century later—but he himself was really interested in the workings of the mind. In all his work he is thinking about things that fascinate psychologists today about how we interact in groups and about how we process information individually.

He has this interesting concept of a "genius commander," and the distinctive feature of the genius commander is their ability to make decisions on the basis of imperfect information, even in dangerous times, chaotic times, the "fog of war," as it is sometimes known, a phrase inspired by one of Clausewitz's own phrases. The genius commander could act decisively. He wasn't paralyzed—and it was a he in those days—by the lack of information and the chaos all around him. Somehow instinctively, almost unconsciously, he knew how to make the right decision.

Clausewitz did not get into the nuts and bolts of how and why we do that because he didn't have the science that we have at our disposal now, but a lot of the ideas that he came up with are still ideas that resonate today—the role of the unconscious and the role of emotions in making decisions—and I find all that fascinating, so that's why I read him and continue to draw inspiration from him. I am not, I confess, an expert on or particularly interested in the arcane details of Napoleonic era battles that absorbed quite a lot of his time, but I am interested in what one of his biographers termed the "cognitive challenge of war," and that was certainly what he was all about.

ANJA KASPERSEN: Thank you, Kenneth. We sure need those genius commanders to face up to some of the challenges as well as the opportunities presented by the biodigital revolution and the impact on strategic stability.

In your book you state that the great new uncertainty with AI is essentially a wager, not just for defense, innovation, fighting power, and modernization but also for broader societal, industrial, and public-service challenges. This uncertainty, you observe, is feeding an intense security dilemma and driving a proliferation of militarized AI.

KENNETH PAYNE: There is a lot of discussion today about whether there is an AI "arms race" or not. People often draw parallels to the 1950s and 1960s and the situation between the United States and the Soviet Union, where there were concerns in America about a "missile gap" and being "left behind" by the Soviet Union. This was the time of the Sputnik satellite, of course. A bunch of people in America look at the strategic competition with China today and say: "Well, this is a little bit similar to that. There is an arms race on, but this time it is AI and not some missile technology and nuclear warhead technology."

By the same token, there are some influential names in our field who don't think that "arms race" is the right way to describe it. One is Paul Scharre, the author of Army of None: Autonomous Weapons and the Future of War. [Editor's note: for more on Scharre and Army of None, check out his 2018 Carnegie Council talk.] For him it's not an arms race as such because AI is a dual-use technology. It is not a weapon; it is not a submarine or a cruise missile. It is something more akin, I guess, to electricity. It is a general-purpose technology. Sure, it has military applications, but it has the potential to do a lot of good in other fields as well. I guess you could say that you can take the very same algorithm that spots the tank in the forest, retrain it, and it can spot a tumor on your mammogram. It is a dual-use technology.

Fine, I say. I hear you. But it is also a technology with military applications, one that has the potential to have a really significant impact on fighting power, on the military effectiveness of forces that can harness it effectively. Indeed, that is a point that Paul himself has made. Given that it can have such a profound effect on fighting power and given that there is so much uncertainty about what that effect is and who is going to be better at getting it, you have a situation that I think is ripe for security dilemmas.

Let me rephrase that a little bit. Given that, I think you have got a situation that is ripe for arms racing. There is a powerful security dilemma here. The security dilemma is that you fear what your enemy, your potential adversary, has or is acquiring, and as a result you invest in that technology. They, looking on, see that you are investing in that technology, and so the spiral goes back and forth, and before you know it you are in an arms race.

Thucydides—very much in vogue in Washington these days—at the start of his History of the Peloponnesian War says it was fear of Athens' rising power that caused Sparta to go to war, and that is where some people would say we are with China today. It is fear of China's rising power, and fear of the potential for China to harness AI technologies for national security is one aspect of that. That means America has to double down lest it gets left behind.

I think that does capture the reality of debates in America and also in China actually. Whether it should capture that reality or not I don't know. Whether there should be an arms race is a separate issue, but I think there is one.

ANJA KASPERSEN: You provide an overview of some of the key dilemmas and challenges in this new and novel strategic landscape in your work. Can you take us through some of them that you deem particularly relevant?

KENNETH PAYNE: I guess we are talking here about the challenges of making sure your military AI does what you want it to do and what you expect it to do. I think that is the main concern of people whose focus is on AI ethics in this space, and it is a really difficult challenge because it is a different form of cognition, and you are outsourcing decision-making to it. It is kind of an electronic form of what the army calls "mission command," where you delegate decision-making autonomy to the lowest level, the level closest to the action.

At least if you are delegating authority to a local commander and a human, you can have some idea of how they are going to respond to the situations they find themselves in. We can use our powers of empathy and imagination to think, Well, how is this subordinate going to behave in a particular situation? We can probably learn how to trust people to follow through on the things that we want them to do.

Of course, they will still do things differently from how we would like them to from time to time. That is a common problem, and social scientists talk about the "principal-agent" problem. It is inevitable. If you delegate decision-making authority to somebody else, there is a chance that they won't always faithfully follow to the letter what you want them to do. But I think that principal-agent dilemma is on steroids with AI. The uncertainties mount up.

First, there is the problem of what are called "normal accidents." If you think about Chernobyl or the Challenger space shuttle catastrophe, this is what can happen when you have complex, tightly integrated systems. They can fail, and there can be a cascade of failures that gets out of hand in ways that you didn't anticipate before it happened. No matter how many safety features you put in, there is always the prospect of it going wrong. So that is problem number one. That is a problem with any form of machine activity, though, and it is certainly a problem with AI.

Problem two is that war is competitive, and your enemy is going to be doing their darnedest to thwart your plans, and your autonomous system is vulnerable to all sorts of countermeasures. One that I write about, and you mentioned, is this idea of "spoofing." AI systems that are trained to act on visual images can be fooled in unpredictable ways by electronic countermeasures. In spoofing you put a screen over the image that is invisible to human eyes, and the computer misidentifies the image with a high degree of certainty. In a famous example from one group of researchers, a picture of a turtle was confidently identified by the computer as a rifle. Well, that is no good if your system is out there looking for people with rifles and somebody spoofs it into thinking something else is there.
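To make the spoofing idea concrete, here is a minimal, hypothetical sketch of the fast gradient sign method (FGSM), one well-documented way of generating adversarial images of the kind Payne describes. The classifier, image, and label below are illustrative placeholders, not anything drawn from an actual military system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged so that `model` tends to misclassify it.

    `image` is assumed to be a batched tensor (e.g., shape [1, 3, H, W]) and
    `true_label` a tensor of class indices (e.g., shape [1]); both are placeholders.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss; with a
    # small epsilon the change is invisible to a human but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```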

Your system can also be vulnerable to other forms of countermeasures, cyberwarfare, for example, especially autonomous cyberwarfare against your autonomous system. So there is a risk that the enemy gets inside your system and messes it up, and that is a risk with non-AI systems as well.

I guess one of the biggest challenges that I see for AI systems in this space is that we don't always know precisely what we want them to do. War is dynamic, so you upload your rules of engagement and your instructions for your machine, and off it goes into battle to do it for you. Meanwhile, the situation is unfolding in real time, and you are changing your mind about exactly what it is you want the machine to do. How do you get in there and update your preferences for the autonomous system? That is one problem of giving it your intentions.

Another problem is that in the act of passing on your intentions to the machine you pass on a whole set of assumptions and biases that you have that you may not be aware of. A really good example here is from outside the national security domain, where the data sets that you train an algorithm on can have an unexpected and unfortunate effect on what it does. The example I'm thinking of is of data sets used to recognize faces, and Black faces were not abundantly present in the data set that this particular algorithm was trained on, and it misidentified Black people as gorillas I think. While that is appalling, abhorrent, and no good at all, I am willing to bet it is not what the people who designed the algorithm intended to happen.

Translate that into the national security realm: you are uploading your wishes to the machine, but in doing so you are including a whole lot of unexamined biases and errors that you might not be aware of. So that's a risk as well.

ANJA KASPERSEN: Then there is the issue of new actors and suppliers of defense technologies to keep abreast of technological innovation.

KENNETH PAYNE: There are new non-traditional suppliers in this national security space. It is not just tried-and-tested companies like Boeing or Lockheed that are involved. There are other companies that you have never heard of before.

There are new suppliers in the mix as well, and there is also a requirement to stay at the cutting edge, to keep cycling, to keep adopting and implementing new technologies faster than you have before. If the code is what gives you the edge, you can't necessarily spend several decades refining it. You have to roll it out, and that does potentially open you up to some vulnerabilities that may not have been there before. So that is interesting.

I think one of the ways in which AI is going to change the national security space is for those countries that aren't originators of this technology. Already, if you are buying your defense equipment from the United States or one of the other leading providers, say, the United Kingdom or China, it is more of a service provision than it was before. You are buying a black box.

It is a little bit like the automobile market. When I was growing up, my dad used to love tinkering under the hood. Now I have my car on lease, I never open the bonnet, and it goes back to the supplier. I think the defense export market is going to be like that, and that creates some interesting vulnerabilities for the people who buy the kit. They don't necessarily own it and control it or potentially even understand it in the same way that they might have done previously. What's more, they are vulnerable in the sense that if it is going to need to be continually upgraded, it is only as useful or as competitive as the last upgrade, so they are really exposing themselves to the market and to the supplier in a way that they haven't done previously. So I think there is risk there for them certainly.

Just this week in the United Kingdom the intelligence agency Government Communications Headquarters partnered with Amazon to provide its cloud data storage. I think most people who aren't heavily vested in security studies or in national security would have gone: "Whoa! Amazon? Really? How secure is that? You are using my bookseller to store your hush-hush information?" That is a nontraditional supplier there.

The question of accountability comes up in all sorts of areas where AI is going to be employed. It comes up in medical negligence, for example, or in autonomous transport systems, your driverless car—who is responsible when it crashes? Who is responsible when it kills somebody in an accident? Is it you because you hired it? Is it the government who provided the regulatory oversight? Is it the company that manufactured it and didn't do enough of a good job? How does the car make the ethical tradeoffs that it makes, and do you as the passenger understand these ethical tradeoffs that it has made before you set foot in the car? Because the car will have to make decisions about whether it saves you or avoids a pile-up or hits the pedestrian or drives you off the cliff to avoid the pedestrian. There are all sorts of questions of who is responsible for these decisions and how much the end user knows or thinks about it.

That applies just as much in national security, questions of lethal autonomous weapons systems included, as it does elsewhere, but here there is a unique difference. Unlike medicine or transportation, the goal in war is to take life, not to preserve it. You want the machine to take lives under the appropriate circumstances, of course, but it is a "lethal" weapons system. That is the point of the weapons system. So it is qualitatively different.

Who is responsible? At the moment there is command responsibility in the military where the principal is responsible for issuing lawful orders to their subordinate, and the subordinate is responsible for carrying them out, and the subordinate has a responsibility not to carry out unlawful orders, so there is a responsibility on both sides of that command equation.

How does that work with autonomous systems? The tactical commander is not an expert in deep learning systems. They haven't opened the manual and studied the algorithm being used in any detail. They are following the instructions on the box, getting it out of the box, and sending it off into battle. So are they responsible if it does something unexpected? I suspect not. I suspect the answer here lies more in corporate responsibility and government liability for these sorts of systems.

ANJA KASPERSEN: The decision about embedding such systems to begin with, you mean.

KENNETH PAYNE: Right. Ultimately it is a decision for society. Questions of national security are ones that society debates and discusses in large part through its elected representatives. The fact that society is uninterested or uninformed about the debate is unfortunate, but that's what the elected representatives are for, to reach a mature decision about how much they are prepared to entrust to autonomous systems.

ANJA KASPERSEN: Very interesting insights indeed.

I would like us to talk a bit about another important issue when discussing the future of war and warfare: culture, more specifically strategic culture. This is an issue near and dear to my heart, and as a student of anthropology and strategic studies myself, I find that too often the importance of culture is being glossed over when discussing the impact of AI on security, society, and people.

In your book you observe the importance of grounding any analysis of AI and AI systems in an understanding of the history and culture from which they emerge, and you note that different cultures will not only develop technologies in ways rooted in distinct cultural patterns but also instrumentalize those same technologies in distinct ways shaped by those cultures.

KENNETH PAYNE: I find it fascinating. I just wrote a piece with Theo Farrell, who you may know as one of the big thinkers of what we call "strategic culture." I guess the exciting question is to think about the way in which societies even imagine the sorts of technologies that they go on to innovate and then employ.

The challenge for people who write about strategic culture is that it is really slippery. It is really hard to get a grasp on what culture is without being reductive and engaging in cultural stereotypes. It is also difficult to get a sense of the nuance of a culture as an outsider. It is hard enough to grasp what your own culture is doing, still less to make judgments about the effect of Russian or Chinese culture on the way in which it will develop AI.

It is a tricky task, and it is complicated by secrecy—this is national security we are talking about here—and by the fact that it is AI, so new and fast-moving. AI has a longish history in the United States, but a lot of other countries are new on the scene, arriving only in the last decade or so. So it is a new area, rapidly growing and rapidly changing; there is secrecy, and there is a lot of overselling, snake oil, and hype. It is hard to reach solid judgments about the effect of culture on AI just yet. Anyway, we didn't let that stop us, and we had a few tentative thoughts about the way that might happen.

ANJA KASPERSEN: Can you share some of these tentative thoughts?

KENNETH PAYNE: I think in the context of the United States and China, that is the one everybody is interested in, and there is a big contrast there. Well, there are many big contrasts there. One is the amount of time they have been involved in this sphere. The Chinese came late to AI. The Americans started thinking about it in the 1950s.

AI has grown up largely within that unique American ecosystem. It is an incredibly fertile blend of government money, venture capital money, long-established university research institutions, and start-ups that spring up in somebody's San Francisco coffee shop. It is a real fusion of ideas, and it is rooted in longstanding cultural tendencies about individualism, about nonconformism, and ideas about the impartial rule of law. These are all part of the fabric of American society one way or another, and they have all worked to shape the way in which AI has emerged.

It is a very different picture in China. China is spending a lot on AI. It has a lot of people working on AI. It has made some important developments in AI, and it has got a government that is willing to use AI in national security, including in internal security. Because it is a technologically advanced society, lots of people have computers and smartphones and are online, and because it has an authoritarian government, the state can use a lot of that data as part of an apparatus of social control. So it has plenty of experience of using AI.

All those things are a tremendous strength when it comes to thinking about AI as part of its armed forces, but it has got some big problems as well. They are almost a counterpoint to the intangible strengths of the American system, and they lie in that creativity, that nonconformism, and that fertile matchup of researchers and federal funding.

It is not clear to me yet that China is going to be able to compete over the longer term with the American AI institution. You can get a long way by emulating, and I think China has had some tremendous success there, but innovation is a real challenge, and the Americans have proved to be the real experts there, and that is what everybody else wants to follow, but it is not easy.

It's a mixed picture in Europe. It's on a smaller scale than in America and with less money and, as I say, probably larger, more intense societal concerns about what AI is going to be used for, including in national security.

ANJA KASPERSEN: Let's stick to the culture dimension for a bit, Kenneth. I have increasingly witnessed firsthand that the culture of the AI ecosystem is no longer as conducive an environment for healthy scientific discourse around the ethical considerations and limitations presented by these AI systems and their related impacts. This is obviously very detrimental to any responsible deployment of such systems. Do you find that there is a similar pushback in the defense and military ecosystem against having an open and constructive discussion about these limitations, as well as about the unpreparedness of the human systems supporting their use?

KENNETH PAYNE: That is a really interesting question. In the wider AI community I see that phenomenon that you are talking about there. I just wonder if that is connected to the deep learning, even machine learning, community. They were the outsiders for a long time in computer science, and they caught a lot of flak from their critics, from the established dominant paradigm of logical processing.

Now the shoe is on the other foot, it is interesting to see the criticisms coming from them. They were the marginalized outsiders. Now they have their place in the sun, and they are not shy about telling their critics what they think. I guess that's human nature as much as anything else.

Do I see a similar dynamic playing out in national security or in the military? Do you know what? In the military I find—we spoke earlier in this conversation about the reactionaries, those people who say, "There's nothing new, there's nothing new under the sun"—there is quite a healthy debate about this.

In the United Kingdom at the moment, the place I know best, you have a lot of top-level enthusiasm for AI and the radical change it will bring. We have just had a turnover of the very top jobs, and all the candidates for that top job in the military made AI an important part of their pitch, so they are AI enthusiasts.

The midlevel is a mix, and I guess it includes some people who have seen waves of techno-enthusiasm before for other initiatives and other concepts—an effects-based approach to operations and the information revolution in military affairs—and they have seen those buzzwords encounter reality. For the United Kingdom, though it is fairly well resourced, money is never plentiful, and things are done on a much smaller scale than they are in the United States, so big ideas meet reality in a tough way. I guess some of the skeptics in the United Kingdom would say, "I see your vision of a 30,000-strong robotic army, and yet we can't even develop a functioning infantry fighting vehicle at the moment, so there is a big gap between what you say you're going to do and what we can actually deliver."

That's where the debate is I think at the moment, and I think it's a healthy good debate to have. It keeps the enthusiasts, the revolutionaries like me, in check.

ANJA KASPERSEN: And there is an openness around discussing the limitations of embedding such systems as well.

KENNETH PAYNE: I think most discussion of AI among the military and in wider society is about the ethics. There is no lack of attention or lack of thought given to the ethics of AI, no matter which side of the debate you are on, and I am on the side of the debate that says AI in combat is inevitable, so we are going to have to think about making it as good as we can. That is where 90 percent of the literature and the discussion focuses, endless panels discussing the ethics of autonomous weapons systems. I have kind of stayed away from that because I think it is well-covered and the bigger debate is about AI in the round. The trigger puller gets a lot of attention, but the effects of AI on warfare are broader than that single, albeit important, issue.

ANJA KASPERSEN: Kenneth, I want to move us over to geopolitics and geostrategic affairs. What paradigm shifts will we see, in your view, as a result of the accelerating militarization of AI, and how will this impact our ability to govern these problems and future technologies through current normative frameworks and also classical arms control instruments, if at all?

KENNETH PAYNE: "If at all" is interesting. I think it's going to be really difficult to have an arms control agreement on AI in warfare. To start with, it's really difficult to define what you're dealing with, what's the system? "AI" is a great marketing term, but it's not a technology. It's a range of technologies and approaches that are widely varied. So what is it that we are talking about, and which part of warfighting are we talking about it being involved with?

You might say, "Well, I want to stop algorithms from making decisions about life and death, and that's what we're going to regulate via our arms control agreement." You can't really do that because you can't define which part of a system you're on about. An F-35 pilot is getting their information on what target to prosecute. Sure, they are finally pulling the trigger on the missile, but their rationality is bounded by the information that is coming to them via the intelligence fusion system aboard that aircraft, and that involves machine learning and AI, so AI is already in the decision-making loop along with humans.

I think arms control for AI is difficult for definitional reasons. It is also difficult because it is kind of easy to defect from any arms control regime. It is not like building a secret nuclear weapon, where you need a whole ton of infrastructure. It is a dual-use technology, and it is going to be around abundantly, so you are only ever potentially one upload away from turning your inhabited, human-controlled system into an autonomous system, and it is really hard for anybody watching to know that you haven't defected from that arms control regime.

If the effect of AI is as profound as I and other people think it is, there is a powerful incentive to defect, even if you say you are going to limit yourself in some way or another. So trust is in scarce supply, and I think arms control is going to be problematic. That is one part of the answer to your question.

A second part is the effect on balances of power. Some countries are going to be better than other countries at acquiring and instrumentalizing AI. Sometimes that will be the big players that we know and love today—China is involved in this field in a big way, Russia, the United Kingdom, and the United States. Some of the smaller players that are really technologically advanced, that are really good at AI, will experience an uplift in fighting power, places like Singapore, Israel, and Australia. They will do pretty well out of this.

Where countries are going to struggle, my sense is, is where they have large, manpower-intensive militaries, where the military certainly has a military utility but also a social utility, being an important part of the fabric of the state in those countries, and where they use legacy, industrial-era military systems, even advanced, imported industrial-era military systems. Those countries are going to struggle, I think, to maintain their relative position in an era of AI where a lot of kit is disposable and churns over, and the cutting edge is always receding away from you at pace. They are very dissimilar countries. I guess if you could bracket them together you would say places like Nigeria, Egypt, maybe Turkey, Iraq, big industrial-era militaries, and it may prove challenging for them to modernize into an AI era.

I guess a third area where I would say geopolitics is changing is in what Clausewitz called the "grammar" of war, that is, how you understand what it is you're supposed to do. We have a reasonable understanding of conflict built on centuries of history, but now this new technology has arrived, and traditions and norms have the scope to change quite quickly.

Two concrete examples: What happens when somebody fishes your uninhabited submersible out of the sea and carts it off to reverse-engineer it? China did pick up an American uninhabited vehicle off its coast two or three years ago. They gave it back fairly quickly. The question is: What happens if a country doesn't give it back and takes it off to unpick it? Is that an act of war? Nobody knows because it hasn't happened yet.

Or, what about what the military calls "subthreshold" warfare, where you use an AI system as part of cyberwarfare? What is the appropriate response to that? At what point do you escalate, and to what degree do you escalate? Is violent action as provocative if it's a robot attacking and disabling a robot as it would be if it was a robot attacking a crewed system and disabling it, killing the people onboard? Smart money says no, but we have yet to see that play out in action.

Or, referring to the RAND report that I mentioned earlier, what happens if you've got combinations of humans and machines working together on operational, even strategic, decisions? How do they respond, or how do they anticipate the decisions that the other side may make, given that there is a non-human decision-making element involved? We are not quite sure how those dynamics play out, and I think that is a potentially destabilizing effect of AI. New technologies are always potentially destabilizing because they can prompt some fairly radical changes in established norms, and AI is no different in that respect, but I think nonetheless that there will be some changes in what Clausewitz called the grammar of war.

My pessimistic takeaway from thinking about this question for a long time is that arms control by mutual consent is not going to happen, that people are going to employ AI in their own culturally determined ways, and that some societies may be, let's say, more cavalier about how far they are going to entrust warfighting decisions to autonomous systems; you are going to have to compete with them, and that may drag you to be more permissive than you would otherwise like to be. That is a pessimistic bottom line, but the ray of sunshine, I guess, is that the societies to which you and I belong currently have the edge. This technology emerged from those societies thanks to their particular attributes, their particular cultural flavor, if you like, which produced and carries on producing innovative, cutting-edge AI. That's a good position to be in.

What we have to do in those societies is make sure that our own norms are stamped on how we use AI. Even if those norms change as a result of the AI, we have to strive, as far as we're able, to make sure that our AI is verified, can give us a coherent account of why it acted the way it did, and aligns its activities with the objectives that we set it, checking back in with us if necessary. That is the challenge for us. Notwithstanding the security dilemma and the pressure of adversaries being more cavalier about AI, we have got to remain in control of our AI, make sure it does the things that we want it to do, and take whatever comfort we can from the fact that ours are the societies that, so far at least, have innovated this cutting-edge technology.

ANJA KASPERSEN: Building on what you just said, what are your views on the impact of AI and AI systems on nuclear security deterrents and stability?

KENNETH PAYNE: I wish I knew. It is hard to get a granular understanding of where the technology is in relation to this question. For my country above all, the question boils down to: Can you find the boomer? Can you find the SSBN, the submarine with the nuclear missiles on it? At the moment, the safest place to keep your nuclear deterrent has been under the icecap on a nuclear submarine that is very quiet and hard to find, and that is what we do in this country. That is our sole deterrent, this continuous at-sea patrol.

If that is your only deterrent—and it is a pretty minimal deterrent; we don't have very many missiles or warheads relative to some of the other nuclear powers—it becomes quite a pressing question: Can you find the submarine? At the moment, the answer to that question lies on the other side of some pretty severe security clearances, so I can't give you a definitive answer.

All I would say is that AI's unique selling proposition is as a pattern recognizer that finds meaningful patterns in mountains of obscure, noisy data. Knowing that and knowing about the increasing sophistication of sensors, I would be concerned. If I was planning Britain's long-term deterrent, I would be thinking about establishing another leg of that deterrent—air-launched missiles, ground-launched missiles, something else—to complement the at-sea patrol.

For America, not a problem. It is not conceivable to me—China recently tested a hypersonic glide vehicle that got everybody excited in Washington, but America has a nuclear triad in great abundance—that anybody could mount a credible first strike against American nuclear missiles and hope to get away with it without retaliation. I don't see that situation changing anytime soon.

But for a smaller nuclear power like the United Kingdom there is cause to be concerned, and for an even smaller nuclear power like North Korea, yes, it is a concern. You have a very small deterrent resting on a few missiles and a few warheads, and potentially AI can find, fix, and kill them.

ANJA KASPERSEN: You think that the concept of deterrence applies with new and novel technologies?

KENNETH PAYNE: I think the concept of nuclear deterrence is robustly established and will continue. I think the problem of deterrence with new technologies is that it is so in flux and unknowable—and that was what I was saying about Clausewitz and the grammar of war. Deterrence rests on an understanding by your adversary of what you mean and your understanding of what they'll do. I think both those things are complicated when you've got rapid change, and AI is an example of rapid change in strategic affairs. So, if it is unclear about what your adversary intends and how they will respond to your action, then I think deterrence is problematic.

ANJA KASPERSEN: Moving to the important issue of leadership, faced with these new and potentially destabilizing uncertainties and security dilemmas that you write about, it is urgent that we invest in talent and leadership. What does leadership in the biodigital age and more specifically military leadership look like in your view, and how do we curate good leaders, "genius commanders," to borrow Clausewitz's term, to operate in these oftentimes novel contexts and environments?

KENNETH PAYNE: I think there are many potential changes to established ideas about military leadership, one of which is that militaries grow their leaders, so you join the military at a young age, you are steeped in its culture, and you rise through the ranks—there are like 19 ranks in the British military—and eventually you reach the strategic-level command, but the vast majority of your career will have been spent in thinking about tactical and operational questions, not strategy, or you can think about it, but you won't be responsible for it until those last few years.

So if you are imagining a military that is smaller and more autonomous, the question is: Where are your leaders coming from? Do you need to grow them from the age of 18 or 21 right the way through until they become senior officers, and how?

It's not that the picture is perfect at the moment. For example, militaries, because they venerate tradition, tend to favor the combat arms. If you take air forces, you are much more likely to be selected for the top job if you have flown fast fighter jets. In earlier generations in America it was if you had flown big strategic bombers. But why is superior hand-eye coordination a criterion for strategic leadership ability and responsibility for a large, complex bureaucracy? So this issue is not perfect at the moment, and maybe technology and autonomy will change that by putting an emphasis on managerial competence in things like procuring complex weapons systems.

But, regardless of how you get to the top, the people at the top will still have to demonstrate, I think, some timeless virtues of leadership. They will have to inspire, and they will have to be that genius commander Clausewitz wrote about: capable of dealing with complexity amid ambiguity, creative and innovative.

In particular, one trait that I would single out would be empathy. Strategy is a game about empathy. Actually, the subject of my next book is empathy. It is all about, as another strategic thinker, Sun Tzu, wrote, "Know yourself and know your enemy." That is all about empathy: Can I get inside your head and understand what you want to do, whether it is to inspire you to follow me, or, if you are my adversary, to anticipate your moves and defeat you?

ANJA KASPERSEN: For our listeners, Sun Tzu was a Chinese military general and philosopher who wrote an ancient treatise about military strategy, which has inspired both Western and Eastern militaries since.

KENNETH PAYNE: Yes. Those are the two big thinkers, aren't they, Clausewitz and Sun Tzu? What I like about Sun Tzu is that, in contrast to Clausewitz, he is very short. I also like that he mixes some fairly prosaic advice with whimsical, allusive statements and good philosophy. It gets the brain juices flowing.

ANJA KASPERSEN: Indeed it does, and we need the brain juices to flow when dealing with these issues.

Do you find, Kenneth, that military leaders are good at blending skills?

KENNETH PAYNE: What military leaders have in common is that they have all been in the military game for a long time, 20 years, by the time they reach the top, and that must have an effect on them. But the military is a band of tribes; there are different cultures within it. It definitely does have an effect on them, and I saw it with my own father and I see it day to day with the students and colleagues I work with, but that's about as far as it goes. I think there is so much diversity in the armed forces, particularly today in the United Kingdom, that it is hard to make a reductive statement that the military thinks this or the military thinks that. There are good and bad leaders at all levels of all organizations, and the military is no exception.

ANJA KASPERSEN: Riffing off that, you say in your book that an AI system can be trained to be tactically brilliant in ways that human commanders would find it hard to compete with. However—and I believe this is the main thrust of your book—an AI system will never become nor can it be a true strategist. What do you mean by that?

KENNETH PAYNE: If you haven't seen it, I really recommend watching AlphaGo, the documentary about DeepMind's algorithm that beat the world champion Go player Lee Sedol. There is a moment where the algorithm makes a move that Sedol hadn't expected and indeed all the expert watchers looking on hadn't expected. There were gasps, and after the computer made this move people attributed it to the machine's uncanny intuition, its genius intuition.

You know and I know that that is not how algorithms decide. They tree-search ahead, they heuristically narrow the domain that they are going to search, and then it is down to the brute-force computing ability of the algorithm to work out which move is the most valuable, and may the computer that can look furthest into the game win.
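To make that mechanism concrete, here is a minimal, illustrative sketch in Python of a depth-limited look-ahead search in which a heuristic evaluation both narrows which moves get explored and scores the resulting positions. It is emphatically not AlphaGo's actual method (which combines deep neural networks with Monte Carlo tree search); the functions `legal_moves`, `apply_move`, and `evaluate` are hypothetical, game-specific callables a user would have to supply.

```python
# Toy sketch only: depth-limited look-ahead with heuristic narrowing.
# legal_moves(state) -> list of legal moves in that position
# apply_move(state, move) -> new state, with the other player to move
# evaluate(state) -> heuristic score from the perspective of the player to move

def best_move(state, depth, legal_moves, apply_move, evaluate, beam=5):
    """Pick the move with the highest look-ahead value for the player to move."""

    def search(s, d):
        moves = legal_moves(s)
        if d == 0 or not moves:
            return evaluate(s)
        # Heuristic narrowing: keep only the `beam` moves that leave the
        # opponent in the worst-looking positions, and search just those.
        moves = sorted(moves, key=lambda m: evaluate(apply_move(s, m)))[:beam]
        # Negamax step: my best outcome is the worst I can force on the opponent.
        return max(-search(apply_move(s, m), d - 1) for m in moves)

    return max(legal_moves(state),
               key=lambda m: -search(apply_move(state, m), depth - 1))
```

The wider the beam and the deeper the search, the stronger (and slower) the play, which is the "may the computer that can look furthest into the game win" point.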

Anyway, Sedol was stunned by the move. He left the table and went on the roof and had a cigarette break. He came back, and he was thrashed by the machine. It was a sobering moment, and that documentary really captured it.

That is what we are talking about here, the tactical brilliance of the machine. Go is what computer scientists call a "toy universe." It is a big old universe, but it strips away much of the messy complexity of reality. It's about tactics in the same way that flying a fighter aircraft is about tactics: If I move here, you move there; now I'm moving here, now you're moving there. Those are control problems, search problems, that AI can be good at. It is a blisteringly fast and incredibly accurate trigger puller.

But that's not all that war is. War is about thinking about less tangible questions: What do we want to do? What are our goals as a society? What will it take to persuade you to back down in this moment? Those are harder questions and harder judgments to reach. I think that is what I was driving at there with the thought that AI is a brilliant tactician but a bit of a duffer when it comes to strategy, for now, at least.

ANJA KASPERSEN: Well, a good strategist would try to think about how we can avoid war as well, and AI, an algorithm, is basically a way of trying to optimize an outcome.

KENNETH PAYNE: Right. Another way of thinking about that is that AI is perhaps less adept at dealing with ambiguity and shades of gray. As you say, whatever the problem is that you give it, it will try to optimize it, so you better be sure you give it the right problem.
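As a toy illustration of that point about handing the machine the right problem, the sketch below gives an optimizer a proxy objective (clicks) when the real intent is usefulness, and the proxy quietly wins. Every name and number here is invented purely for illustration; it is not drawn from the book or from any real system.

```python
# Invented toy example: an optimizer pursues exactly the objective it is given.

def optimize(candidates, score):
    """Return the candidate that maximizes `score` -- nothing more, nothing less."""
    return max(candidates, key=score)

# Intended goal: usefulness. Proxy objective actually handed to the machine: clicks.
candidates = [
    {"name": "useful_option",    "clicks": 0.30, "usefulness": 0.9},
    {"name": "clickbait_option", "clicks": 0.80, "usefulness": 0.1},
]

chosen = optimize(candidates, score=lambda c: c["clicks"])
print(chosen["name"])  # -> clickbait_option: it solved the problem it was given
```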

ANJA KASPERSEN: I think Christopher Coker said it becomes an extension of our own desires, of human desires, and that's why there is a problem.

KENNETH PAYNE: Right. Especially because some of that human desire is muddled or lost in translation.

ANJA KASPERSEN: Talking about leadership qualities, Kenneth, I was a little amused that in one of your recent articles addressing how deep learning and autonomous systems may feature in warfare in the future, you refer to the blockbuster movie Edge of Tomorrow. For those of our listeners who haven't seen it, it features Tom Cruise and Emily Blunt with Tom Cruise essentially leading a somewhat ragtag platoon to victory over an alien onslaught, as you do, via conventional, very human leadership attributes such as inspiration, courage, physical prowess, a bit of humor in tight spots, and being self-aware and authentic.

KENNETH PAYNE: First, it's not an AI movie, but it has made me think more about AI in warfare than any of the straightforward AI movies that I mention in the book, and I do love a good AI movie. But it captures something, doesn't it, about the dilemmas. The aliens are like my vision for AI warfare in the future: They're a connected, blisteringly fast, highly lethal, autonomous system. It's like the aliens operate as a system, almost like an ant swarm, and they regularly defeat Tom Cruise's human version of warfighting, which is inspirational leadership, courage under fire, high martial skills—Emily Blunt is incredible in that movie.

Those human attributes mean that Tom Cruise and the gang go down to defeat time and time again. They only end up defeating the AI system because it's the sci-fi version of Groundhog Day: when he dies, they go back to day one and go again. What is that? I thought, when I was watching it. That's deep learning. That's what you do: you train the system on the data, and it's rubbish, it's rubbish, it's rubbish, but gradually it is getting better and better, and in the end your supervised learning produces a system that wins and prevails, and that is Tom Cruise in that movie.
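For readers who want to see that "rubbish at first, gradually better" dynamic in code, here is a minimal supervised-learning loop: a tiny linear model fit by gradient descent on synthetic data, with the training loss shrinking over repeated passes. The data, model, and numbers are invented purely for illustration and have nothing to do with the film or with any real military system.

```python
# Minimal supervised-learning sketch: start out useless, improve with each pass.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # 200 examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])                  # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=200)     # noisy labels

w = np.zeros(3)            # the model starts out knowing nothing ("it's rubbish")
lr = 0.1
for epoch in range(100):
    pred = X @ w
    error = pred - y
    loss = float(np.mean(error ** 2))
    grad = 2 * X.T @ error / len(y)                  # gradient of mean squared error
    w -= lr * grad                                   # each pass nudges the weights
    if epoch % 25 == 0:
        print(f"epoch {epoch:3d}  loss {loss:.4f}")  # the loss keeps shrinking
```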

ANJA KASPERSEN: Thank you, Kenneth, for those insights into Hollywood productions and their impact on how we understand AI in warfare.

I understand that your next project, if we can conclude on that point, is around empathy, which you just referred to, and the Kennedy era. Can you say something more about that and how it relates back to the technology discussion we just had, if at all?

KENNETH PAYNE: Thank you. It does relate back to the technology discussion we just had. The big difficulty for AI, as I said, is mindreading, a skill that we humans have. It is pretty patchy, but we are reasonably good at intuiting what other people are thinking and what is going on in their minds, and empathy is a large part of that. AI's strategic weakness is that it is not very good at it.

But somebody who was really good at it is one of the case studies in the book, President Kennedy. He is a favorite case study of mine, and in his standoff with Khrushchev throughout his presidency he was actually quite explicit about the need to do this: to put yourself in Khrushchev's mind, understand his position, and relate to him on a personal and emotional level as well as on a cold, calculating level.

The record is mixed, but he had some successes in doing that, especially during the famous Cuban Missile Crisis, where he instructed all his aides: "We have to do this; we have to not put him in a corner from which he can't get out, and we have to think about what's going on inside his head."

The opposite side of that is Curtis LeMay, who was chief of staff of the U.S. Air Force at the time. LeMay, when he eventually got to see Kennedy, because Kennedy didn't particularly want to see him, said, "We've got to invade Cuba." We know now if they had invaded Cuba it would have been the Third World War. The Soviets had tactical nuclear weapons. They would probably have used them, and away we would have gone.

Kennedy said to LeMay, "What will he do if we do that?" That is a fairly basic step of strategy, and LeMay fell at the first hurdle.

He said, "He won't do anything."

Kennedy was brought up short, and he said, "Sorry, what?"

"He won't do anything. He will just take it on the chin."

That was more or less the end of the discussion with LeMay and his input to strategy. Kennedy later told his friend Ben Bradlee of The Washington Post that his number-one piece of advice to his successors was not to listen to people in uniform just because they happen to be wearing a uniform, which is also entertaining.

So that's the next book.

ANJA KASPERSEN: Thank you for sharing that with us.

KENNETH PAYNE: My pleasure.

ANJA KASPERSEN: Thank you so much, Kenneth, for taking the time to be with us to share your expertise and insights. Thank you to all of our listeners and a special thanks to the team at the Carnegie Council for Ethics in International Affairs for hosting and producing this podcast. My name is Anja Kaspersen, and I hope we earned the privilege of your time. Thank you.
