AI in the Arctic: Future Opportunities and Ethical Concerns, with Fritz Allhoff

November 13, 2019

How can artificial intelligence improve food security, medicine, and infrastructure in Arctic communities? What are the logistical, ethical, and governance challenges? Fritz Allhoff, a professor at Western Michigan University, discusses the future of technology in this extreme environment, which climate change is making more accessible. He also shares his thoughts on some open philosophical questions surrounding AI.

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

This week I'm speaking with Fritz Allhoff, professor in the Department of Philosophy at Western Michigan University.

Along with Patrick Lin, a professor in the philosophy department at Cal Poly, San Luis Obispo, Fritz is the author of "Arctic 2.0: How Artificial Intelligence Can Develop a Frontier." This appeared as part of a roundtable in the summer 2019 issue of Carnegie Council's quarterly journal Ethics & International Affairs.

As its title indicates, the article takes a look at how AI could be used in extreme far-Northern environments, in places like Alaska, Iceland, and Northern Canada. We look at several specific examples and also some concerns about ethics and governance.

As you'll hear, along with being a philosophy professor, Fritz has a law degree and was a clerk on the Alaska Supreme Court. And he has also been a fellow at Stanford Law School's Center for Law and the Biosciences. So he is extremely well-qualified to speak about all of these issues.

I also spoke with Harvard's Mathias Risse and Heather Roff, from Johns Hopkins Applied Physics Lab, about their AI roundtable articles. Both focus a bit more on the philosophical side of the discussion. Towards the end of my talk with Fritz, we get into some of these issues and reference those two podcasts. So I strongly urge you to listen to those podcasts or read the transcripts before or after this podcast.

You can find those at CarnegieCouncil.org. And for more from the journal, you can go to EthicsAndInternationalAffairs.org. Special thanks to journal editor Adam Read-Brown for helping to set up all three of those podcasts.

For now, calling in from Kalamazoo, Michigan, here's my talk with Fritz Allhoff.

Fritz, thank you very much for talking with me today. I'm looking forward to this.

FRITZ ALLHOFF: Thanks for the invitation. I appreciate it.

ALEX WOODSON: Of course.

Getting to your article, "Arctic 2.0: How Artificial Intelligence Can Develop a Frontier," what got you interested in looking at the Arctic specifically when it comes to AI?

FRITZ ALLHOFF: The longer story is that after law school I was a law clerk for the Alaska Supreme Court, so I got to spend a year in Alaska looking at a bunch of legal stuff and also spending time in the region and having a bunch of really cool experiences up there. It started more as a personal thing than as an intellectual one, but there were a lot of projects that were really fun when I was living up there. My co-author Pat and I both got Fulbrights to go to Iceland as well, so two summers ago we spent the summer in Iceland.

I think part of it has been organic development; these are the opportunities and projects that we had, and we started thinking about these sorts of things. Certainly the travel has been great. We've loved the opportunity to spend time in Alaska and Iceland and Norway for conferences and stuff. One of the big attractions to Arctic studies is there are a lot of really neat places to go visit.

ALEX WOODSON: Definitely.

It's probably a fairly obvious answer, but what makes the Arctic such an interesting place to have all these applications of AI that you talk about in your article? What differentiates it in terms of the technology?

FRITZ ALLHOFF: A couple of things. One is that the Arctic is changing so quickly because of climate change. You've got eight Arctic countries that have some sort of sovereign territory in the Arctic. They've got to work together and figure out what Arctic governance is going to look like.

The big difference between the North Pole and the South Pole is that the North Pole's all water and the South Pole's all land. No one's sitting up there on the top of the world on the metaphorical North Pole. You've got eight Arctic countries; most of them border the Arctic Ocean through water, but some, like Sweden, [don't have Arctic Ocean coastlines] and instead have territory above the Arctic Circle.

It raises all sorts of interesting governance questions, questions in political science, and questions in philosophy. Effectively, how do we exert control over this big area of water that historically has been nothing but ice? Of course, through climate change and as stuff melts and as shipping routes start to expand there's tons of interest in the Arctic, obviously from the Arctic countries but also including places in Asia that want access to the shipping lanes.

It's a new area to think about from some of these technology perspectives. There hasn't been a ton of work on that so far, so I think for us the excitement was that we had a bit of background in the Arctic from some of the biographical things I mentioned, and Pat's and my research program for the past 10-15 years has been to look at technology ethics. We were just sitting around and said, "What are the technology ethics questions in the Arctic? How do those two sorts of things fit together?" So that's where this paper came from.

ALEX WOODSON: You mentioned ethics and governance, two things that we're definitely going to get to in a little bit.

Before we talk about that, what specifically have you found? What are some specific applications of AI in the Arctic? You mention a few in the article that are very interesting.

FRITZ ALLHOFF: In the article we consider two broad kinds of applications. One, we talk about human security, and on the other side we talk about infrastructure, and a lot of infrastructure is going to underwrite human security, but we can talk about them separately.

Within security studies people talk about hard security, which is generally military, as against soft security. Soft security is things like keeping people safe: food—where does the food come from?—and the environment—what are the environmental challenges? If you have volcanoes, those would undermine human security. What do the economics look like? Iceland had a big bank collapse. So on the soft security side we can think about a lot of things that people might not consider from a security standpoint if they're used to thinking more about military applications.

One big challenge in the Arctic is where does the food come from? Iceland, for example, imports half of its food. You can imagine that it would be very easy to disrupt that sort of supply chain when you're so reliant on food being shipped into your country.

For example, they had a big volcanic eruption that closed airports all across Europe. If you're sitting in Iceland and you've got volcanic dust in the air, planes can't land, and most of your food has to be imported in the first place, you can see really quickly that you could have challenges on the human security side: What are people supposed to eat? A lot of that volcanic ash got consumed by the livestock, which ended up dying for all sorts of reasons, so the food supply is fragile in those sorts of environments.

In rural Alaska, a quart of orange juice can run $20 and a tomato $5, and that's if there are any at all, if they're not sold out, if the freight planes can get in that week. So that's a big challenge.

AI is one way to look at this. You can think about the ways in which AI might be able to optimize supply chains and minimize the need to manually reorder food. Maybe there are smart tactics that shops can deploy such that some delivery system, say, Amazon's, just knows in real time which foods are running low, and we can deploy smart AI to keep those shops stocked. Maybe the AI can even try to optimize around weather patterns, temperatures, and annual data that have been collected for a long time. Maybe AI can smooth out food delivery to some of these rural locations.
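To make the reordering idea concrete, here is a minimal sketch in Python of the kind of logic involved, with entirely hypothetical demand figures and freight schedules; it is an illustration, not anything described in the article:

```python
# A minimal sketch of weather-aware reordering, with made-up numbers.

def reorder_point(daily_demand: list[float],
                  lead_time_days: float,
                  weather_delay_days: float = 0.0,
                  safety_factor: float = 1.5) -> float:
    """Stock level at which a remote shop should reorder.

    lead_time_days: normal freight lead time to the village.
    weather_delay_days: extra buffer when forecasts suggest freight
        planes may not be able to land (the volcanic-ash scenario).
    """
    avg_daily = sum(daily_demand) / len(daily_demand)
    effective_lead = lead_time_days + weather_delay_days
    return avg_daily * effective_lead * safety_factor

# Example: a shop selling ~12 quarts of juice a day, a 7-day freight
# cycle, and 3 extra days of forecast storm delay.
threshold = reorder_point([10, 14, 12, 11, 13], 7, weather_delay_days=3)
print(f"Reorder when stock falls below {threshold:.0f} quarts")
```

A real system would replace the fixed safety factor with a learned demand and weather forecast, which is where the "smart AI" Allhoff describes would come in.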

Also, search and rescue is another really big one, particularly as the Arctic has gotten really sexy with all the cruise ships and everything going up there and a lot of bush planes—it's more popular to fly around rural Alaska, rural Canada. Well, people can get lost and stuck. When you're off the road system it's not really easy to get ambulances in there or whatever.

One big application of AI would be to have some sort of network of unmanned drones that can survey the area and quickly identify if anyone has gotten lost or stuck and help find those people through, say, facial recognition, and be able to get that information back to wherever the nearest search-and-rescue group might be, which might not be within 500 or even 1,000 miles at some stages. If those areas can be surveilled by unmanned drones, that could actually make a difference in terms of saving people's lives.
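As a toy sketch of one piece of that pipeline, the code below routes a drone sighting to the nearest search-and-rescue base by great-circle distance; the base names and coordinates are hypothetical, not real station data:

```python
# Route a drone sighting to the nearest SAR base (toy example).

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical base locations (name, lat, lon) -- illustrative only.
BASES = [("Nome", 64.5, -165.4),
         ("Utqiagvik", 71.3, -156.8),
         ("Fairbanks", 64.8, -147.7)]

def nearest_base(lat, lon):
    """Return the closest base and its distance to the sighting."""
    name, blat, blon = min(BASES,
                           key=lambda b: haversine_km(lat, lon, b[1], b[2]))
    return name, haversine_km(lat, lon, blat, blon)

name, dist = nearest_base(66.0, -160.0)
print(f"Dispatch from {name}, ~{dist:.0f} km away")
```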

ALEX WOODSON: I know infrastructure is also something that's written about in the article. I thought those were some really interesting applications as well.

FRITZ ALLHOFF: On the infrastructure side, we're thinking more about—imagine that you need WiFi, for example, across a big region. It could be really hard to lay a fiber-optic cable, but you might be able to have things like smart buoys that float around in the water, talk to each other, and form a large WiFi network over thousands of miles. The AI could even help decide where they need to be positioned with regard to where the wind is, where the sun is, and whatever other atmospheric conditions could affect broadcast rates. Maybe these buoys can move around in the ocean and help give us better Internet infrastructure in these sorts of rural areas.
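For a rough sense of scale, here is a back-of-the-envelope sketch assuming a purely hypothetical per-buoy radio range; actual placement would have to factor in the wind, sun, and sea conditions mentioned above:

```python
# How many relay buoys would it take to bridge a stretch of ocean?
# The 25 km per-buoy range below is an assumption, not a real spec.

import math

def buoys_needed(span_km: float, radio_range_km: float) -> int:
    """Relays required so adjacent buoys stay within radio range."""
    links = math.ceil(span_km / radio_range_km)  # hops along the chain
    return max(links - 1, 0)                     # interior nodes are buoys

# Example: bridging 1,000 km of open water with a 25 km range.
print(buoys_needed(1000, 25))  # -> 39 relay buoys between the endpoints
```

The interesting AI problem is not the count but keeping that chain connected as the buoys drift, which is the positioning question Allhoff raises.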

There are a lot of minerals up in the Arctic, there's a lot of coal up in the Arctic, there's a lot of oil up in the Arctic. Those are hard places to access. Again, there are not a lot of safety resources, so anything that AI can do, even inside those mines, in order to measure the minerals that are available, measure any noxious fumes that are building up in the mines, or anything else, any way we can get AI to help keep people safe when they're more isolated would also be a strong application.

ALEX WOODSON: Are some of these applications that you've been speaking about in use already in Arctic communities?

FRITZ ALLHOFF: Not yet. I think we're getting close. A lot of the rural villages are wired for Internet, though not through the smart buoys I talked about.

For example, my field work is in rural Alaska, and there's an Alaskan telecom company, GCI, that has actually wired all the villages, so you've got tiny villages of 200–300 indigenous people living in rural Alaska, and they've got Internet. That's cool. That's a big win right there.

There are airlines that service those communities, so in terms of infrastructure I think even some of these places in rural Alaska are starting to feel more accessible.

Medicine is a really big deal because there aren't medical facilities in a lot of these really Northern communities. Potentially that's another application for AI, or at least telemedicine. If there aren't physicians in these villages, the way it works right now is that there tends to be itinerant care—there might be clinicians who spend one week in each village each year or something, so you save up all of your questions for that one week. But if we had telemedicine, then there could be real-time consultations with clinicians. In Alaska, if you're in a rural village, you could have a real-time consult with a clinician down in Anchorage or wherever. That potentially could really help health outcomes.

In the more distant future, we can think about health care robotics where you could have robots even out in villages that might be able to perform simple surgeries like appendectomies. Maybe they would be guided by clinicians in a hospital in Anchorage using a joystick to control the surgery, or maybe it would just be full AI; maybe the robot could just learn how to do the surgery.

I think those are more future applications, but in terms of health care delivery for rural communities you can see a lot of promise there.

ALEX WOODSON: Most of these sound very beneficial to people living in these communities, but what are some of the ethical concerns that you have about using AI in these environments?

FRITZ ALLHOFF: One is going to be cost. Just to stay with the medical AI, those devices would be about $1 million each. When you've got a community of 200–300 people living in rural Alaska, for example, is $1 million a good expense in order to get this medical robot in there or get this telemedicine unit in there? It might be, it might not be. I'm not sure.

There has been a lot of coverage on this in The New York Times and other places. It's very expensive to maintain those rural communities. Every one has a school. Those cost about $5 million to put in. They've got plumbing; that could be $5–10 million to put in. You do have ethical questions I think in maintaining these traditional communities that have been around for tens of thousands of years. It can get very expensive.

Some of them are under existential threat from rising sea levels. In other words, they're going to be underwater soon, within a couple of decades, let's say. If that's what it looks like, then is it worth pumping a bunch of money into them, or should the communities be relocated? That's very controversial, too, for lots of reasons. These are communities people have lived in for a long time, so relocation certainly doesn't appeal to a lot of them. I just think there are a lot of ethical questions in that regard.

ALEX WOODSON: Going back to the infrastructure, you and Patrick Lin draw a comparison to what China's doing in the South China Sea, building artificial islands. There are a lot of concerns about sovereignty when it comes to China doing that in the South China Sea. I could imagine there could be certain concerns about some of these infrastructure projects as well.

FRITZ ALLHOFF: Yes. That's a great point. The thought is that if you can do this geoengineering, you can try to leverage what's called the "exclusive economic zone." You get [200 nautical] miles off of your shore under international law, so if you can start making more islands and then start tacking on your [200-nautical]-mile circles in each direction, you can see how you could start to acquire pretty good territory.
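Some rough arithmetic, mine rather than the article's, shows why those circles matter: even a single new island can anchor a very large claim.

```python
# Sea area inside a 200-nautical-mile EEZ radius around one island,
# ignoring overlaps with existing zones (back-of-the-envelope only).

import math

NM_TO_KM = 1.852              # one nautical mile in kilometers
radius_km = 200 * NM_TO_KM    # ~370 km

area_km2 = math.pi * radius_km ** 2
print(f"Up to ~{area_km2:,.0f} sq km of new exclusive economic zone")
# -> roughly 431,000 sq km, larger than Norway
```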

That's a lot of what's going on up in the Arctic right now. There are a bunch of competing claims for the same territory, and a lot of those are now getting adjudicated by the United Nations.

But yes, building these sandbars in the South China Sea, or even seasteading more generally, where you just claim some land out in international waters, is going to be a big issue for sovereignty. And we have treaties that are supposed to deal with all that stuff, but these things are moving pretty quickly right now, and sometimes it's hard for the treaties to keep up.

ALEX WOODSON: One thing that's mentioned in the article is the UN Convention on the Law of the Sea. You just mentioned that treaties can't really keep up, so does there need to be some new Convention—maybe not specifically for AI in the Arctic—to encompass something like the infrastructure project you're talking about, the South China Sea issues? Does there need to be some kind of new governance mechanism that's not there yet?

FRITZ ALLHOFF: The UN Convention on the Law of the Sea is in an interesting place right now. What they requested was that all of the countries that want to put forward territorial claims in the Arctic submit scientific scans of things like their continental shelves and the seabed, to try to figure out geologically exactly where everything is. The important part is that it's not supposed to be a political process; it's supposed to be a technical process to image the sea floor and so on.

Countries had 10 years to put those forth, and now they're being reviewed, and it's going to be a while until the United Nations says what's what, but that process at least has been ongoing. I'm sure it's going to be controversial and that there will be more stuff to say, but I think people are loosely happy right now with the ability of that treaty to eventually figure out and say informative things about what's going on up there.

ALEX WOODSON: Thinking about your research into AI, can this be applicable to other extreme environments as well? Have you looked into deserts, rainforests, space, and things like that to use these new types of technologies as well?

FRITZ ALLHOFF: My research group, including Pat, has typically wanted to look at what we call "frontier ethics"—these new applications typically enabled by technology. Our big projects funded by the National Science Foundation so far have been on human enhancement—what are the ways we can change humans such that they're still human, or not human, or whatever.

The second big one we had—which is going to sound weird—was on cyberspace. It was trying to think through ways in which the cyber realm is similar or different from what we call the kinetic realm or the physical realm. So, what's the difference between a cyberattack and firing a missile? That's an example of a non-physical space, but it still has a lot of the same questions: How do we think of the cyber frontier as being the same or different from stuff we've got?

After cyber, we switched to the Arctic, and again the same sorts of questions: You've got a bunch of interests in development. What are the ways in which our existing regimes work or don't work and need to be extended or updated? And then outer space would be another one that we've started to look at a little bit. We kind of have a coherent research program, where we try to look at the next sorts of things that are coming onboard.

ALEX WOODSON: Great.

To shift this into a little bit more philosophical discussion—this is something that I've talked about in my last two AI podcasts with Mathias Risse and Heather Roff.

What are your thoughts broadly about AI being a moral agent? Is that something that we need to think about? Are we going to need to have different types of laws for AI-powered machines in the future, or do you think that they'll just be mostly machines?

FRITZ ALLHOFF: One of the philosophical questions is, imagine we have an AI and it does something evil. Do we think that the AI is morally responsible, or do we think the locus of moral blame should go somewhere else?—and the somewhere else would usually be the programmer or the design team or something like that.

My view—which is similar to Heather's—is that at least for now and probably for much of the foreseeable future we're generally going to blame the programmers or the design team. If I buy a sex robot and somehow get injured enjoying my sex robot, it's not that I sue the sex robot for injury, say, assault or negligence or whatever. I'm going to sue the company, the manufacturer that designed the sex robot. Maybe it wasn't safe for whatever reason, there could be a physical defect on it, or there could be a coding defect, maybe it didn't listen to my safe word or whatever. So when I'm interacting with the sex robot it seems that the natural place for me to put moral and legal responsibility would be on the designers.

Heather has a similar example where she talks about a Roomba, the vacuum cleaner thing. If the Roomba eats your cat, you don't send the Roomba to jail, you file a product liability claim, a tort claim, against the manufacturer or something like that.

I think that sounds right. I think when technology goes poorly we've got to apportion blame at least civilly—maybe criminally in some cases—but it seems to me that we're going to go back to the manufacturers.

When it gets more interesting—and maybe this is what Mathias has in mind—what if you have these AI consciousnesses that develop independently or maybe a programmer says "Go!" and they just go on their own, at what stage do we think the AI becomes its own locus of moral evaluation? If you watch a movie like Avengers: Age of Ultron, where you have this really advanced artificial intelligence—and no obvious programmer; it seems the intelligence has been smart enough to develop on its own and so forth—then it really does seem like you've got to do something about the AIs, but you're probably just going to turn it off. I'm still not thinking you're going to sue it for damages, I'm still not thinking you're going to put the AI in jail, whatever that would even mean. It would seem like you're just going to turn it off and treat it like a broken product.

I do have the intuition in cases like that that, yes, this is a bad AI, and it's even morally a bad AI, but for me, if I think about morality as tracking things like punishment, the AI is still not appropriately viewed as an object of punishment.

ALEX WOODSON: As you said, that's probably more in line with Heather Roff's view, that AIs will not be moral agents. Mathias Risse uses a term called "carbon chauvinism," which, as you explained a bit, is the idea that we're carbon-based; if something is silicon-based, could it develop its own kind of consciousness that we need to contend with?

When you hear Mathias's argument there—and this goes back to something that Adam Read-Brown wants me to ask, and I think it's a very interesting way to phrase it—is the basis of your disagreement with him philosophical, or does he understand the technology in a different way than you do?

FRITZ ALLHOFF: I don't know him personally. I do know Heather pretty well. My read on it would be—without knowing more of the details of his argument; I don't want to put words in his mouth—that we are disagreeing about the current state of the technology. I think maybe what he has in his mind—

He's not going to disagree with my sex robot example, he's not going to disagree with Heather's Roomba example. I think where we're seeing it differently is he's probably imagining something closer to the AI from Age of Ultron. In Heather's podcast she makes it pretty clear that AI is still at a very basic level. A lot of us watch these movies or whatever and think AI can do all of this crazy stuff—and it can win at Go and win at Jeopardy! Those are amazing accomplishments—but we don't really have to worry about the Ultron-like cases yet.

So I wonder if Mathias's analysis is presupposing much more advanced, futuristic AIs. I think Heather and I would have to rethink it if we got to those sorts of cases. I'm not sure we would necessarily disagree philosophically so much as we might just be talking about different time slices of the development of AI, where maybe he's more off in the future and she and I are more grounded in the present, which isn't a criticism. If that's his interest, it's important, and that's great.

ALEX WOODSON: It's great to have these discussions, and it's something that's very necessary as you said as the technology continues to develop.

FRITZ ALLHOFF: Yes.
