Sophia the robot, speaking at the AI for GOOD Global Summit, 2017. CREDIT: ITU Pictures (CC)

Artificial Intelligence's Ethical Challenges

Feb 26, 2018

Lily Zacharias is a student at Bard College majoring in Political Studies and concentrating in Gender and Sexuality Studies. She's currently a research assistant for Professor Michelle Murray and has previously worked as a research assistant and development associate at the World Policy Institute.

ESSAY TOPIC: In your opinion, what is the greatest ethical challenge facing the world today?

The greatest ethical challenge we face has yet to be fully realized. In fact, we have only experienced a taste of what's to come and of the questions that will be raised. This challenge is so great precisely because we have yet to experience it, or anything comparable, in full. The challenge I'm talking about is artificial intelligence (AI) and the technology that implements it, machine learning (ML). This issue has recently been ushered even further into public consciousness by Sophia, the robot developed by Hanson Robotics. Saudi Arabia recently granted Sophia citizenship, a move that brought AI and ML, and the serious ethical questions that come with them, into mainstream discourse. Sophia gained celebrity status and has been invited onto talk shows across the world. The most common question she gets is whether she thinks robots will destroy humanity. More important, though, are the less visible applications of ML. We do not have to worry about Sophia or glorified chatbot robots like her destroying the world, but we should be concerned about the myriad new, complex, and challenging ethical issues that AI and ML are already posing across the world.

It proves useful to think about AI through the lens of nuclear power. Nuclear power promised to provide cheap, low-carbon energy to the world and offset our dependence on fossil fuels. In the early years of its development, however, the research that would later enable peaceful applications was deeply entangled with weapons research. By the time the consequences and risks of that research became apparent, it was already moving too quickly to stop, implement safeguards, or consider possible global solutions to the ethical implications of nuclear power. Either there was no time to prepare the world or no one bothered to stop and do so. It was only after Hiroshima and Nagasaki, once the weapons already existed, had been used in combat to kill more than a hundred thousand people, and had already changed the world, that the international community grappled with these consequences. Even then, the responses, deterrence strategies and the International Atomic Energy Agency (IAEA), developed in parallel, both considered necessary to halt the proliferation and use of nuclear weapons. Deterrence strategies, however, come with their own risks and ethical problems. Beyond the death toll and financial drain of nuclear weapons, the result is that peaceful use of the technology has largely been an afterthought that receives relatively little funding.

A comparison can be drawn with the development of artificial intelligence and machine learning technology. This, too, is a technology that will fundamentally change the world and the institutions that shape it. Like nuclear power's capacity to provide cheap, low-carbon energy, ML could revolutionize medicine and pharmaceuticals. For example, Google developed a machine learning algorithm that diagnosed cancerous tumors on mammograms 16 percent more accurately than human doctors. And like nuclear weapons, AI research and development is going to be shaped by state interests. Foreign Policy recently published an article titled "The Next Space Race Is Artificial Intelligence." Authors Allen and Husain are correct that this is, and will likely continue to be, the state of AI, but they are wrong to suggest that we should embrace this competitive attitude and pour money into the field without thinking twice.

States will be more interested in investing in other applications of this technology, applications that will unfortunately be more ethically challenging and dangerous. States are interested in using AI and ML in three major areas: judicial systems (both policing and the courts), intelligence, and the military. By definition, machine learning systems develop their own models on which to act or make assessments. This makes transparency a serious concern, as governments will only be able to justify decisions made on the recommendations of these algorithms up to a certain point. For democratic countries, this could diminish the importance of transparency. For reformers or activists working towards democracy, it could give those in power the ability to deem calls for transparency outdated. In states like China, where transparency is not valued even in name, AI is already being implemented in policing apparatuses. Facial-recognition technology is paired with artificial intelligence to more accurately measure and characterize facial features, allowing surveillance cameras to identify jaywalkers. China already has plans to use AI to predict terrorism and social unrest. Different countries, however, define these terms very differently. Though specific definitions of terrorism and social unrest may be unethical with or without AI and ML, AI and ML will allow states to act on those definitions more efficiently, regardless of their ethical standing. In the United States, courts are moving towards using risk assessment algorithms to predict the risk of criminality. In May 2016, ProPublica published an article about Northpointe's risk assessment algorithm, which generates a report with sentencing recommendations that is presented to a judge. Because we have yet to figure out how to eliminate human bias from these programs, and because the data sets they learn from reflect that bias, many of them replicate the racial biases already present in our criminal justice systems. The lack of regulation of this kind of technology on a national or global scale means that nations will continue to equip police forces and criminal justice systems with AI- and ML-fueled technology without properly safeguarding against ethical issues.
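To make the bias-replication point concrete, here is a minimal, hypothetical sketch in Python. It is not Northpointe's actual system; the data, feature names, and numbers are entirely synthetic and chosen only to illustrate the mechanism: a risk model trained on historically skewed labels can score an over-policed group as higher risk even when the sensitive attribute itself is excluded from the inputs.

```python
# Toy illustration (synthetic data): how biased historical labels leak into
# a "neutral" risk model through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (e.g., membership in an over-policed group).
group = rng.integers(0, 2, size=n)

# A seemingly neutral proxy feature (e.g., neighborhood) that happens to
# correlate with group membership, as is common in real-world data.
proxy = group + rng.normal(0, 0.5, size=n)

# Underlying risk is identical across groups by construction...
true_risk = rng.normal(0, 1, size=n)

# ...but the historical labels (e.g., prior arrests) over-record one group.
labels = (true_risk + 0.8 * group + rng.normal(0, 1, size=n) > 0.5).astype(int)

# Train only on the proxy feature: the sensitive attribute is "removed".
model = LogisticRegression().fit(proxy.reshape(-1, 1), labels)
scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

# The model still assigns systematically higher risk to the over-policed group.
print("mean predicted risk, group 0:", round(scores[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(scores[group == 1].mean(), 3))
```

In this toy setup the underlying risk is the same for both groups; the gap in predicted scores comes entirely from the biased labels leaking through the correlated proxy, which mirrors, at a much smaller scale, the dynamic described in the paragraph above.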

As with nuclear power, the world will likely only confront the ethical implications of AI and ML technology after it has been weaponized and militarized. The appeal is understandable: fewer soldiers are required (meaning fewer casualties), efficiency increases (costs decrease in the long run), and wars are more easily won. Some nations, like the U.S., already have policies against fully autonomous weapons, but that does not eliminate the risk. Russian President Putin said, "Artificial intelligence is the future. Whoever becomes the leader in this sphere will become the ruler of the world." Russia has already invested in the development of an AI-powered missile. While it is unlikely to happen anytime soon, the fear is that a nation would allow a weapon like this both to designate something as a target and to fire at it without requiring human approval. By the time something on the scale of the bombings of Hiroshima and Nagasaki happens and demands action, it will already be too late. Much of the smaller-scale damage will already have been done: political dissidents targeted and jailed, loans systematically denied to marginalized groups because algorithms learned from biased data sets, labor markets radically disrupted without proper safety nets in place. And it will have been done by technology we have so thoroughly integrated into our lives that we depend on it.

If we are to overcome the challenges posed by machine learning technology, global interventions into the current research and investment landscape must be combined with the introduction of ethical guidelines to undermine and prevent the budding arms race among the United States, China, and Russia. Right now, short of temporarily halting all public and private investment in, and contracts utilizing, machine learning technologies, the best chance we have of getting this problem under control before it permeates nearly every aspect of our lives is to establish an international institution akin to the IAEA: one that develops recommendations for applying human rights law to machine learning technology, monitors nations that use it to infringe on human rights, and ensures that the research being done is ethical. The existence of an organization with a major budget dedicated to machine learning technology would push private-sector initiatives toward peaceful applications instead of leaving them to accept funding from organizations like the CIA. Down the road, such an institution might lead to a treaty like the Nuclear Non-Proliferation Treaty, in which states agree to engage only in the kinds of machine learning research deemed peaceful by the organization and to abide by citizenship guidelines that prevent the kind of AI theater Saudi Arabia has already engaged in. Historically, some kind of crisis or mass tragedy has been required to set up these kinds of treaties and institutions. With ML, we have the opportunity for prevention. Getting the U.S., Russia, and China to sign on to these solutions would be another challenge, given how much they stand to benefit from a world in which AI is unregulated, but it is clear that the consequences will be grave if they don't.

To access the other winning essays, click here.

