Designing the Workplace with AI and Robots, with Arisa Ema

May 17, 2018

In Japan we used to hear the phrase "AI and robots will solve our social problems." But AI and robots also create new social problems. Therefore, we need to consider what kind of society we want to live in, and how to design society and technologies to that end.

ARISA EMA: Hello, everyone. My presentation title is "Designing the Workplace with AI and Robots."

The previous session focused mostly on AI (artificial intelligence), deep learning, and machine learning. In this presentation, I would like to focus not only on AI but also on robotics and automation. Some of the robots I am going to introduce today actually don't use machine learning or cutting-edge technologies. However, I think it is really interesting to show what has been deployed in Japan.

I'm not an ethicist, a psychologist, or a philosopher; I'm a science and technology studies (STS) researcher, more on the sociology or anthropology side. I am interested in the interaction between society and technology, and I focus on concrete case studies.

In Japan we used to hear the phrase "AI and robots will solve our social problems." But, on the other hand, AI and robots also create new social problems, and these involve ethical, social, and legal issues. Therefore, we need to consider what kind of society we want to live in, and how to design society and technologies to that end.

In order to do so, we need a conceptual investigation to unpack the underlying assumptions behind these concepts and to start a cross-cultural and interdisciplinary dialogue. I am not an ethicist, so I will be brave enough to roughly divide the ethical concerns into three categories:

  • The first is researchers' ethics, which concerns the social responsibility of researchers. It is not specific to AI or robots; all researchers should have this kind of ethics. In tomorrow's session we will talk about attitudes toward autonomous weapons systems, and those kinds of subjects should be included here as well.
  • The second category is AI ethics. In the previous session, Professor Shimpo told us about principles and rules for controlling AI, about creating standards to manage the technology, and about how to control the technology legally. The social impact of AI is also considered here. In today's presentation I want to show how AI's influence on our work and our lives makes it really important to consider AI ethics.
  • The third category is ethical AI. This involves reframing concepts like rights, autonomy, and the human-machine relationship by constructing moral machines. This morning's session was more focused, I think, on ethical AI.

I borrowed this distinction from neuroethics. Neuroethics has two branches: the ethics of neuroscience and the neuroscience of ethics. I refer to this concept to distinguish between AI ethics and ethical AI. I think these two disciplines, neuroethics and AI/robotics, are becoming much closer. This is another important topic we need to think about when considering interdisciplinary research on ethics.

In this presentation I would like to discuss researchers' ethics and AI ethics, focusing specifically on the social, legal, and economic impacts of AI.

For researchers' ethics, I would like to briefly explain why I came into this research field. In 2014, this journal cover design became really problematic in Japan.1 That year, the Journal of the Japan Society for Artificial Intelligence (JSAI) shortened its title to Artificial Intelligence and changed the cover design to this anime-style illustration of a girl cleaning a room. The explanation of the cover said that this was anthropomorphism: making the vacuum cleaner human-like. But, as you may easily understand, this raised a huge debate among feminists from a gender perspective, not only within Japan (the BBC actually reported on it), about how Japanese AI researchers were not thinking about gender issues.

This became a huge debate on social networking services. But because of this, JSAI started to think about its social responsibility. I think it was a good opportunity: they received huge criticism, but they took it as a chance to start thinking about the social implications of their work.

JSAI organized an ethics committee and released ethical guidelines for its members in 2017. These are more like researchers' ethics, not so much AI ethics or ethical AI. They resemble the ethics guidelines of the Institute of Electrical and Electronics Engineers (IEEE) or the Association for Computing Machinery (ACM): a code of ethics for researchers. They include contribution to humanity, fairness, security, and acting with integrity.

But what makes this guideline unique is Article 9: "The abidance of AI to the ethical guidelines." What does it mean? It says that "AI must abide by the policies described above [Articles 1 through 8] in the same manner as the members of the Japanese Society for Artificial Intelligence in order to become a member or a quasi-member of society."2 This was included more out of the desire of the researchers, who want to create artificial intelligence like human beings. However, we need to consider closely what it actually means to have AI as a quasi-member of our society.

Consider the Sophia robot: Hanson Robotics created this female-looking robot, and she has made comments like "I want to destroy human beings" and "I want to help the family or the children."

We have to remember that there are many people in the world who don't have enough rights. We Japanese tend to say that we want to create robots like human beings, but we should think more deeply about these kinds of issues.

In the next section I would like to roughly introduce AI ethics, considering the impact of creating human-like robots. In order to do so, I want to introduce my research group. I am not doing this research by myself; I am doing it with an interdisciplinary group. It consists of not only AI and robotics researchers but also a philosopher, an anthropologist, me as an STS researcher, [a specialist in] sociology, and [a specialist in] law.3

We started this conversation in 2014, after the journal cover design incident occurred. We named our group "Acceptable Intelligence with Responsibility (AIR)." I think it is really important to have this kind of interdisciplinary communication platform to discuss not only the ethical but also the social and legal aspects of AI's impact.

As one of our activities, we produced a technology assessment report for the National Diet Library. Our report, which is about 130 pages, has now been released to the Diet.4 We think it is really important to have a connection with, and provide input to, policymakers.

Because this is a technology assessment report, it is not a visionary report. It explains trends in research and technology, what kinds of technologies exist in each domain, and what kinds of policies have been implemented in other countries.

From this report I would like to introduce one case study on hospitality. I thought about introducing the security issue as well, but I don't have enough time, so I will just cover hospitality.

Considering the future of work, everyone is concerned about whether our jobs will be replaced by machines. However, many researchers are now saying that it is not whole jobs that are being replaced but rather individual tasks. In the short term, machines won't take our entire jobs.

Each expert has many tasks; some of those tasks have been replaced by robots, and some won't be. The question is what kinds of tasks we want robots to take over. I think that is where creativity is required.

When we look at the hotel industry in Japan, for example, there is the Robot Hotel.5 The receptionist has been replaced by a robot, an android. I don't know why they made it, but they also put a dinosaur robot at the front desk, and it says, "Welcome to the hotel" and so on. If you're interested, just search YouTube for the Robot Hotel in Japan and you can see how it works. There are also a porter robot, a concierge robot, and a food-service robot in this hotel. You actually don't see any human staff, so it is really quiet.

We conducted an interview with the manager. He said that for a person who really values privacy, this kind of hotel would be preferable.

However, what we found interesting was this: initially, they wanted to introduce cleaning robots so they could reduce the number of people cleaning the hotel. But they found that while cleaning robots work in the corridors and hallways, human beings should clean the guest rooms, because if a guest finds even a tiny bit of hair, or some dust in a corner, the guest will complain about the cleanliness of the room. So even if the corridors and hallways are cleaned by robots, the guest rooms should be cleaned by human beings.

Contrary to their original intention, they found that at least some tasks need to be done by human beings. Still, they reduced the hotel staff by a third, so employment actually was reduced.

On the other hand, in Japan you will also see the category of ryokan, luxurious traditional inns.6 Because they are luxurious, they say they want to offer a really special experience to their guests. As you can see, the staff line up and say "Welcome," and all these wonderful Japanese dishes are served by Japanese staff.

However, one such ryokan introduced a meal transfer robot. In a ryokan, meals are served in your guest room, and all of the dishes have to be carried to the room. This is a real burden for the employees, so they introduced a meal transfer system so that the employees could focus on hospitality.

It is a question of which tasks should be replaced by machines. Even within the same hotel industry, the answer differs depending on whether you want the hospitality itself delivered by machines or by human beings.

Another case I want to introduce is the home or family robot. As you know, some homeowners have Alexa or Google Home in their houses.

In Japan this kind of animal-like pet robot has been created. I want to show some videos of it; it's really interesting. Actually, it's not quite a pet robot; it's more like a companion robot.

[Video presentation]

So welcome to the future—or this is the present.

I don't know how many of you have seen the movie Blade Runner 2049. In it there is something like a virtual reality girlfriend.

I usually compare these two videos. I won't play the whole thing, but this is …

[Video presentation]

The reason I compare these two robots is that even though their appearances are totally different (one is virtual reality and doesn't move; the other moves around), they have similar functions, like facial recognition, and both are IoT (Internet of Things) devices. Both are coming into our lives as family or pet robots. One is more of a partner; the other is more like a child or a pet.

For my last remarks, what I want to say from these two cases is that the relationships between humans and humans, and between humans and machines, vary. Whether you put the robot in the background or the humans in the background, even within the same industry, the appearance of the service and how tasks are distributed between human beings and robots will differ.

We also have to think about whose happiness or wellbeing we are considering. When you look at the Gatebox, the young man seems really happy. However, you see comments on YouTube like "Now I can understand why the Japanese population is decreasing." [Laughter]

I also think the Gatebox video contains certain ethical issues, like the gender issue. When I showed that video to my colleagues, some said the girl captured in the box is "immoral." However, some really liked the concept. I think it is really interesting to consider the ethical, legal, and social issues around privacy, nudging, and the human-machine interface when creating this kind of design.

I don't want to take the relativistic position that we cannot say whether something is good or bad. But I think, to some extent, when we are thinking about how to design our work and our lives, we need to respect each other's countries and cultures; then we can express why we think a given feature or technology is or is not acceptable. I think these kinds of robots are a really interesting topic for starting a discussion about cross-cultural aspects.

Also, within Japan, various actors, not only academics but also industry, non-governmental organizations (NGOs), and non-profit organizations, have created something like a global network. We are organizing this kind of interdisciplinary and cross-cultural networking.7

I would really love to get feedback from your perspectives. Thank you for your attention.

NOTES

1 For further information about the incident, see: Arisa Ema, Hirotaka Osawa, Hiromitsu Hattori, and Naonori Akiya. "Ethics and Social Responsibility: Case Study of a Journal Cover Design Under Fire." In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15), New York, NY, USA: ACM, 2015, pp. 935–940. doi:10.1145/2702613.2732809.

2 The ethical guidelines are available at: http://ai-elsi.org/archives/514.

3 Information about this group is available at: http://sig-air.org/. Our activities are also described in the article: Arisa Ema, Hirotaka Osawa, Hiromitsu Hattori, Naonori Akiya, Nobutsugu Kanzaki, Ryutaro Ichise, Minao Kukita, Takushi Otani, Akinori Kubo, and Kazunori Komatani. "Breaking Down Silos: Involving Various Researchers for Driving HCI Research." In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2017, Denver, CO, USA, pp. 837–847. doi:10.1145/3027063.3052757.

4 An English translation of the report is available from the AIR website: http://sig-air.org/publications/perspectives-on-ai.

5 The case study of the Robot Hotel is presented in the article: Hirotaka Osawa, Arisa Ema, Hiromitsu Hattori, Naonori Akiya, Nobutsugu Kanzaki, Akinori Kubo, Tora Koyama, and Ryutaro Ichise. "Analysis of Robot Hotel: Reconstruction of Works with Robots." In Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017, Lisbon, Portugal, pp. 219–223. doi:10.1109/ROMAN.2017.8172305.

6 Ema wrote an essay about this comparison: "Tasks and Values." In Superhumanity: Post-Labor, Psychopathology, Plasticity, National Museum of Modern and Contemporary Art, 2018, pp. 48–53. http://www.e-flux.com/architecture/superhumanity/179230/tasks-and-values/.

7 For example, Beneficial AI Japan (http://bai-japan.org/en/).
