Do people display different racial biases towards black robots and white robots? A new study says yes
The majority of robots are white. Do a Google image search for “robot” and see for yourself: The whiteness is overwhelming. There are some understandable reasons for this; for example, when we asked several different companies why their social home robots were white, the answer was simply that white fits most conveniently with other home decor.
But a new study suggests that the color white can also be a social cue that results in a perception of race, especially when it appears in an anthropomorphic context, such as the outer shell of a humanoid robot. The same applies to robots that are black in color, according to the study. The findings suggest that people perceive robots with anthropomorphic features as having a race, and as a result, the same race-related prejudices that humans experience extend to robots.
Christoph Bartneck, the lead author of the study and a professor at the Human Interface Technology Lab at the University of Canterbury in New Zealand, presented the results at the ACM/IEEE International Conference on Human Robot Interaction (HRI) in Chicago earlier this year.
“We hope that our study encourages robot designers to create robots that represent the diversity of their communities,” Bartneck told me. “There is no need for all robots to be white.”
Bartneck suspected the research could prove controversial, but he and his collaborators—from Guizhou University of Engineering Science, China; Monash University, Australia; and University of Bielefeld, Germany—were determined to pursue the issue. “The discussion on this topic was like walking through a minefield,” he said, adding that their paper received extensive scrutiny from reviewers, some of whom accused the authors of sensationalism.
To learn more about the project, and the controversy surrounding it, we spoke with Bartneck via email. If you’d like more details on the methods used, statistical analyses applied, and numerical results, the full paper is available for download here.
IEEE Spectrum: Why hasn’t this topic been studied before, and what made you decide to study it? Why is this an important thing to study?
Christoph Bartneck: Many engineers are busy working on implementing the basic functions of robots, such as enabling them to walk and to navigate their environment. This occupies much of their attention, and the social consequences of their work are not particularly high on their priority list. Often robots are designed from the inside out, meaning that first all the functional parts of the robot are built and tested. Only at the end is some sort of cover added. How this cover affects human users, or more broadly, how the robot as a whole is perceived by its users, is more often than not an afterthought.
Therefore, racism has not been on the radar for almost all robot creators. The members of the Human-Robot Interaction community have worked for many years to better understand the interaction between humans and robots, and we try to inform robot creators on how to design robots so that they integrate into our society. Racism causes considerable damage to people and to our society as a whole. Today racism is still part of our reality, and the Black Lives Matter movement demonstrates this with utmost urgency. At the same time, we are about to introduce social robots, that is, robots designed to interact with humans, into our society. These robots will take on the roles of caretakers, educators, and companions.
A Google image search for “humanoid robots” shows predominantly robots with gleaming white surfaces or a metallic appearance. There are currently very few humanoid robots that might plausibly be identified as anything other than white or Asian. Most of the main research platforms for social robotics, including Nao, Pepper, and PR2, are stylized with white materials and are presumably perceived as white. There are some exceptions to this rule, including some of the robots produced by Hiroshi Ishiguro’s team, which are modeled on the faces of particular Japanese individuals and are thereby clearly, if they have race at all, Asian. Another exception is the Bina 48 robot, which is racialized as black (although it is worth noting that this robot was created to replicate the appearance and mannerisms of a particular individual rather than to serve a more general role).
This lack of racial diversity among social robots may be expected to produce all of the problematic outcomes associated with a lack of racial diversity in other fields. We judge people according to the societal stereotypes associated with their social categories, and these stereotypes at times play out in the form of discrimination. If robots are supposed to function as teachers, friends, or caretakers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as white. We hope that our study serves as a prompt for reflection on the social and historical forces that have brought a now quite racially diverse community of engineers to design and manufacture, almost entirely and seemingly without recognizing it, robots that our research suggests are readily identified by those outside this community as white.
What does racism mean in the context of robotics? How can a robot have a race if robots aren’t people?
A golden rule of communication theory is that you cannot not communicate. Even if robot creators did not intend to racialize their robot, people will still perceive it as having a race. When asked directly what race the robots in our study have, only 11 percent of people selected the “Does Not Apply” option. Our implicit measures demonstrate that people do racialize the robots and that they adapt their behavior accordingly: The participants in our studies showed a racial bias towards robots.
If robots can be perceived to have a race, what are the implications for HRI?
We believe our findings make a case for more diversity in the design of social robots, so that the impact of this promising technology is not blighted by racial bias. The development of an Arabic-looking robot, as well as the significant tradition of designing Asian robots in Japan, are encouraging steps in this direction, especially since these robots were not intentionally designed to increase diversity but emerged from a natural design process.
What specific questions are you answering in this study?
Do people ascribe race to robots, and if so, does the ascription of race affect people’s behavior towards them? More specifically, using the shooter bias framework, such a racial bias would be evidenced by participants being faster to shoot armed agents when they are black (versus white), faster to not shoot unarmed agents when they are white (versus black), and more accurate in their discernment of white (versus black) aggressors.
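To make those three indicators concrete, here is a minimal Python sketch that computes them from a list of trial records. It is illustrative only, not the authors’ analysis code, and the record format and field names are assumptions.

```python
# Hypothetical illustration of the three shooter-bias indicators described
# above; the trial records and their field names are invented for this
# sketch, not taken from the study.

from statistics import mean

def mean_rt(trials, race, armed, response):
    """Mean latency (ms) of correct responses in one condition."""
    rts = [t["rt_ms"] for t in trials
           if t["race"] == race and t["armed"] == armed
           and t["response"] == response]
    return mean(rts) if rts else float("nan")

def accuracy(trials, race):
    """Proportion of armed agents of a given race that were correctly shot."""
    armed = [t for t in trials if t["race"] == race and t["armed"]]
    return (sum(t["response"] == "shoot" for t in armed) / len(armed)
            if armed else float("nan"))

def shooter_bias_summary(trials):
    return {
        # Bias predicts faster shooting of armed black agents: negative gap.
        "shoot_rt_gap_ms": mean_rt(trials, "black", True, "shoot")
                           - mean_rt(trials, "white", True, "shoot"),
        # Bias predicts faster holding of fire for unarmed white agents: positive gap.
        "hold_rt_gap_ms": mean_rt(trials, "black", False, "hold")
                          - mean_rt(trials, "white", False, "hold"),
        # Bias predicts more accurate discernment of white aggressors: positive gap.
        "accuracy_gap": accuracy(trials, "white") - accuracy(trials, "black"),
    }
```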
Results of a Google image search for the term “robot.”
Can you describe the method you used to study these questions, and why you chose this particular method?
The present research examined the effect of racialized robots on participants’ responses in the shooter bias task, a task widely used in social psychological intergroup research to uncover automatic prejudice towards black men relative to white men. We conducted two online experiments to replicate and extend the classic study on shooter bias towards black agents. To do so, we adapted the original research materials by Correll et al. and explored the shooter bias effect in the context of social robots racialized as either black or white agents.
As in previous work, we explored the shooter bias using different response windows and focused on both error rates and latencies as indicators of an automatic bias. In shooter bias studies, participants are put into the role of a police officer who has to decide whether or not to shoot when confronted with images of people holding either a gun or a benign object. The image is shown for only a split second, and participants do not have the option to rationalize their choices; they have to act in less than a second.
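As a rough illustration of the trial structure Bartneck describes, a brief stimulus followed by a forced response before a sub-second deadline, here is a simplified Python sketch. The stimulus set, response options, and timing values are assumptions, and a real experiment would use a stimulus-presentation toolkit rather than the simulated keypresses used here.

```python
# Simplified sketch of a shooter-bias trial loop: a stimulus appears briefly,
# and the participant must respond "shoot" or "hold" before a deadline.
# Stimuli, response options, and timings are illustrative assumptions only.

import random

STIMULI = [
    {"race": "black", "armed": True},  {"race": "black", "armed": False},
    {"race": "white", "armed": True},  {"race": "white", "armed": False},
]
RESPONSE_WINDOW_S = 0.85  # forced deadline, under one second

def get_response(deadline_s):
    """Placeholder for real keyboard polling; here we simulate a keypress
    and its latency instead of reading actual input."""
    rt = random.uniform(0.3, 1.0)
    if rt > deadline_s:
        return None, deadline_s          # timeout: no response recorded
    return random.choice(["shoot", "hold"]), rt

def run_block(n_trials=20):
    records = []
    for _ in range(n_trials):
        stim = random.choice(STIMULI)
        # A real experiment would flash the stimulus image for a split
        # second here before polling for the response.
        response, rt = get_response(RESPONSE_WINDOW_S)
        correct = response == ("shoot" if stim["armed"] else "hold")
        records.append({**stim, "response": response,
                        "rt_ms": rt * 1000, "correct": correct})
    return records
```

Records produced this way carry the same fields as the hypothetical analysis sketch above, so the two snippets can be chained to compute the bias indicators.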
What were the results of your study?
Our study revealed that participants were quicker to shoot an armed black agent than an armed white agent, and faster to refrain from shooting an unarmed white agent than an unarmed black agent, regardless of whether the agent was a human or a robot. These findings demonstrate a shooter bias towards both human and robot agents. The bias is a clear indication both of racism towards black people and of the automaticity of its extension to robots racialized as black.
Were the results what you expected?
Given the novelty of our research questions, we did not have a clear prediction of the results. We really did not know whether people would ascribe a race to a robot and whether this would affect their behavior towards it. We were certainly surprised by how readily people ascribed a race to robots when asked directly, particularly since the “Does Not Apply” option was listed first. In studies of racism, implicit measurements are normally preferred over explicit measures, since people tend to give socially acceptable responses. Barely anybody will admit to being racist when asked directly, while many studies using implicit measures have shown that even people who do not consider themselves racist exhibit racial biases.
Are you concerned that asking people questions about robots in a racial context makes people more likely to ascribe race to them?
This would hold true for explicit measures. Asking what race a robot might have suggests that there is at least a possibility for a robot to have a race, regardless of whether a “Does Not Apply” option is offered. The implicit measurements allow us to study racial biases without leading the participants on. During the main part of the study, race was never brought up as the topic of investigation.
Can you discuss what the limitations of your current research are? How would you improve research in this area in the future?
We may speculate that different levels of anthropomorphism would produce different outcomes. If the robot were indistinguishable from a human, we would expect to find the same results as the original study, while a far more machine-like robot might have yet-to-be-determined effects. One may also question the racialization approach we used: To best replicate the original shooter bias stimuli, we opted for a human-calibrated racialization of the Nao robot rather than the Nao’s default appearance (white plastic), comparing it against the same robot stylized with black materials.
It is also worth noting that the Nao robot did not wear any clothes, while the people in the original study did, and that, strangely, the people in the original stimuli did not cast shadows. Using Adobe Photoshop, we were able to create a more realistic montage by having the Nao robot cast a shadow against the background. Future studies should include multiple postures of the Nao robot holding the gun and the other objects.
It sounded like there was some hesitation about accepting the paper into the HRI conference. Can you elaborate on that?
The paper submitted to the HRI conference went through an unparalleled review process. Our paper was around 5,000 words, and the reviews we received added up to around 6,000 words. The paper was discussed at length during the conference program committee meeting, and a total of nine reviewers were asked to evaluate the study. The paper was conditionally accepted, and we were assigned a dedicated editor who worked with us to address all the issues raised by the reviewers. Addressing all the arguments and finding appropriate responses pushed the authors and the editor to their limits.
The method and statistics of our paper were never in doubt. Most of the reviewers were caught up in terminology and language; we were even accused of sensationalism and tokenism. These objections were ideological in nature. For example, the term “Caucasian” was considered inappropriate, which makes little sense from a European perspective. The use of the term “color” was also deemed inappropriate, which made it difficult to talk about the light-absorbing properties of the robots’ shells. We were instructed to use the term “melanation” instead; while this might be a more scientific term, it makes it difficult to talk about the study with the general public.
Before the conference, I contacted the program chairs and suggested a panel discussion at the conference to allow for a public debate on the topic. Although the chairs were initially enthusiastic, the proposal was eventually turned down. I then proposed to use the presentation slot assigned to the study for a small panel discussion. After an initial okay, and after I had solicited two experts who agreed to participate, the conference organizers forbade this panel as well: The day before the presentation slot, I was instructed to present the paper without any commentary or panel discussion.
What this shows is that our academic community is struggling to address controversial social issues. All attempts to have an open discussion at the conference about the results of our paper were turned down. The problem I have with this is that it inhibits further studies in this area: Why would you expose yourself to such harsh and ideology-driven criticism? I think we need a supportive and encouraging culture for conducting studies on problematic topics.
What was the reaction to your presentation at HRI and afterwards?
The presentation of the study at the conference was extremely well attended, and the international media widely reported on the results, with the notable exception of the U.S. This is particularly problematic since our study was conducted with participants exclusively from the U.S.
How would you like to see the results of your research applied?
We hope that our study encourages robot designers to create robots that represent the diversity of their communities. There is no need for all robots to be white.
What are you working on next?
We are currently conducting a follow-up study in which we expand the range of the robots’ surface colors to include several shades of brown. In addition, we are investigating to what degree the anthropomorphism of the robot may influence the perception of race.
“Robots and Racism,” by Christoph Bartneck, Kumar Yogeeswaran, Qi Min Ser, Graeme Woodward, Robert Sparrow, Siheng Wang, and Friederike Eyssel, from the University of Canterbury, Monash University, Guizhou University, and University of Bielefeld, was presented at HRI 2018 in Chicago. You can download the full paper here.