Considering Biases in AI and the Role of the Arts

Note: This post is Part I of a two-part series on the issues surrounding AI and bias and the role the arts play in addressing them. Part II can be found here.

In conjunction with the Miller Institute for Contemporary Art’s latest exhibition, “Paradox: The Body in the Age of AI,” the gallery hosted a symposium on January 28, 2019. The symposium, “Paradox: Frames and Biases in Art and AI,” promised a panel discussion exploring both how bias is built into artificial intelligence and what role art plays in relation to AI and bias. Held on the third floor of the Miller ICA, the event was well attended, and by the time the panel began there were more attendees than seats available.

Entrance wall text for the exhibition at the Miller ICA.

The event was moderated by Eunsu Kang, a Korean media artist specializing in the creation of interactive audiovisual installations and AI artworks. The five panelists were Manuela Veloso, Alexandra Chouldechova, Sey Min, Jillian Mayer, and Kerry Doran. Following their introductions, the discussion centered on three predominant questions:

What do you see as the most pressing issue arising from the rapid development of AI technologies in general and/or in relation to the creation of new artworks with the assistance of AI? 

Relatively quickly the panel brought up the issue at the center of the evening: bias in artificial intelligence and machine learning. Alexandra Chouldechova addressed some of her current research in the field and explained that AI systems tend to be biased because bias is already present in the data used to build them. The algorithm itself holds no prejudice; rather, the people creating it and the data used to train it are subject to implicit or explicit biases. Like a child, an AI system begins knowing nothing and learns from the teachings and inputs of those around it. Thus, the panel reflected, to eliminate bias from AI completely we would essentially need to identify the exact mechanism by which bias arises in ourselves.

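The panel’s point that a model simply mirrors whatever patterns sit in its training data can be made concrete with a toy sketch. The example below is my own illustration, not something presented at the symposium: it trains a small classifier on synthetic “historical” hiring decisions in which one group was held to a stricter standard, and the model then reproduces that disadvantage for new candidates with identical qualifications. All of the data, features, and thresholds are hypothetical.

```python
# Toy illustration (not from the panel): a model trained on biased
# historical decisions reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Each synthetic applicant has a skill score and a group label (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Hypothetical biased history: group 1 was held to a stricter standard,
# so the historical "hired" labels already encode a human bias.
hired = (skill - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

# Train on the biased history, with group available as a feature.
X = np.column_stack([skill, group.astype(float)])
model = LogisticRegression().fit(X, hired)

# Two new candidates with identical skill but different group labels.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate receives a noticeably lower "hire" probability:
# the model has simply learned the bias already present in its labels.
```

Notably, dropping the explicit group column would not by itself fix this, since other features can act as proxies for group membership; the bias lives in the labels and correlations, not in any single column.
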
Along with bias, the panel also discussed the issue of humans as users and creators of AI technology. Manuela Veloso brought up the point that we cannot control how people will use this technology; there is little regulation and, in some respects, little control over how it will be used. A piece of technology may be designed to benefit, but it can still be used in harmful ways depending on who controls it. The spectrum of what we can do with technology is vast, and what matters most is our intention.

Why do you think we need to acknowledge that issue?

Alexandra Chouldechova also mentioned that, as AI has continued to develop, instances of bias have been found within these systems. This past October, Amazon was reported to have developed and scrapped an artificial intelligence recruiting tool that displayed bias against women. The tool was trained to identify patterns in resumes submitted over the previous ten years in order to filter applications. Unfortunately, based on that data set, the system taught itself to prefer male candidates. Chouldechova suggested that because Amazon had a culture that appeared to value male employees, intentional or not, it may not be surprising that a system trained on that data reflected the same bias. She also noted that machine learning cannot tell us whether it is the environment itself that needs to change; those are questions that need to be discussed during the design and development phases.

AI has also spilled over into the realm of facial recognition. A 2018 article from The New York Times reported that a study of facial recognition software used to identify gender from photographs found error rates as high as 35 percent for darker-skinned women, while error rates for lighter-skinned men were below one percent. Using these imperfect mechanisms can thus lead to biased results, and the panel noted that combating this type of bias requires active intention.

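The disparity the Times article describes only becomes visible when accuracy is measured separately for each group rather than as a single aggregate number. The snippet below is a minimal sketch of that auditing idea using made-up labels and group assignments; it illustrates disaggregated evaluation in general and is not code from the study itself.

```python
# Minimal sketch of a disaggregated error-rate audit. The labels,
# predictions, and group assignments below are entirely made up.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1, 1, 1])  # model predictions
group = np.array(list("AABBABABAB"))                # demographic group ids

# A single aggregate number can hide large gaps between groups.
print(f"overall error: {np.mean(y_true != y_pred):.2f}")

for g in np.unique(group):
    mask = group == g
    err = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error {err:.2f} over {mask.sum()} samples")
```

In practice an audit like this would use much larger, carefully sampled test sets, but the principle is the same: report error rates per group, not just a single overall accuracy.
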
Do you think the power of art and creativity can help solve the most pressing challenges connected to AI’s increasing influence on society?

When asked to comment on the responsibility of art in addressing these issues, panelist Jillian Mayer said she didn’t necessarily feel it was the responsibility of artists to solve all of these problems. Rather, she argued, it is the job of artists to keep raising questions about the issues surrounding AI. Art is also one way of bringing these issues to a broader public audience, since it can serve as a lens through which to view technology. Art can make technology and data human again because of its ability to look at something differently, and the aim of artists working with AI and technology is to show us the issues that may otherwise be invisible to us, such as the realization that once humans are involved there may be little to no way of creating AI systems with pure objectivity. This discussion created an opening for audience questions and an open forum with the panelists.

Based on questions from the open forum, audience interest leaned toward the future of AI and its implications for creative expression. The panelists noted that AI in its current form lacks certain qualities of humanity, which they felt are part of what gives art its value. This, in their view, is the difference between AI being able to create good art and great art. With this in mind, they explained that it is theoretically possible for AI to be creative and produce great art, provided we can either introduce chaos into the AI system or define “great” precisely enough to express it as a mathematical algorithm.

Interactive work from the Paradox exhibit.

At the conclusion of the panel, audience members were encouraged to explore the exhibition, which included several interactive pieces. Curated by Elizabeth Chodos, the exhibition ran from October 5, 2018, to February 3, 2019. Featured artists included Zach Blas, Brian Bress, Nick Cave, Kate Cooper, Stephanie Dinkins, Jes Fan, Claudia Hart, Eunsu Kang, Jillian Mayer, Sarah Oppenheimer, and Siebren Versteeg.

Read Part II of this series, Using AI Powered Art to Increase Social Equity, here.

References:

[1] Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 9, 2018, accessed February 10, 2019, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[2] Ibid.

[3] Ibid.

[4] Steve Lohr, “Facial Recognition Is Accurate, if You’re a White Guy,” The New York Times, February 9, 2018, accessed February 10, 2019, https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

[5] Ibid.