The Impacts of Big Data and AI on the Arts, Our Culture and Society

What are the implications of AI and algorithmic governance for culture and the creative industries? To answer these and other questions, the Arts Management and Technology Laboratory gathered a panel of experts on campus at Carnegie Mellon University. Working across arts, media, data, and technology, the panelists discuss frameworks for understanding how power flows within and between these industries.

This episode of the Tech in the Arts podcast is a recording from the program, held in April. The discussion is moderated by Lead Researcher Ian Hawthorne, featuring:

  • Eleanor Mattern, Director of the Sara Fine Institute and a teaching assistant professor at the University of Pittsburgh's School of Computing and Information. Her teaching and research interests include archives and digital curation, community-centered information work, civic engagement, and information policy and ethics.

  • Emma Slayton, Data Education Librarian at the Carnegie Mellon University Libraries. Emma is an expert in data and AI literacies, GIS, and data visualization; she helps researchers and students develop their own data stories for publication and sharing. As a social scientist, she focuses on using computer modeling to analyze big data.

  • Samantha Shorey, Visiting Assistant Professor at the University of Pittsburgh. Samantha is a design researcher who studies automated technologies — such as AI and robots — in the workplace. In her research, she seeks to highlight the labor and innovation of people who are often overlooked in media narratives about new technologies.

Transcript

Dr. Brett Crawford 

Welcome to another episode of Tech in the Arts, the podcast series of the Arts Management and Technology Laboratory, also known as AMT Lab. The goal of our podcast series is to exchange ideas, discuss emerging technologies, and uncover things that arts managers, artists, and geeks like us want to know.

My name is Brett Ashley Crawford. I'm the executive director and publisher for AMT Lab. The following podcast is a recording from a unique experience we held here at Carnegie Mellon University: a data, cultures, and AI panel looking at data's influence on both the making and the managing of art. We hope that you enjoy this robust and fun discussion, and if you do, please share the conversation or follow us on our socials on Instagram, LinkedIn, or Facebook.

Ian Hawthorne

Welcome everybody. Thank you for choosing to spend some time here today with us. As Brett said, my name is Ian Hawthorne. I'm currently the lead researcher at the Arts Management Technology Lab here at Heinz College. Today I'm joined by three really wonderful guests who I'm extremely excited to share some conversation with today.

My guests are Samantha Shorey, who is a visiting assistant professor at the University of Pittsburgh in their Department of Communication and a fellow at the Roosevelt Institute. Dr. Shorey is a field researcher studying automation in the workplace and how innovation shapes the design, culture, and labor of technology. Shorey seeks to highlight the contributions of people who are often overlooked in our dominant cultural narratives about technology innovation, paying close attention to the creativity and ingenuity of workers, especially women.

Followed by Emma Slayton, who's the data education librarian at the Carnegie Mellon University Libraries. Dr. Slayton is an expert on data and AI literacies, and creates educational content to help students, faculty, and staff at CMU upskill in these areas. As the library's data visualization specialist, she teaches and consults on creating effective graphs and using data visualization software, and she programs workshops to promote the use of various data viz methods, tools, and techniques. Also, as an archeologist, Dr. Slayton has focused on using computer modeling to hypothesize the location of early canoe routes in the Caribbean. I found that fascinating, so I had to mention it. Okay.

And finally, we're joined by Eleanor Mattern, director of the Sara Fine Institute and teaching assistant professor at the University of Pittsburgh School of Computing and Information. Dr. Mattern's current work focuses on library engagement with data about our communities. Dr. Mattern is part of the IMLS-funded Civic Switchboard project, which aims to develop the capacity of academic and public libraries to be active partners in their civic data ecosystems.

Now, the broad aim of this panel is to investigate the changing landscape of the arts and culture amid the current innovations in generative AI, machine learning, and large language models.

While this panel is publicly advertised, our audience is composed primarily of practicing artists alongside students of the arts and entertainment industries, a field currently grappling with its own ability to share space with generative artificial intelligence. We live in a moment of heightened attention to our digital archives, labor being simultaneously exploited and eliminated by AI, alongside popular narratives of both utopian salvation and catastrophic extinction. It's front of mind for many. Now, machine learning might aid in unlocking the potential of our vast databases, but generative AI then encroaches on creative production and threatens the livelihoods of many creative workers, from graphic artists to musicians. All this, paired with the already divisive nature of social media and the growing trend toward algorithmic governance across the internet, signals an unclear future for our cultural institutions and our artists.

Our information ecosystems, workplaces, and creative practices are all witnessing great change. But as with the telegraph, radio, TV, and internet, this has been a long time coming and will continue to become part of our cultural tapestry. Questions of the navigation, use, and cultural value of such tools must now be considered.

Today I'd like to honor the expertise of our guests with a small handful of questions investigating these intersections before opening it up to you all.

So with that being said, I would like to jump right in and ask if all of you could just tell us a little bit about your research, some of the positive and negative effects on broader culture that you may have noticed in your respective spheres, related to AI or big data, et cetera.

And anyone can start.

Emma Slayton

I'll go for it.

Ian Hawthorne

Excellent. Thank you.

Emma Slayton

Yeah. Always first into the breach. So, the work that I do at Carnegie Mellon University Libraries is to think about how we, as the users of AI, but also the developers of AI, think critically about engaging with AI outputs as part of the narratives that we build for our research, for our lives. Because it's not just using AI as part of your systematic review; it's using AI to be able to determine how many calories are in your chicken paprikash. And one of the things that we're really determining is that there are a lot of people who are already using AI. We have students who are heavily engaging with it as part of their coursework—either approved by the professor or not—and we also have a lot of people from the public who are using AI, but neither of which really encapsulates a full understanding of what it means to be using AI as a tool rather than AI as an oracle, right? AI cannot tell you specifically any true answer to any question. It's a predictive method to engage with data, and the data that is entered into these models can impact what it tells you.

The methods that people have used to code these models, or direct the models, can also limit information or make it more accessible in certain ways, depending on the preferences of that creator. So how do we enable people to understand these processes of information selection and creation so that, when they're talking about their use of AI to others, they're doing it in a way that can be fully appreciated and understood by their audiences? Toward that end, we do a lot of teaching and outreach, hosting workshops for students, faculty, and also our community members here in Pittsburgh. So that's the work that's been occupying a lot of my time over the past month.
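Emma's description of AI as "a predictive method to engage with data" can be seen in miniature with a toy bigram model. The sketch below is a hypothetical illustration, not how production LLMs are built: it predicts each next word only from counts in its training text, so changing the training data changes the "answers."

```python
import random
from collections import defaultdict

def train_bigram(text: str) -> dict:
    """Count which word follows which; the model 'knows' only its training data."""
    counts = defaultdict(list)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(model: dict, start: str, length: int = 8) -> str:
    """Pick each next word by sampling what followed it in training.

    This is a prediction from data, not a looked-up truth.
    """
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Two hypothetical training corpora: same prompt, different data, different answers.
corpus_a = "the museum is open and the archive is open to the public"
corpus_b = "the museum is closed and the archive is closed to the public"
print(generate(train_bigram(corpus_a), "the"))
print(generate(train_bigram(corpus_b), "the"))
```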

Ian Hawthorne

Thank you.

Eleanor Mattern

Thank you so much for the invitation. My name's Eleanor Mattern. Like Emma, I have a library science background, and now I'm in a role where I teach library science at the University of Pittsburgh. But in addition to that role, I also have a position in a new initiative in our Office of the Provost, which is framed as Responsible Data Science at Pitt. And in that initiative I lead a program called the Responsibility Program, which is working toward building understandings of what responsible data work looks like in different contexts. And I think responsible data work, responsible AI work, is very contextual; what it means to be responsible with data and AI is very dependent on the type of position or role, or even the task that we're working on as part of that role. We are really focused on thinking about frameworks that can guide responsible data work or responsible AI work. And I'll just share an example of how we're working to think about this idea of a framework. A framework being a guide that we can refer to, that we can use to evaluate our work, that serves as a set of values that we're, you know, subscribing to. So one project of this responsible data science initiative brought together a group of 16 undergraduates, and in small groups they worked on developing a responsible data science framework for a specific context.

So one context was: what does responsible data science mean in the context of the study of addiction? This is a focus of the Responsible Data Science initiative. What does responsible data science mean in the context of the use of student data at the University of Pittsburgh by, you know, offices like our Office of the Registrar?

So these frameworks look very different again, depending on the focus of the work. That's an example of the way we're thinking about responsibility as very context dependent. And I also think there's an opportunity for us to think as individuals about our own relationships to AI and data work.

What sort of framework will guide, you know, ourselves? Like, what do we view as important? For me it's thinking about issues of transparency or accountability and bringing people affected by the work into the project. So I think this idea of frameworks is something that we're really engaged with in our work on responsibility at Pitt.

Ian Hawthorne

Excellent. Thank you so much.

Samantha Shorey 

I'm Samantha Shorey. I'm an assistant professor at the University of Pittsburgh. And, I'm a field researcher, which means that I spend a lot of time observing and talking to people as they're doing their daily practices. And in particular, I'm a field researcher of work.

I got interested in AI and automation through an interest in the way that artists, crafters, and makers were using tools like 3D printing, CAD software, and laser cutters to automate the aspects of their work that were tedious, time consuming, and subject to error.

Artists are often talked about as being totally separate from the world of manufacturing; you know, we hear that age-old dichotomy of art and commerce. But the way that automation was being used by people in their artistic practice pointed me to a lot of questions I had about the value of automation to take over jobs that are repetitive and potentially boring or dull. And this pointed me to the current research project that I've been working on, which is actually about something that's pretty distinct from where I began. It's been a study of how AI is being integrated into the field of trash management and recycling sorting, which is definitely a surprise.

But what I've learned through this process has really brought me full circle to thinking about what we're having to contend with in this introduction of generative AI and the moment of AI that we're in. The field research that I've been doing in highly automated manufacturing contexts and in waste sorting workplaces points us to the way that automation is rarely what it says it is. And I'm sure any of us who have used a self-checkout machine can attest to this, right? But when we think about automation in a workplace, we have to think about the labor that is required to compensate for these shortcomings: the way that people have to step in to correct machines, the way that people have to step in to improve them and to calibrate them to the really complex and changing circumstances where we do our work every day. These are all contributions, valued contributions, to technological systems that are often overlooked in our discourses. When we talk about the glorified work of OpenAI programmers and designers who are going to change the landscape that we're in, we often witness this bifurcation between the people that are creating these technologies and the people that are being impacted by them.

But what my work really points us towards is that we need to be thinking about design on a much longer continuum, where technologies are produced and then continually adapted and reinvented by the people who apply them in their daily work. And in that sense, there's a real locus of power in the practices that people engage AI with, and also in the norm building that we do as a community around the appropriate contexts of AI use: what we're going to allow AI to do for us, and how we're going to allow AI to be built.

Ian Hawthorne 

Thank you so much. I am so pleased with all of our panelists today. I'm very excited to get into further discussion and to open it up to the class eventually, but I would like to interject a little bit that I've noticed some trends. This is part of why I picked everybody here: all of you have this sort of exposure to archives, very broadly speaking.

I actually love that you brought up trash, because I have this weird thought sometimes that trash is really just “that which we haven't chosen to catalog.” But then you catalog it and you work with it, you know, in an AI system, and it brings about a ton of questions, you know.

There's a really interesting theory out there by a theorist, Johanna Drucker, who talks about data. Many of you might know her. “Data” comes from the Latin for “given”; it's that which is given, which tends to give data a lot of that implicit power that was discussed. You know, we see a data sheet, we see a data point, and it's a fact. And it's not actually a fact. It's simply what's given. And she actually advocates that it may be reworded as “capta,” meaning that which is captured. And I think that aspect of visibility and power is something I would love to hear more about going forward, because I see it as sort of latent in all of your practices.

But moving on to the next main field of thought I would like to investigate: in each of your opinions, what do you consider to be the current position of all of our cultural institutions under this new age of AI? Those can be anything from libraries to, as many people in this room are really invested in, things like museums and civic centers, all of which really engage with this sort of archiving process in different ways and different typologies. And so I'm just interested in each of your perspectives on where you think the current threats are, or opportunities, or if there are maybe even any new cultural institutions that are arising amid these changes too.

Eleanor Mattern 

Well, I can share, and I think maybe Emma and I will have some overlap, that, you know, libraries, academic and public, have long worked to build literacies or fluencies in different aspects. So information literacy or information fluency: helping patrons to be able to find, use, and evaluate information is a cornerstone of librarianship. That has extended into data literacy, and it has extended into AI literacy, where the same need exists for libraries to be able to support patrons in building understandings of, well, how can I assess whether this is information that I can trust?

What do I need to know about what's happening, you know, behind ChatGPT so that I can evaluate whether I can trust this response? So I see a major role for libraries there. I do see a threat in the sense that, as Ian mentioned in my introduction, the Institute of Museum and Library Services funds a lot of work, including a lot of really important work around AI literacy. That is an agency that is really being threatened at this point.

There was an executive order that said it would be dismantled. I've heard other messaging since then that it's being re-envisioned. Many of us with grants that support this type of work have received termination notices. So, you know, that's an existing threat. I think libraries play really central roles here.

I also think, just one additional comment, that there's an opportunity for — I do a lot of work around data literacy, but also for growing the capacity of librarians to work with data in different ways, in ways that make sense for their communities. And sometimes we hear some, you know, intimidation around those types of roles in libraries, or this idea that, oh, I'm not a data person.

This is a new area for us. So I think we also need to do some capacity building, which really means helping folks to see that this is an extension of what they have always done, right? And that they're well positioned in libraries to provide that support. So those are — and I am sure you have a lot to add.

Emma Slayton

Oh yeah. Chomping at the bit. And I think what you're talking about is very true, in that libraries do have this long tradition of not only providing access to information, but trying to help people provide context to information and understand what they're working with when they find a new resource, or they find something that's written, or a piece of “capta” that's collected that they want to understand.

And that is the same as we move into AI literacy, because, just as you say, a lot of librarians are having to reorient how they think about themselves to be data experts. Because we are; we all work with data every day in our lives, whether we think about it consciously or not.

From the research that we do, to the fact that Netflix knows more about my viewing interests than I do. And I think that when we are thinking about AI, it's the same phenomenon, in that many people are worried that we don't have a full understanding of “what does the tool mean? How can we implement it? What are we using it on?” People are beginning to use it and engage with it. It's going to be a consistent part of life, both in terms of our work, but also the cultural practices that we engage with. And having libraries there as a force to be an introduction to these literacies fits very well, as you say, with what we've been doing in the past.

And I also think it's the encouragement we need to just move forward and try to think about the critical aspects that we've always thought about with data: who is creating it? Who is collecting it, and why? What are the goals of that process? And it's the same here. Who is building these AI tools, and why?

What information have they chosen is important to include? And to get back to your question, Ian, about our cultural communities and resources: when we're thinking about the development of these AI practices, who is included in that data? I think a huge issue for the cultural impact that AI is going to have is thinking about these training data sets and who is represented within them.

And beyond that, why are certain resources being prioritized over others in the back ends of these AI models that drive people towards specific questions? I don't know if many of you have played around with ChatGPT; gosh knows I have. Sometimes I'll, you know, try to see what it will return, not because I think that it thinks anything, but because it's a predictive model, and how they curtail that predictive nature can be an interesting assessment of the values of the people who developed it. Sometimes if you try to ask it political questions, it's “oh, I can't talk about that. I can't offer an opinion.” And then you rephrase the question, and it's like, “well, let me tell you what I think about this.” So clearly there have been some parameters set up on the back end that are trying to remove AI as a cultural informer. But more and more people are engaging with AI, and we are all, as we use AI, also training AI at the same time. So there are those issues in terms of how we view AI as a cultural phenomenon that is impacting how we even view the way that we think about information and engage with data. I think that, as you say, there are a lot of institutions that are now going to be using AI. We were just talking before the podcast started about how people are starting to integrate AI into different workspaces, right? You mentioned trash collecting as well. So there are a lot of different communities, within local governments and federal government institutions, that are taking on these AI tools without maybe thinking critically about how those have been trained or what they've been trained on.

And I think the one thing I'll say, because I've also talked way too long, which is my habit, is that when we think about AI and the information that it's been trained on and how that impacts cultural use, we are now in a space where the government is going to be limiting access to data or privatizing access to data. We see one example of this in our spatial information. NOAA, our National Oceanic and Atmospheric Administration, provides a lot of our weather information and services for free to the public, but they have also started to crack down on what information you're allowed to download, or what information is findable or accessible. So it's not just about the information that's captured and used to train these models. It's about the information that's been made accessible for us to effectively train or understand the outputs of AI in a way that's going to be contextualized to the culture that we're trying to understand.

Ian Hawthorne 

I actually have an interjection, just because it popped into mind when you talked about the parameters of its outputs and how it's trained. This is a total aside, but how many of you know that in Google, if you do a search, it gives that AI summary at the top? If you look up something with a swear word in it, it does not generate; you can skip that. Yeah, so it winds the clock back like five years. You can also do like a negative search on the AI, because it works with boolean operators.

Emma Slayton 

Well, and my favorite for that too is that, for a while, Google was pulling data from one of our largest free information resources, Reddit. Which is a great way to learn true facts - from Reddit. And I think there was one joke going around for a while that you could ask, what's the best way to guarantee cheesy pizza?

Ian Hawthorne

Yeah. Oh. Mm-hmm. 

Emma Slayton 

And it's, you know, if you glue the cheese to the pizza, that's a great solution. And when you trust sarcastic people on Reddit to be honest with you, that can lead to a lot of issues. So yeah, the way that people are shown this information, that's a huge cultural movement. Absolutely. How we access information, because people are treating ChatGPT like Google. And ChatGPT is not Google. It was not built to be like Google, to be used like Google. And my dream is that one day, instead of having the one response at the top of the Google search bar, we'll have two. And the AI is showing you two different responses: how would you like to evaluate which one fits your needs better?

Ian Hawthorne 

Give a review?

Emma Slayton

Yes, absolutely, because even if you're shown two options, then you're going to stop treating it like, “ah, this is a fact that has been returned to me.” It's like, “oh, there's variance.”

Ian Hawthorne

That's interesting.

Emma Slayton

And it's free A/B testing. I mean, I don't want them to treat us like the product. So how do we get into that?

Samantha Shorey

When we think about the opportunities—and this has been so generative, thinking about this—the opportunities in culture, especially in cultural institutions, one of the things that I think is really important to keep in mind is that we have culture out here, like culture in our world, but every workplace and every organization also has a culture. That culture is far more within our ability to modify or to co-build, because it's smaller, right? And because we're directly connected to the people who are building it. A lot of the time, our smaller organizational cultures and our smaller workplace cultures, and I see this in all of the sites that I research, are driven forward by an understanding of what they think is out there in the big culture, and they make decisions backwards from that. They're afraid that they're going to get left behind in the AI revolution if they don't just adopt a tool. If they don't just apply it to their work, they'll be living in the stone age, because culture out there will have moved on.

And when we fall into that FOMO thinking, that “fear of missing out” thinking, we let go of a huge locus of our power. And when we think about museums and artistic organizations, these are people that have a huge stake in whether or not these tools should be used in this type of institution.

Because we know that these tools are trained on work that has been stolen from people like you. And so when we look at things like the Hollywood writers' strike, we see a type of work, an institution, where people have come together and set very clear parameters around how AI can be used in their field, in their work, in their organization, in their culture.

That really points us to some of these questions that we need to be asking ourselves. And this kind of coalition building, this collaborative work that we can really do, actually creates an AI opportunity space, not just an AI inevitability space where we just have to take it on and move it forward. Instead we can ask: what is it that this could be useful for?

How could this aid our work? And how might this actually very much hurt the goals that we are trying to achieve within our institution? 

Emma Slayton

No, absolutely. And thinking about it, I never thought my archeology degree would have so much relevance in this conversation, which is hilarious. But the way that we think about archeology, right, is that you've developed a tool and now you're learning as a community: How do we further develop that tool? How do we use it in different circumstances?

How do we train others on the tool, or bring them into our process? Which is exactly what you're talking about in the change for automation. How are we incorporating AI into our practices? How are we using the tools in ways that are going to be more effective for us? And also, to your point, how are we making sure that, as users, we're being very critical about how we employ it? Because we understand that there are certain ethical or practical concerns that come with it. And it's that thought about how we can make the AI systems work for us, how we can shape the spaces where we use AI, that's where we can leverage real success, especially for artistic communities that have so much stake in the game, as you said.

Ian Hawthorne

Mm-hmm. I think it's interesting, too, that all of you at some point so far have touched on the way we frame the narratives being told around AI and the stories we tell about it. Because I think anybody who's reading the news or participating in culture knows that the moment AI came to be, everybody was like, oh, it's doomsday, or, oh, it's utopia, or it's inevitable.

This shift from inevitability to opportunity I think is really important, even just from the storyteller's perspective. And I'm wondering if that could be an avenue of real importance for artists and storytellers going forward, in terms of framing our use of it and being proactive about how we talk about it too.

So I find that fascinating. I have one last, more lighthearted question before I open it up to the room, because I'm sure other people have plenty of questions, even though I have about a million more. So I'm just interested: what's the most fun thing you have done, or have seen done, with any AI tool recently?

Emma Slayton 

Okay. So I teach an AI literacy workshop with a colleague here at CMU, Nikolas Martelaro from the HCII, and we have developed some fun activities that you can do at home to test your AI skills, and also the skills of AI to actually solve your problems. One of those is actually opening up some different AI tools (we were using ChatGPT, Gemini, Copilot, and I also got into Claude) and asking each the same question, the same prompt, to see what the responses are, the different things they'll point you to, the information each one knows or thinks it knows, whether it's hallucinating. And then thinking about, you know, how that framework might have changed the response to fit your specific needs. A great example of this that we tested out with our students, and I'm sure some of you in the room or listening might be aware of, is the Rs in strawberry question.

How many Rs are there in the word strawberry? If you ask AI this, it cannot tell you. Even if you ask it to spell out strawberry, which it can do, it just can't find the missing third R. Except for Claude, which has now gone in on the back end and said: if someone asks how many Rs are in strawberry, you should say this. The AI tool doesn't know how to do it, but a developer went, we should probably fix this, because people are doing this now.

Ian Hawthorne

They had to hard-code it.

Emma Slayton

Yes, absolutely. And I think it's a great opportunity for anyone who's interested in, you know, evaluating the different AI tools that you use.

Try changing the tool but keeping the same prompt, and think critically about what you're actually trying to get out of the AI and who it's speaking to. It's a fun game you can play at home.
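Emma's same-prompt, different-tool exercise is easy to script against two of the providers she names. A minimal sketch, assuming the official openai and anthropic Python SDKs are installed and API keys are set in the environment; the model names are placeholders to swap for whatever is current:

```python
# pip install openai anthropic
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
import anthropic
from openai import OpenAI

PROMPT = "How many r's are in the word 'strawberry'? Answer with just a number."

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=50,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# Same prompt, different tools; compare the answers yourself (the correct count is 3).
for name, ask in [("OpenAI", ask_openai), ("Claude", ask_claude)]:
    print(f"{name}: {ask(PROMPT)}")
```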

Samantha Shorey

I'm an amateur lepidopterist, which means that I love butterflies and moths. I just moved here from the southwest region of the United States; I was living in Texas before this. We had amazing butterflies there, and I'm waiting to see what kind of butterflies we have here in Pittsburgh. But I love the bug identification feature in my iPhone photos. The only thing I've used it on here in Pittsburgh so far is to identify a house centipede. So I'm very hopeful that the butterfly identification will be useful here in the summer.

Emma Slayton

I hope so too. And that they're not just all lanternflies.

Ian Hawthorne

Those ID apps are so fun too. My father-in-law likes doing it with plants or bird songs; there are ones where you can record what's singing. We were just sitting out on the porch this weekend and he was like, you know, this is what's singing right now.

And I'm like, I don't know what bird that is. So…

Eleanor Mattern

Merlin. Yeah. It's that. 

Emma Slayton

I use it a lot too. It's so good. Cornell. It's really fun to do.

Samantha Shorey

Sometimes I'm like, technology is good, you know? 

Emma Slayton

When you're using the right tool for the question. Exactly. Yes. Yeah. 

Eleanor Mattern

Yeah. The Cornell apps for birding also were such a good learning tool for me when I was starting to bird. But yesterday I was, you know, thinking about the opportunity to meet with all of you, and I knew, oh my gosh, I'm gonna meet with a group of artists tomorrow. And I was thinking about AI through the context of the arts. I am a very amateur, novice weaver. Is anybody a weaver? Yeah.

And I was curious. You know, with weaving, you work from a draft, you work from a pattern. And I was curious how ChatGPT would do, if it would be able to produce a pattern. And you know, I prompted it, and it offered back a pattern. It was even able to offer it in the draft format that, you know, we're accustomed to interacting with. But then I was reflecting on this idea of literacy, because I'm a very novice weaver.

So I looked at that and I thought, oh wow, it was able to do it, but I don't have the expertise to evaluate that output without actually sitting at a loom and trying it out. You know, maybe somebody who is a more experienced weaver would be able to assess that, but without that sort of literacy, you would have to, you know, give it a try.

And it might not be a pattern that would actually work. But, yeah, that was my little experiment. It was seemingly able to do it; I don't know if it was actually able to do it. Yeah, yeah.

Ian Hawthorne

Well, and that's so important too. Like we're talking about, this whole AI thing is not one-sided; it's a cyclical process, right? And I love that you brought it up in the context of learning a skill, particularly in art, right?

Because like, I'm sure a lot of people learn art on their own, you know, through YouTube videos or they're just practicing. But it's like, it takes that sort of iteration to really feel that you're succeeding at that instead of just saying, oh, AI told me how to do this and I can do it now.

Yeah. That's lovely. So thank you all for answering my questions. Like I said, I would love to pick your brains more after this, but there are about 15-ish minutes left. I'd love to throw it out to the room. If anybody has any burning questions they'd like to ask, we can start taking them. Yes. 

Vivian Ma

I'll try to frame the question. When we're thinking about scientific thinking in general, or data, we are associating those terms with capital-T Truth. On the contrary, when we are thinking about arts and culture, we are thinking about multiplicity, or something completely different from the words that we would use to describe science and data. Nowadays we've seen a lot of examples of artists utilizing AI, utilizing data and technologies, imbuing technologies into the arts. But I'm always curious: why wouldn't it go the other way around? Why aren't we imbuing arts and humanities thinking into the development of AI models, of data literacy? I wonder what's your take on this, and what do you think would be the impact when we bring those uncertainties and that diversity of thinking into something that's deemed scientific? And what will be its impact back on the arts and culture?

Eleanor Mattern 

It's a great question. I think bringing a critical lens to AI is important. It's a domain-agnostic, you know, important thing to do. Whether you're a scientist or you're an artist, that critical lens to AI is really important.

I think, you know, as a practice: I designed a course on responsible data science for Pitt, which has a new master's in data science. And something that we talk about a lot in that course is the idea of documentation throughout the life cycle of developing AI or doing a data science project. And I think that there are lots of different ways to document, you know, processes. One type of documentation that has gotten attention is the idea of a datasheet, which brings a critical lens to what the data is and what decisions were made in the creation of the AI.

And that sits beside, you know, the AI or the algorithm, so that somebody could get behind it, understand it, and be brought into the processes and the decision making. So, I mean, I don't know that I'm answering your question in a very precise way, but I think there are strategies that we can use to bring transparency to the process, strategies that scientists and computer scientists and developers can use that allow somebody to bring that critical lens to their work. And I think documentation... I mean, I'm a librarian. I love documentation. I think documentation's a good way to do it.
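For a concrete picture of the documentation Eleanor describes, here is a minimal sketch of a datasheet in the spirit of Gebru et al.'s "Datasheets for Datasets," written as a small Python structure. The field names and example are illustrative, not a standard schema; real datasheets answer these as prose questions.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Illustrative documentation that travels alongside a dataset or model."""
    name: str
    motivation: str           # Why was the dataset created, and who funded it?
    composition: str          # What do instances represent? Who is included or excluded?
    collection_process: str   # How was the data gathered, and with what consent?
    known_limitations: list = field(default_factory=list)
    discouraged_uses: list = field(default_factory=list)

# Hypothetical example for an arts-attendance dataset.
sheet = Datasheet(
    name="community-arts-attendance",
    motivation="Study attendance patterns at free public arts events.",
    composition="One row per ticket scan; free walk-ins are not captured.",
    collection_process="Exported from box-office software; no demographic fields collected.",
    known_limitations=["Undercounts walk-in attendance", "Single city, single season"],
    discouraged_uses=["Inferring individual identities", "Citywide demographic claims"],
)
print(sheet.known_limitations)
```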

Emma Slayton 

And also a thing that is not always focused on. Yeah. When we're talking about new technologies, I think one of the struggles that we have with the development of these scientific tools, as we could put them, is that, you know, we're always trying to develop the next thing. How do we improve? How do we make it better? What can we do next, without thinking back towards: what have we done? Did it work? Who did it work for? Why? And that's something where you can use these data management principles to discover exactly what worked, what the different versions were, how they iterated.

And I think that's something we can see, related to your question of culture, in some of the development of AI, specifically AI image generation. You had a circumstance where, because of the training data sets that exist, and people just trying to train something as quickly as possible so we can get the tool out, because the tool is cool: oh no, now all of our AI are only making photos of white guys. What do we do about this? Okay, well, a solution to that problem is that we'll hardwire it into the system, because we want to show diversity, and diversity is cool, so we'll have all the photos become diverse. Well, great, except now someone's asking, can you make me a picture of a Nazi?

And you get a picture of a black Nazi, which, for those of you who aren't familiar, is historically anachronistic. Sometimes we have a gap between what can we do and what should we do. And I think that's where humanities thinkers, thinkers from the arts, can really pay great dividends.

Because not only are you coming through and thinking about process, but you're thinking about context. And that's where we, as people in the humanities, can really help to guide the future use of AI or the future development of AI, if we can get in the room, because again, we're struggling against that instinct to do what's next.

Samantha Shorey

I can take another question. I have thoughts on this, but I want to make sure everyone gets to ask questions.

Christine Rivera

I'm curious, as many of us are preparing to graduate, we're in “launch” mode in so many ways. And I'm curious what your thoughts are on getting into those rooms. What are the things, broadly speaking, that we as artists, creatives, and also managers and thinkers can do, from a cultural level to a workplace policy level? What are the things you see being really effective, and what are the priorities?

Eleanor Mattern 

Gosh 

Emma Slayton

I think a lot of it comes down to relationship building, and I know that is the tedious work, but showing up to spaces where, you know, people are having these conversations can be good. Pointing out the flaws in systems: the whole reason there was an image of a black Nazi is because people complained that it wasn't diverse, right?

So we can make our voices heard through social campaigning, through reaching out to companies and marking our distress. And speaking of cultural institutions, places like museums that reach out to communities to begin with, where there's already a community focus, are great ways to try to launch or modify programs to better fit multiple communities. I also think that places like academia, where I can go to the HCII department and be like, hi, you guys sound fun, wanna work together?, can be a great way to find these spaces where you can collaborate. And I think it's in that drive for developing community, figuring out who is in the room, and trying to reach out to them, which is just hard. It's not easy. And I wish I had a better answer, because then I would be in all the rooms. Yeah. But...

Eleanor Mattern

Yeah, getting in the room with the folks who are developing the technology, I think, is maybe also what you're speaking to. And it can be exhausting to be advocating and saying, we need to be in the room.

Like, you want to be invited in. But I do agree with Emma; I think demonstrating that we have a perspective that is important, in different ways, individually by bringing our perspectives, but also at the types of organizations that we might find ourselves in, like museums, to, you know, share our position, is a strategy. But I don't have an answer to how we get in the room with the developers. I'd be curious if there are thoughts from the group.

Samantha Shorey

So, before coming to academia, I worked briefly at the tech startup Airbnb. And I took so many lessons from that experience, but one of the things that I took from it was that, you know, we had, of course, amazing and totally built-out technical teams to make the apps and things like that, and the website; it's a tech company, right? And then we had the statisticians who took user data and crunched it. But then we had the research and design arms, which were filled with people who had degrees in comparative literature and studio art.

And a lot of them had, you know, had the opportunity to get in the door when the company was building, and so the bar for entry wasn't as high. But everybody in this room is coming from an institution that is the gold standard when it comes to training for technology companies.

They're gonna recognize the institution you're coming from, CMU. They know that. And so I think it's worth understanding that a little less than a quarter of jobs in technology companies are non-engineering jobs; they're non-technical. And so thinking about what types of opportunities you have for being what they call cross-functional partners, working in design or in research, is a great way to start being able to work your way into that conversation.

And to move the needle in a way that is incremental, if that's something that you're comfortable with; my husband always calls it going in through the front door. And so, yeah, being in collaboration in this space, I think, is something that is more available than perhaps you realize, with that CMU moniker.

Eleanor Mattern

That's so great. And I would also add that I think there's a responsibility on the part of computer science programs, programs that are training technologists, to bring attention to how important it is to have other voices represented in technology development. And that includes, you know, voices of people who are impacted by the tech, and involving them during the process, during the life cycle of AI development, in deployment and testing. So I think there's a responsibility for schools like those that we're all part of to also integrate and bring attention to how important it is to have other perspectives formally in the room and engaged during development.

Ian Hawthorne

Those are really excellent points. I'm wondering how many arts management students are now reconsidering new paths in tech. Possibly. Yes?

Student

I'm curious to hear your thoughts on how AI is going to change the nature of work for artists. We are seeing AI being used to create art, create music, and so on. So how do you think the way that artists work is going to be changed by this technology? And do you think it is going to help improve the creativity of artists, or do you think it's going to be counterproductive?

Samantha Shorey

I think we're looking at a trade-off around repetition; that's the little piece I'm going to focus on now.

AI offers us an ability to automate aspects of our creative process that are repetitive, because these technologies work best in situations that are well-defined and similar over and over again, right? And so our repetitive actions are well matched to a tool that can act in a repetitive context.

So there's that benefit. One of the things, and you guys are gonna hear this over and over again, now that I've said it out loud you're gonna see it everywhere if you haven't already, is this idea that by automating our repetitive work, we free up our cognitive ability to do higher value labor, more creative labor, to do the parts of our creative process that we love doing. I actually think that sounds great. But one of the things that we have to ask around repetition actually came up for me when Nora was talking about weaving. And it's something I also heard when I was speaking on a panel in Washington, DC, where they were talking about how there's this big concern for judicial law clerks, because now the repetitive work that clerks do can be automated.

However, it's by doing that repetitive work, putting in the reps, as they call it, that we learn about any process. And so there's this idea, and craft work teaches us this best of all the arts, that by doing these things that are repetitive, that's sometimes where we access flow and we unlock greater ideas.

That's sometimes how we learn about how materials work, and how we come to understand experimentation. And so I do think there's a real trade-off, one that we're not talking about enough in the artistic process, around the value of just automating what is repetitive, because repetition is part of the process. And so that's the big thing I'm thinking about, about how artists' work might be changed.

Eleanor Mattern

I just wanna surface a study that was done by somebody who's part of Heinz College: Kaitlyn Wilson, who's a PhD student here, I think in information systems. She did a qualitative study with artists, asking these types of questions. I'm not a working artist, so I feel a little out of my depth actually speaking to this question, and I bet there are folks in this room who would have really important and thoughtful perspectives on this.

I know that they would. But this study speaks to, as Sam was speaking to, the things that are removed from repetition here and what isn't captured in AI work. The artists that were interviewed really talked a lot about how the agency of the artist is not present in AI; any sort of intellectual control or form of expression really is not there. It's just derivative, right? And not even derivative in a way where we're learning from creating derivatives, like, you know, the old practice of learning by creating copies. In this case, we're not learning through the act of copying.

That's being done by generative AI. So I would just point you to that study as maybe something that would offer some interesting perspectives. It's in a journal called First Monday, in a current issue.

Ian Hawthorne

We have to wrap it up. So thank you all for all the questions that you've been able to bring to this. This was one of the most illuminating discussions I've been a part of recently. Thank you to the panelists for joining us today and choosing to share a bit of your time with us, as well as your knowledge.