Ghostbots & Griefbots: The Morality of Digital Permanence

In this episode, Liz and summer social media manager Maraika Lumholdt speak with Jacob Gursky, a privacy engineering student at Carnegie Mellon University, about ghost bots, grief bots, and their role in the arts. The conversation covers the complexity of these digital creations, how they are used in different settings, and their morality.

Jacob Gursky: [00:00:00] It's amazing. Like, we live in the future, and it's okay to take joy in that sometimes and be aware of the consequences.

Liz Forrey: Welcome to an interview episode of Tech in the Arts, brought to you by the Arts Management and Technology Lab at Carnegie Mellon University. My name is Elizabeth Forrey, the chief editor of research, and I'm joined today by our summer social media and marketing manager, Maraika Lumholdt.

Maraika Lumholdt: Hello!

Liz Forrey: We're here to speak with Jacob Gursky, a student of privacy engineering at Carnegie Mellon University and a former researcher at the University of Texas.

Maraika Lumholdt: Today, we're talking with Jacob about ghost bots and grief bots. Liz, can you tell us about the technology that powers ghost bots and grief bots?

Liz Forrey: Yeah. So generative AI is a type of artificial intelligence that uses machine learning algorithms to generate artificial content. It's fake content, but it reads as real, quote unquote, because it takes existing audio files, media files, videos, images, or anything of that sort from a specific individual and analyzes them, making new content that is believable and hyper-realistic to what that person would have said or done or looked like.

So generative AI creates new content that is completely believable as content from the person it has been curated from. But could you tell us a bit about ghost bots and grief bots, since that's what this generative AI is applied to, and what that means?
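To make that description concrete, here is a minimal sketch of the pattern Liz describes: condition a generative language model on samples of a person's writing, then sample a continuation "in their voice." The episode names no toolkit, so this sketch uses the open-source Hugging Face transformers library and the small GPT-2 model purely for illustration, and the archive of past messages below is invented.

```python
# A minimal sketch, not any product's actual pipeline. We prompt a small
# pretrained language model with a (made-up) archive of someone's messages
# and ask it to continue in the same voice.
from transformers import pipeline

# Hypothetical archive of things the person actually wrote.
past_messages = [
    "Saw a heron by the river this morning. Made my whole week.",
    "Don't forget: pierogies on Sunday. I'm making extra for you.",
]

generator = pipeline("text-generation", model="gpt2")

# Condition on the archive and sample a plausible-sounding continuation.
prompt = "\n".join(past_messages) + "\n"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

A real product would fine-tune on a far larger archive of recordings and writing rather than prompting with two lines, but the loop is the same: analyze what the person produced, then generate believable new content from it.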

Maraika Lumholdt: Yeah. So ghost bots take those inputs you just spoke to and create new content via AI, [00:02:00] but specifically around a person who is dead. So we're essentially creating a digital, AI-powered version of them today, after they're no longer physically alive. And in the case of grief bots, they're used specifically to help grieving people.

So grieving people can create a digital version, a grief bot, of someone that they love and have lost. All grief bots are ghost bots, and some ghost bots are grief bots. So let's hear more from Jacob Gursky, who researched these applications.

Liz Forrey: We'll just start off with our first question. Can you take a moment to explain the technology framing your research into ghost bots, which is generative AI, with a couple of examples for our listeners?

Jacob Gursky: So for generative text and generative AI, a couple of examples really stand out to me. [00:03:00] One would be a project called AI Dungeon, which was a generative text model of, um, almost Dungeons and Dragons.

Like, you could create any fantasy world you want and explore it. You'd say "explore," or "stab the troll," or "make friends with the troll," and it responds. In a classic text adventure that would all be pre-written, but here you can build these models that generate the adventure for you in this AI Dungeon world. And it sort of led to this ideological war of: the more we put into this AI, the more it's going to produce things like this.

And also, maybe it's producing some things that are misogynistic, and is that just a reflection of the people who are using it? It's this really interesting case study, but that's an example that stands out to me.

Another one that really sticks out to me is from the time I worked at the University of Texas for a while. We hosted this event called CogSec, which I definitely recommend [00:04:00] looking up. It stands for cognitive security, and we had all these guest speakers come in and talk about protecting people from systematic disinformation.

One of the speakers was a student who, I think, did this during his freshman year as a final project, and he kind of got in some trouble for it. When regulators are making new regulations, they have to post those things online for a certain amount of time for public comments.

And so what this student did was generate comments. I think it was something like, if there were 80,000 comments made on this law, 60,000 were his. He was able to generate unique comments with valence, like "this law is terrible" or "this law is great," and he did it without permission.

But he marked them all, and he wrote a letter to the regulators afterwards that said: hey, I did this; you can control-F for this specific text string and remove all the fake ones. Ideally, you know, you get permission for that sort of thing. He was probably more gray hat than white hat.

[00:05:00] Yeah, he proved that this could be done. And as the counterpoint to the benefits of the technology: you can generate entire fake articles that look real, and people never read past the first two paragraphs, so you don't realize it. Or you can generate these fake comments as an attack vector to sort of usurp our democracy, which is a phrase I hope doesn't come up again, but probably will.
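As an illustration of the marking trick Jacob describes, here is a minimal sketch: every generated comment embeds a fixed, searchable string, so the agency can "control-F" for it and drop the fakes. The marker, the template, and the comments are all invented here; the episode doesn't say what the student actually used.

```python
# A minimal sketch of watermarking generated comments with a searchable tag.
MARKER = "xq7-synthetic"  # hypothetical marker string; unobtrusive but findable

def make_fake_comment(stance: str, i: int) -> str:
    """Template-generate a comment with the requested valence, tagged."""
    body = f"As a concerned citizen, I believe this rule is {stance}."
    return f"{body} (ref {MARKER}-{i})"

comments = [make_fake_comment("terrible", i) for i in range(3)]
comments.append("I genuinely support this rule as written.")

# The cleanup step the student's letter proposed: search for the marker
# and keep only the comments that lack it.
real_only = [c for c in comments if MARKER not in c]
print(real_only)  # only the genuine comment survives
```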

Liz Forrey: And in that realm of things, you've obviously talked a little bit about the harm generative AI does do. Besides the examples you've just given, can you talk about what ghost bots are specifically, how they're different from generative AI itself, or why they're a different class of generative AI? And are there any psychological problems you've uncovered in your research regarding this?

Jacob Gursky: Yeah, the idea is that you create a version of a person that you could speak with after they've passed away. [00:06:00] And if you've seen Black Mirror, there's a really great episode about a woman whose husband passes away, and by the end of the episode she's recreated him.

Liz Forrey: I've seen the episode. It's scary.

Jacob Gursky: Yeah, that episode doesn't really become fiction until she 3D-prints him a new body. The technology for the rest of it is in development, or is already possible. So there's a couple of companies that do this, or are working on it. The three main players that I found were HereAfter AI; Eternime, a startup that seems to have just disappeared, from what I could tell; and Replika. They're all different in their execution of this, in ways I think would be really interesting to compare. And then there's another one that I found called Mindbank AI.

HereAfter was started by somebody whose father was passing away, and he wanted to preserve a way for his grandchildren to interact with him. And Replika was started by somebody whose partner passed [00:07:00] away. With HereAfter, it doesn't seem like there's that much actually generative going on. It's more like a really, really advanced diary: you can ask it questions and it will respond, but it's this branching tree of prerecorded responses. From what I could tell, it doesn't really generate conversation.

Whereas Replika is more generative. But Replika sort of has this weird middle ground where it doesn't directly say it can recreate you. It just says it will create you a friend. And so people who pour time into Replika get stuck there. Well, I don't know if that's the right word for it, but they really get intimately connected with this digital person.

And that's because the digital person is in a feedback loop with them, learning from them and becoming their friend. But my take on Replika is that the marketing is careful not to claim that it's [00:08:00] recreating someone directly, whereas HereAfter's is. So I feel like the technology, at least in the commercial sector, is in this weird middle ground: the pieces exist, but the generative stuff isn't quite there to feasibly create a generative model of your loved one. You can still pay to create a sort of advanced diary, though.
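Here is a minimal sketch of the "branching tree of prerecorded responses" design Jacob describes. The structure, field names, and sample answers are hypothetical, not HereAfter's actual code; the point is that conversation is navigation through recordings, not generation.

```python
# A minimal sketch of a prerecorded, branching-tree "advanced diary."
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    question: str                    # what the listener can ask
    recorded_answer: str             # prerecorded response, never generated
    branches: list["StoryNode"] = field(default_factory=list)

# Hypothetical recordings a family might capture.
root = StoryNode(
    question="Tell me about your childhood.",
    recorded_answer="I grew up on a farm outside Pittsburgh...",
    branches=[
        StoryNode("What was the farm like?", "We had two stubborn goats..."),
        StoryNode("Who were your friends?", "My best friend was Eddie..."),
    ],
)

def converse(node: StoryNode) -> None:
    """Walk the tree: play an answer, offer follow-ups, stop at a leaf."""
    while True:
        print(node.recorded_answer)
        if not node.branches:
            break
        for i, child in enumerate(node.branches):
            print(f"  [{i}] {child.question}")
        node = node.branches[int(input("Ask: "))]

converse(root)
```

Every reply already exists before the conversation starts, which is exactly why Jacob contrasts this design with a generative model of a loved one.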

There were different tiers of how much of your loved one you could save, depending on how much you were willing to pay. You could pay a flat rate, but there were also subscription services, like Netflix. And it makes me wonder: if one day I have to choose between paying my Netflix subscription and being able to talk with my deceased grandparent, I don't know how I feel about that.

So I see these apps the way I see everything right now: the technology isn't bad, but I already see them falling into the traps of the business [00:09:00] models and incentive structures that have defined the technology we've built so far.

Liz Forrey: Wow. Thank you for that. Yeah, a lot to think about.

Maraika Lumholdt: So you've already touched on some of the moral and ethical implications of applications of generative AI, but it does seem like there are differences in how it's used. I know something we've talked about is Dalí Lives at the Salvador Dalí Museum, which, for listeners who are unaware, is an AI-powered recreation of the artist Salvador Dalí that visitors can interact with.

So that's used in an educational setting, to help people get an understanding of a historical figure, an artist. But then there are other applications, like you touched on with some of the apps, that are used as a companion in a much more private setting. Do you think there are differences in morality among those different kinds of applications of generative AI?

Jacob Gursky: Yes. I think [00:10:00] that there is a difference. I'm not saying that either one is right or wrong necessarily. I think what you have to look at is why it was possible to create Dalí Lives, and why it's not necessarily possible to do that with somebody else who lived contemporaneously with him, because to build these models of people, you have to have a lot of information about them.

So for Dalí Lives, because he was such a public figure, they were able to find so many recordings and things that he wrote. But that's sort of the standard: only these people who are hyper in the public eye have this much information about them, so this can be done. Whether or not it's ethical to do that, I don't know.

Is it different because he had already, on purpose, signed his life over to being a public figure? I don't know. So with Dalí Lives, it's a little bit different because he's already in the public eye and you have all this information. But now you have to think about how [00:11:00] all that information exists about the rest of us too. Probably more information exists about us in this podcast right now than exists about Dalí, and we don't have any control over a lot of it.

So I think the moral questions are really: who gets to build these models, what are they doing with them, and are they kind of already out there? And is it important that they're accurate? There's a school of thought about digital doppelgangers, which is this idea that your data creates another version of you that has no rights.

It can be passed around to different companies, and it might not even be accurate to you. But that version of you, which you don't have control over and which might not even be accurate, is used to make decisions about you.

So in a way, these AI-generated yous are a very upfront, in-your-face version of something that you could argue has already happened.

So I think that [00:12:00] it's pretty inseparable from who will own and control these versions of yourself, where the data comes from, and whether you have recourse, especially after you've passed away. There are schools of thought around post-mortem privacy and things like that, which I think are really important, but you don't even have a lot of rights while you're alive over the way your data is used to create versions of you.

So in a way, I think that it's a very interesting technology, but it's also a very useful pedagogical tool to make you think about how your data is a part of you. It's fundamental to who you are, and it affects you in ways that might not be obvious but can have huge impacts on your life.

Maraika Lumholdt: With these privacy concerns, do you think the biggest concern is what you were just saying, about the individual the generative AI is based on, and how things could be misconstrued or misrepresented from who they actually were, without their [00:13:00] permission? Or do you think there are also concerns about the living person who is, quote unquote, owning that grief bot, for example, or another application of it?

Jacob Gursky: Yeah, I think so. They're not just called ghost bots.

They're also called grief bots, which I think is even more indicative of why this could be dangerous. Technology right now has a long, proud tradition of being predatory toward people in their weakest moments. And I'm not just pulling this out of nowhere. A couple years ago I was on a road trip across the United States, and I ended up on Stanford's campus for a day.

When I'm on a new campus, I like to just follow groups of students around, not at a creepy distance, but just to figure out where people are going to class, and whether a class is big enough that I could sit in on a lecture for 40 minutes and learn about something I might not have otherwise.

In this one class I sat in on, the guest speaker had worked for, I think, Instagram and Netflix and maybe even [00:14:00] Facebook. She was cycling through what you need to have in mind if you're trying to get people to adopt your technology.

And she brought up Maslow's hierarchy of needs, and she pointed to the bottom of it, the base human needs, and said: you need to appeal to this. And I just thought that was wildly dystopic. We have all of this technology, and she wasn't saying it in the sense of, oh, use technology to help people find housing or food or something.

She was saying, no, if you want to break into this market, you have to appeal to people on their most base, vulnerable level. That was my take, anyway. And I think these grief bots hit people in exactly that space. Losing someone you love is an incredibly vulnerable and difficult position.

Liz Forrey: Wow. So in the realm of everything you just spoke about, including privacy and [00:15:00] protecting people, especially grieving people, because this tech is exploiting people at their most basic level of need, or that is an intention: what kind of privacy laws or self-implemented practices do you think would need to be created to avoid some of these privacy issues with the ghost bots and the grief bots and other generative AI?

Jacob Gursky: Well, the first thing that pops into my head is the fair information practice principles, because most of my degree is memorizing different frameworks for thinking about privacy, since nobody has a single definition of it.

Those are notice, choice, access, security, and enforcement. Not that that's the only framework, and man, I hope those are actually the FIPs, or otherwise all the privacy engineers are going to be after me. But maybe we could do a thought experiment together: what would each of those things look like for [00:16:00] a technology like this?

So notice would have to happen before the person passed away, right? You'd have to let people know that this is happening. And one of the key things is the ability to withdraw consent. How do you do that after you pass away?

A lot of the principles are just the basic things that need to happen with any of our data, with the added difficulty of somebody deceased being involved. So I think this is not a unique conversation. It's very similar to what's already being discussed about people's Facebook accounts after they've passed away, and things like that.

So getting clear notice from the person this is being built about is the first and foremost thing. Then there's having somebody trusted who can make those decisions after they've passed away, and making the limitations of this really clear, especially if it becomes something that persists over time. Because eventually the people who [00:17:00] knew you, who could say, "No, Jacob would never say that, ghost Jacob," or…

"Ghost Jacob, I don't remember you loving Coca-Cola so much," won't be around. So eventually the only thing that will be left of you is this version, and it will inform people's memories.

One of the key rights that people are trying to build into laws right now is the right to fix data that's inaccurate: to know what the data is, and to know how to fix it. And there are key issues with that when you're dealing with somebody who's deceased. I guess the answer is, I don't really know, but it's not that separate from conversations and innovations that are already happening in the legal space around this.

The CCPA, [00:18:00] the California privacy law, gives you the ability to grant a third party the right to execute your data rights in various ways, including through something like, I'm not sure power of attorney is the right word, but in a legal sense, a person who can make those decisions for you.

So that's something that could be considered with these grief bots. But again, you run into sort of a class issue there, where it would take a lot of money to have a legal team make sure that you're represented well.
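As a companion to the thought experiment above, here is a minimal sketch of what such a delegated consent record might look like: ante-mortem notice and choice, plus a trusted delegate who can exercise withdrawal afterward. The schema, field names, and rules are entirely hypothetical; neither the CCPA nor any product defines this structure.

```python
# A minimal, hypothetical consent record for a grief bot.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GriefBotConsent:
    subject: str                      # the person the bot is built about
    consent_given_on: Optional[date]  # notice + choice, recorded before death
    delegate: Optional[str]           # trusted agent, CCPA-style, for afterward
    withdrawn: bool = False

    def withdraw(self, actor: str) -> None:
        """Only the subject or their named delegate may withdraw consent."""
        if actor not in (self.subject, self.delegate):
            raise PermissionError(f"{actor} may not withdraw consent")
        self.withdrawn = True

record = GriefBotConsent("Jacob", date(2022, 6, 1), delegate="trusted sibling")
record.withdraw("trusted sibling")  # post-mortem withdrawal via the delegate
print(record.withdrawn)             # True
```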

The issue of what will happen to this data remains pressing even after the ability to consent is gone. You fundamentally can't withdraw consent after you've passed away, and I don't know the way around that. But I think a good thing would be having people store their data more locally.

It doesn't have to be stored out there in the world somewhere. But [00:19:00] then you're getting into decentralization, which is just a mess within itself sometimes. So…

Liz Forrey: A whole other topic, yeah. Right. I've done some research on data privacy, and it's basically like, you have to decentralize, but also, yikes.

It's hard, right? Because it comes back to death and grief and the grieving process, and the morality of having a version of people exist after they've passed away is very questionable. I have had people in my life who have died who still have a Facebook account, and their stuff comes up and you're like, get rid of it.

And you know, sometimes we keep voicemails. I have a voicemail of my grandma, and my mom has a voicemail of her mother. You don't want to get rid of that, but that's already sometimes too much. What happens when there's more, you know?

And this all ties [00:20:00] back in with what you were saying about how we need art to talk about these things. We talked about Dalí Lives, and in your research you talked about holograms of other dead performing artists who go on tour; that stuff is common. So, to close this out: where do you see ghost bots and grief bots, and whatever other names this type of artificial intelligence goes by, intersecting with the arts industry specifically, now and in the future?

Jacob Gursky: I think art really has a fundamental way of talking about identity; we have the ability to understand that through art. So making these technologies accessible to artists, to make these ideas and an understanding of the consequences available to the general public, is key to this.

Because even if you went to every middle school, high school, and college and tried [00:21:00] to explain to people the ins and outs of AI and why it's important, that might not be as powerful as seeing somebody recreated, and then maybe seeing something happen that's not great, you know? Or understanding how amazing the technology is, but having it really in front of you: this Dalí has been recreated, and maybe that could happen to you. And then thinking, wait, I don't know if I would like that.

Look at all these people just taking selfies with him over and over and over again for the next 30 years that he's in this exhibit, or whatever. And it's like, I don't know if I would like that.

So I think the role of art in this is to explore the edges of it, and to show that it can be fun, that this isn't all just dystopic, that there are spaces and ways where it's okay to play with these things, and that it can bring joy and wonder.

It's amazing. Like, we live in the future, [00:22:00] and it's okay to take joy in that sometimes and be aware of the consequences.

One of the guest speakers we had this semester talked about how facial recognition can be used to identify people with masks on, at a protest or something, but it can also be used to recognize your face and turn you into a cat while you're talking to somebody on a Zoom call.

And I think that art is underappreciated in its ability to add to these conversations in a way that really, really matters. So my big thought from this is: keep bad AI around, especially for art, in certain spaces. Instead of saying, hey, this thing can solve everything, say this thing is sort of an Alice in Wonderland mirror into ourselves. And with generative models, there's something I would have loved to do.

It was too expensive, but as [00:23:00] part of this project, I wanted to create as many different versions of myself as I could in these bots, these ghost bots of me, some of them with various levels of truth. Like, I wanted to make up whole life stories about myself that weren't true.

And then, instead of trying to keep myself private, I would release hundreds of fake Jacob ghost bots: about the time I was in Louisiana and stumbled onto an alligator in the middle of the road, or the time I went to Alaska and worked with sled dogs, right? One of those stories is actually true, but who would be able to tell, if it came from the grief bots?

I think: keep bad AI around and use it as a window into ourselves. Don't lose that as it becomes cheaper and easier and more accessible. Once you lose that, you lose insight, and then you're just a technological solutionist, and they haven't solved much.

Liz Forrey: Wow. I love that.

Thank you, thank you so much for this.

Maraika Lumholdt: Yes. Thank you. [00:24:00]

Thank you for listening to Tech in the Arts. Be on the lookout for new episodes coming to you very soon. If you found this episode informative, educational, or inspirational, be sure to send it to another arts aficionado in your life.

You can let us know what you thought by visiting our website, amt-lab.org. That's amt-lab.org. Or you can email us at amtlabcmu@gmail.com. You can follow us on Twitter and Instagram @techinthearts, or on Facebook and LinkedIn at Arts Management and Technology Lab. We'll see you for the next episode.