Let's Talk: What Are You Giving Permission For?

How is your data being used? And what policies are in place to protect you? In this Let’s Talk episode of the Tech in the Arts Podcast, AMT Lab’s Executive Director and Publisher, Dr. Brett Ashley Crawford, and Chief Editor of Research, Hannah Brainard, dive into the latest headlines.

Show Notes

Google Cookies are Alive and Well

Global AI Treaty

Who is listening to you? Cox Media Group Uses “Active Listening”

Copyright Office Report on Digital Replicas

California AI Legislation 


Transcript

Dr. Brett Crawford

Welcome to the Tech in the Arts Podcast. It's the podcast of the Arts Management and Technology Lab at Carnegie Mellon University. We connect you to the innovation happening in the field, and today we're doing a Let's Talk episode where we chat about what we're seeing in the news that sparks our curiosity or any other various emotions. My name is Brett Ashley Crawford, and I am the Executive Director and Publisher here at AMT Lab, and I'm joined by our Chief Editor, Hannah Brainard. Today, we're discussing the very unhealthy, highly codependent relationship between advertising and technology. And because it's, you know, 2024, we're going to talk about AI, specifically emerging policies trying to address the current and future impacts of AI in society. So, I did say advertising, but maybe it's the advertisers that have developed a troubling codependent relationship with our data. Hannah, will you tell us a little bit more about what we've been hearing in the news? Because I had to chuckle when I heard of Google's sudden shift that they're not actually going to kill cookies. I chuckled because I always thought, well, how are you going to make money [without] the cookies? So, what's going on over there?


Hannah Brainard

Yeah, sudden, but not very surprising in this case. Just to give a little bit of background for any listeners who might not be super familiar with what cookies are or why they're so important for advertising: they're small pieces of data stored in your browser, used to track your behavior online.

So, first party cookies are those that are generally more useful for functionality within a website, but the third-party cookies are really the ones that we're talking about here, the ones that track your behavior across websites. And these are so valuable because they can form this really comprehensive report about your individual behavior online. That's so useful in targeting you for advertising, but also could have any number of nefarious uses, quite frankly. 
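For anyone who wants to see what that distinction looks like under the hood, here is a minimal sketch using Python's standard-library `http.cookies` module. The site names, cookie names, and values are invented for illustration; in practice your browser and the websites you visit manage these headers for you.

```python
from http.cookies import SimpleCookie

# A first-party cookie: set by the site you are actually visiting,
# scoped to that site, and typically used for functionality like
# keeping you logged in.
first_party = SimpleCookie()
first_party["session_id"] = "abc123"
first_party["session_id"]["domain"] = "museum-tickets.example"
first_party["session_id"]["samesite"] = "Lax"  # not sent on cross-site requests

# A third-party cookie: set by an ad network embedded on many sites,
# so the same identifier can follow you across the web.
third_party = SimpleCookie()
third_party["tracker_id"] = "user-98765"
third_party["tracker_id"]["domain"] = "ads.example"
third_party["tracker_id"]["samesite"] = "None"  # sent from any embedding site
third_party["tracker_id"]["secure"] = True      # required when SameSite=None

# These strings are what travels in the Set-Cookie HTTP header.
print(first_party["session_id"].OutputString())
print(third_party["tracker_id"].OutputString())
```

The `SameSite=None; Secure` combination on the second cookie is exactly what lets it ride along on requests from any site that embeds the ad network, which is what makes the cross-site behavioral profile possible.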

Obviously, there are reasons that consumers would be concerned about the types of data that's being collected and what it's being used for. Back in January of 2020, I believe, was the first time that Google said that they were going to eliminate cookies to protect user data privacy.

That deadline, they said at that time, would be within two years. I think that timeline has shifted at least three times, most recently in April, when they said it would be done by the end of January 2025. And again, for context, they're some of the latest to this game: Safari and Firefox have both made major changes to shift away from cookies. But again, it's one of their biggest, likely the biggest, income streams. So, it's hard to part from.

In January of this year, they started to roll out an alternative for 1 percent of Google Chrome users. That's Topics. It's focused on lumping user information into these little boxes that could help advertisers target them for a certain purpose. There were a lot of skeptics about how useful this would be and how it would impact the bottom line.
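To make the "little boxes" idea concrete, here is a toy sketch of that kind of topic bucketing in Python. It's inspired by the general shape of Google's Topics proposal, not its actual implementation: the site-to-topic mapping and function name are invented, and the real system adds per-epoch selection and randomization on top of this.

```python
from collections import Counter

# Invented mapping from sites to coarse interest categories.
SITE_TOPICS = {
    "symphony-tickets.example": "Performing Arts",
    "score-archive.example": "Performing Arts",
    "gallery-news.example": "Visual Arts",
    "recipe-blog.example": "Food & Drink",
}

def top_topics(visited_sites, n=3):
    """Return the n most frequent coarse topics from recent browsing."""
    counts = Counter(SITE_TOPICS[s] for s in visited_sites if s in SITE_TOPICS)
    return [topic for topic, _ in counts.most_common(n)]

# The key privacy idea: an advertiser only ever sees the coarse buckets,
# never the underlying list of sites you visited.
print(top_topics([
    "symphony-tickets.example",
    "score-archive.example",
    "gallery-news.example",
]))
# → ['Performing Arts', 'Visual Arts']
```

The trade-off the skeptics pointed at is visible even in this sketch: a handful of coarse buckets is far less precise than a cross-site cookie profile, which is part of why advertisers doubted it could sustain the same revenue.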


Dr. Brett Crawford

That'd be me. I'm the skeptic. But I’m not the “I want to advertise”, but the “how are they going to do that” [skeptic].


Hannah Brainard

Yeah, unsurprisingly, that was not successful, and in July they announced that they were no longer going to be eliminating cookies. Interestingly, though, it's also tied to antitrust: this new method kind of made advertisers more reliant on Google versus other options. So yeah, interesting, not surprising.


Dr. Brett Crawford

And it will be interesting to see where the antitrust cases end up placing the fate of the cookie. Yes?


Hannah Brainard

Yeah. I think now they're shifting more toward transparency, emphasizing how users can turn off cookies on their own rather than doing it at a macro level. It'll be interesting to see where that goes, and really it's all centered around our data and how it's being used. That's just one case of it. I've heard some rumors about another case. Brett, would you want to tell us a little more about that?


Dr. Brett Crawford

Oh, I would love to. So, you know, we definitely joke a lot in the classes that I teach about how we're being tracked and followed by our data, etc. But I often will actually talk at my phone and speak to somebody as though they're listening, because I'm aware that our microphones are being given over to companies when we click through those lovely permission and privacy agreements as we're accepting an app. So yes, a Newsweek article was shared out that there was a leaked pitch deck from this company called CMG Local Solutions, a subsidiary of Cox Media Group. It essentially framed a practice that it calls "active listening." It's essentially a way to combine your voice data with that online behavior data that you were just talking about to give you hyper-targeted advertising. I want to say that this is leaked data, but it's nothing new for those of us who have sort of paid attention to the fact that you might mention something like, I'm really, really hungry for chocolate, and suddenly chocolates are appearing everywhere in your Facebook sidebar or other advertising feeds.

Anyway, the deck itself, which was obtained by 404 Media, says, quote, "advertisers compare this voice data with behavior data to target in-market consumers," end quote. It goes on to say that the technology can take ready-to-buy customers, essentially taking what your spoken intentions are and delivering them to a company like Google or Amazon so they can really lift what you're wanting into your feeds, right? Meta, Amazon, and Google are all supposedly, according to this pitch deck, major partners to this program. What we've heard since then is that Meta is now investigating CMG's activities, Amazon claims that they don't know them, and Google has said they've cut them off as a partner, so they're not going to use them anymore. But again, sort of what you said earlier, you could always find safer ways of doing things.

So don't talk into your phone's microphone. Be sure to read your apps' permissions. Go into your settings for your phone: you can actually turn off the microphone, or allow it just while you're using the app. And remember to close your apps. You know, I'll go through and swipe apps closed on my phone, and my friends are like, why do you do that? You have all that memory. I'm like, well, because it's actively doing things in the background, and it will affect my battery, but it also might affect other things. Everybody is different in what I would call phone safety. I don't know what your practices are. What are some of the things you do?


Hannah Brainard

I am feeling more and more that I'm not cautious enough. Yeah, I mean, I'm going to start closing apps, unless there's something I really want for Christmas this year.


Dr. Brett Crawford

Well, there you go. Start talking about discounts. That's the key. Talk about the discounts you want on that item.


Hannah Brainard

Right.


Dr. Brett Crawford

Yeah, and I have heard some of the people who are more in the Gen Z zone feeling very comfortable with all this, that they're fine with people listening in on their conversations.

They know that that's happening and that they're fine because they can discern everything that's sort of popping up in their feed and they'll be able to figure it all out.

I've probably been teaching marketing far too long, and I'm so aware of the psychology of our brains and how subliminal messaging can impact us.


Hannah Brainard

Right.


Dr. Brett Crawford

Right. And so, I'll be curious where all this rolls out because I think the antitrust issues are very easy to track and some of these other opportunities or infringements on our privacy are a little bit harder. So, we'll see what actually happens.


Hannah Brainard

Harder and harder to distinguish too. 


Dr. Brett Crawford

Yeah. No. I mean, I think one of the things that was interesting is that you said you hadn't even heard that much about this whole active listening scandal. Is that true?


Hannah Brainard

I mean, honestly, what you said at the beginning, just that it's sort of this rumor, that it's listening and you kind of see it. You're like, I talked about this and now it's on my phone, but that can't be real. There has to be policy in place to protect us from something like that. That's absurd. Which is, you know, not really true.


Dr. Brett Crawford

Sadly, there is not a lot, at least in the United States. I think there's much more rigorous policy in other countries and definitely in California. I know we're going to talk a little bit about that later, but it does really relate back to this: our lack of rigorous policy addressing our rights and, really, any emerging AI. And I say emerging, and I'm going to say the words artificial intelligence, knowing that it's not emerging. We've had it for a while, but it is something that I think we're woefully behind on.

However, the good news is there does seem to be some movement, globally at least. The U.S. signed a treaty with the U.K. and several other countries to essentially set up what's called, quote, "the Framework Convention on Artificial Intelligence," end quote. And that framework is set up as a treaty. And so, you think about "treaties" and you often think about wars. I would say to our listeners, and I keep this in the back of my mind, that we're using the language of war while we're talking about a technology, which I find very interesting. But essentially, they are really building the framework as a preventative to a type of war action. Because it's about human rights; it's about maintaining democracy in a world where AI could have some impact; and it's about maintaining the rule of law, so that what is appropriate in a digital space would be interpreted the way it is in a real-life space. So, it's really focusing on ensuring that the activities within the life cycle of artificial intelligence systems are consistent with all those values: that human rights are being maintained, that democracy is not being threatened, and that the rule of law is being conducted appropriately while technology is being developed, which we know there have been problems with.

There are also lawsuits around companies other than Google over how they've collected their data for training their AI, some of the training datasets that are used for visual training for AI, things like that. So, one of the questions that I've often been asked when I've mentioned this in company has been, well, what are they going to do?

It's like, well, there's not really a way to actually enforce it, but it's a treaty. And so, they're trying to come up with some procedural models for how countries can work with each other at a big-picture level on AI. So, it's less about how you and I are playing with it. It's really about how the global players are engaging with artificial intelligence, and how the companies and partners within these countries are engaging with, using, and developing artificial intelligence and then deploying it for us and other companies to use. So, it's a good start.

There are a lot of countries that did not sign the treaty, but they're very welcome to sign it. It's very much the United Kingdom, EU, U.S., what I would call the standard World War II bunch, without a lot of Asian, South American, Middle Eastern, or African representatives in the mix yet. But hopefully they will join, and hopefully it will become more of a global picture. Have you heard any scuttlebutt about it in your field, in your areas?


Hannah Brainard

You know, that actually makes me think of a recent study from the Copyright Office. So, going from this global lens to more of a domestic lens of how this can actually take action within our laws and policies in the United States. In 2023, the Copyright Office announced they would be beginning a study investigating the intersections of artificial intelligence and copyright.

They intersect in so many different ways. You talked about sourcing of materials, but this first part that they just released in August is centered around deepfakes and digital replicas. So, talking about impact on democracy, we've seen pretty serious headlines over the last year.

I mean, not just artists like Frank Sinatra and things like that, but, you know, political leaders, President Joe Biden, for example, that really have kind of broad, threatening impacts, potentially.


Dr. Brett Crawford

Yeah. If you're creating deepfakes of political leaders, this is going to be a threat to democracy and also can, as a result, end up with individual harms, right?


Hannah Brainard

Mm hmm. Oh my gosh, yeah. I mean, even in headlines of people imitating your loved ones, calling you with scam calls. It's at every scale. Some are our biggest political leaders, but people you may know could be impacted by how simple it is to replicate the human voice or likeness using tools through AI. So, the purpose of the first part of this study was really to provide Congress with some recommendations on potential federal law. Just to highlight a few key points that they brought up in this study, I would invite everybody to jump into it a little bit more. It's sort of interesting to read what they processed.

Dr. Brett Crawford

And we will have links to all these lovely pieces of news in our show notes. Thank you for reminding people to check that out.

Hannah Brainard

That's perfect. So, like we said, beyond just celebrities or public figures, this is recommended to be protection for all people. And it goes beyond the current name, image, and likeness protections that exist in many states now. They also suggest that this lasts at least for an individual's lifetime, and in some cases beyond that.

And, you know, this is important too: it's liability not just for the people creating this fake content, but for the platforms that are distributing it. So this also creates an incentive for social media companies, etc., to monitor for this and remove it promptly. But also, there's recognition that this could be used positively.

And I think this is kind of interesting through an arts and cultural lens as well, that there might be opportunity to monetize by representing one's likeness using AI in different ways. So, being able to have an individual license their likeness, if they want. 

Dr. Brett Crawford

Yeah, and we saw a little bit of that discussion with the negotiations of the SAG-AFTRA contracts. There have been actual contracts that have moved forward with gaming companies and some performers whose likenesses are being used for avatars and other gaming profiles. So, it's a sign that we may find a future where we actually license our image, but also maybe license our data.


Hannah Brainard

Mm hmm. Yeah, I mean, the line I hear over and over again is if you're not paying for it, you're probably paying for it with your data, so. 


Dr. Brett Crawford

I think there'll be a moment, again, sort of the Black Mirror extension of what could that be. And I do think there will be a question of how, if we're requiring a company in the future, let's say, to pay an artist who has copyrighted a book to let it be scanned to go into an artificial intelligence, what about those of us whose data on Facebook and Instagram was swiped to train AI? Is there a compensation model for that? Right? Or is there an opt-in, opt-out model for that, one that would require some form of compensation, something free or actual cash? It'll be interesting to see where this continues to move.


Hannah Brainard

Yeah, it's really interesting to sort of picture that future. And while we're on this train of policy, zooming in even further: we've gone from global to national, now to a state level. One of the most impactful states in the conversation around AI is California. So many of our largest artificial intelligence companies are housed there.

A bill that has been making the headlines this month, California SB 1047, recently passed through the state assembly and is now on the desk of Governor Gavin Newsom for approval or veto by the end of September. It's kind of a landmark bill in a lot of ways, one of the strictest that we've seen in the United States around artificial intelligence. Its first iteration had, you know, thorough safety testing, third-party evaluations, reporting to a newly established regulatory agency, and even the threat of perjury charges for developers who didn't report safety concerns or lied about them. Which I see you, uh, kind of...


Dr. Brett Crawford

I'm making faces. Well, and also with maybe a little bit of glee. Accountability, you know.


Hannah Brainard

Well, and I think it was, interestingly, met with a lot of mixed reviews. You know, it's this conversation of regulating and ensuring safety without prohibiting innovation and technology. So, there was, yeah, a lot of mixed response, and I think specifically around how this kind of heavy regulation would impact small developers or startups, academic settings, and things like that.

So, it went through some changes, which unfortunately, in my opinion, removed the regulatory agency and perjury charges in favor of focusing on releasing public statements about safety. It also removed these restrictions for developers with budgets under 10 million dollars, so it wouldn't impact startups, academic spaces, et cetera. So, it's interesting.


Dr. Brett Crawford

It is interesting. I had a couple of things I wanted to sort of tack on and get your thoughts on. One is that you can actually, on some of the bigger models that are publicly accessible, look at their change logs, essentially. So, you can see everything that OpenAI did from version three to get to four, to get to where we are now, so to speak, right?

So, they say everything they've done, including the massive testing in the most recent one for preventing harms, right? Because if something's thinking on its own, how can you ensure it doesn't do this? But in addition to that, I think what's interesting is I've heard some conversations, in the rabbit hole worlds in which I live, that there's a recommendation for there actually to be a department-level agency, like the Department of Defense, that there would be a department of, essentially, technology,


Hannah Brainard

Interesting.


Dr. Brett Crawford

which would unify things. Right now, technology is sort of split between the FTC, which is managing some elements of [AI], and then of course the Department of Commerce, which is handling another part of it. A department would really say, no, technology is this bigger piece. There are a lot of conversations happening. But the good news is you are getting some transparency, for those who want to read all of the transactions that have happened in changing artificial intelligence.


Hannah Brainard

That's really interesting. And I think, where we're sitting now, there's not the same skill set in our legislative bodies as in the tech companies creating these changes. So the idea of having an agency specifically focused on technology would be, sort of, I don't know, I'm supportive.


Dr. Brett Crawford

And part of me knows that we actually have in our federal government a vast team of people who are working on cybersecurity for us. Right? And there are other technology teams doing other examinations; the FBI has an entire tech team. So, I think we have the skill sets. We just haven't centralized the conversation. There are opportunities and costs to centralization, but I think, by doing that, it could create an interesting focus on the bigger concerns, and not leave it to the president to sign the treaty, but to actually have sort of an officer, somebody who's actually engaged in some of those conversations.


Hannah Brainard

Well, and it'll be interesting to see what happens following this California bill, should it be signed into law, and how that will impact other states. Actually, this is one of 12 bills, or maybe more than 12, on Gavin Newsom's desk right now regarding AI.


Dr. Brett Crawford

All regarding AI?


Hannah Brainard

Um, which is, yeah.


Dr. Brett Crawford

I wonder, or maybe I'm making assumptions, why this one got the most attention from the media.


Hannah Brainard

I think, just from my understanding, it was probably the most intense, with the strictest regulation. Just as an example, another one was more centered around watermarks: requiring watermarks on social media platforms, et cetera, that would be easily enough perceived, read, and understood by the average user, so they can identify AI-generated content if they come across it in their feed. That one was actually much less divisive; OpenAI, Microsoft, and Adobe were all in support of that bill. So that might also contribute to why it was maybe not as, you know, popular in the headlines.


Dr. Brett Crawford

And probably because we work in the arts, I feel like I'm more knowledgeable about the watermarks, particularly for artists, and sort of discerning what was made by an artist versus what is made by AI. And not that I wasn't paying attention to this, but I didn't know that they'd made some key changes to it over time. So that's why we have these conversations. Anything that you are hopeful for after you've read these policies and sort of been paying attention to the news?


Hannah Brainard

I don't know. I mean, I think I'm feeling the momentum. From the perspective of an individual user on all of these different platforms, there's more attention to how my data is being used, and I'm getting a little bit more agency in controlling who has access, but slowly.


Dr. Brett Crawford

What would be the number one piece of advice you would give to a friend for protecting their data?


Hannah Brainard

Well, now I'm going to say close all your apps. But I think just in your settings for all of these things, like know what you're giving permission to, like read the fine print, because you might be giving away a lot more than you think you are.


Dr. Brett Crawford

Yeah. And it is getting a lot easier to track who they are sharing the information with. Right? The word "partners" is that key word that they all use: "we'll share it with partners," but you don't know who those are. I think my main piece of advice is to encourage people to use a browser like Firefox, which is an open-source tool, but there are many of them out there, as you said. I mean, Safari, Opera, many of them have higher privacy settings that are just sort of baked in; you don't have to do anything. And then, you know, I use the search tool DuckDuckGo, and none of these are underwriting anything that we're doing. It also has a no-cookie model; it just sort of shuts everything down. So, it depends. I mean, again, clearly there are some people who love the fact that they just have things delivered into their advertising stream, and it's there when they're ready for it.


Hannah Brainard

That is, that's true. Well, I feel like I've learned a lot from this conversation. This has been very fun.


Dr. Brett Crawford

Same here. It's part of why I love working on the Arts Management and Technology Lab. I feel like it keeps me up to date and informed and ready to tackle whatever the next challenge is going to be. So, thanks for joining.


Hannah Brainard

Thanks so much.