Summer Lopez on AI and Threats to Free Expression

Ryan Weber: Welcome to 10 Minute Tech Comm. This is Ryan Weber at the University of Alabama in Huntsville, and I am very excited to be participating in another Big Rhetorical Podcast Carnival. This time, the Big Rhetorical Podcast Carnival 3 with the theme Artificial Intelligence: Applications and Trajectories. And I’m very excited to be a part of this group of interesting, exciting, innovative podcasts that are talking about the topic that everyone is talking about, which is artificial intelligence and how it’s going to affect academia, and writing, and the entire world.

Please check out the other podcasts that are participating in the Big Rhetorical Podcast Carnival this year. They’ve got a great group. It’s bigger than ever. And you can find them on the Big Rhetorical Podcast feed, or you can find them with #TBRpodcastcarnival2023, #aiapplications or #aitrajectories. And I want to thank Doctor Charles Woods for inviting me to participate in this year’s carnival.

And I appreciate that The Big Rhetorical Podcast really wants a nuanced, comprehensive investigation of artificial intelligence and how it’s going to affect our world. I hope that this carnival brings both positive and critical views of AI into its conversations, because I think this is a nuanced and complicated topic that is going to have significant long-term effects.

For our part on 10 Minute Tech Comm, our conversation skews a little more pessimistic, but I was very excited to talk with this episode’s guest.

Summer Lopez: I’m Summer Lopez. I’m the chief program officer for free expression at PEN America, and we are a nonprofit organization that stands at the intersection of human rights and literature to defend freedom of expression in the US and globally. And I head up our advocacy and policy and research work on freedom of expression.

Ryan: I talked with Summer Lopez about a new PEN America report, “Speech in the Machine: Generative AI’s Implications for Free Expression.” And the report really details a lot of the concerns that arise alongside the explosion of AI as it relates to issues of free expression, censorship, online harassment, bias, and fake news. I found this report very interesting because I have seen a lot of conversation about AI, but I hadn’t seen this topic explored in detail. So I was very excited to talk with Ms. Lopez about the report that they produced, about some of the threats that AI may raise for free expression, some of the ways that we might be able to combat these threats, and her own personal attitudes about AI. I hope you enjoy the conversation.

Begin Interview

Ryan: Welcome, Summer, to the podcast. I’m so happy to have you here. I really appreciate you taking some time to talk about the new report that you helped author for PEN America. And one of the things the report states is that generative AI stands to supercharge existing threats to freedom of expression. Can you talk a little bit about the concerns that kind of motivated you to write this report?

Summer: Absolutely. And thank you so much for having me. I mean, I think that like everybody else, we were, you know, struck by the emergence of ChatGPT and these generative AI tools kind of into the mainstream and the fact that they’re in everybody’s hands now essentially. And, you know, it was clear to us pretty quickly that this was gonna have implications for a lot of issues that we work on, like disinformation, online abuse, censorship, and also for writers and artists and we’re, you know, a literary organization. We represent and work with writers in the US and around the world.

And, you know, this has obviously become an issue in the context of the Hollywood writers and actors strike, you know, and so I think we saw a lot of relevance to our work and also didn’t really see the free expression angle in the conversation very much. And so we really wanted to have a chance to kind of bring that to it and to sort of explore what some of the implications are. I keep saying this report was like building the plane while flying it, because news was just coming out every day and new studies were being released. We were learning as we were going, certainly indicative of how fast the technology is evolving. So we didn’t see it as a paper with all the answers, but we wanted to kind of spotlight some of the issues that we anticipate coming up around free expression.

Ryan: Great. And, you know, that’s one of the reasons I wanted to have you on the show: I haven’t heard people talk a lot about free expression. I’ve read a lot of stuff about AI. I’ve heard a lot of conversations about AI, and this seemed to be missing from the conversation a little bit. Can you describe this report briefly in case people haven’t read it? So, you know, how many authors it had, kind of what it covers, who the intended audience is?

Summer: Sure. So this was an effort between really kind of 3 of our staff primarily at PEN America with input from lots of our other experts as well on sort of a range of different issues. We developed it over about the last 6 months. And you know, we really tried to look at both, sort of, what are some of the implications of the emergence of generative AI for, particularly for the creative spheres for writers and artists and, you know, what that means in that sector, as well as how it’s going to impact some of the free expression issues that we work on.

And so, we’ve been working on disinformation for a long time, we’ve been working on online abuse. We see both of these things as threats to freedom of expression. You know, we see disinformation as undermining civic trust and our ability to engage in meaningful and fact-based public discourse. And generative AI seems potentially to supercharge some of those problems. We also wanted to look at what it means for issues of censorship and bias and how the systems might, you know, absorb some of those patterns into the information that they’re providing into the world and what that means.

And then we conclude with some policy recommendations. Again, these are not sort of be-all, end-all solutions, but some guiding principles and some actions we think, you know, could be taken now to kind of put us on a better footing as we figure out collectively how to deal with these new challenges.

Ryan: Terrific. Well, let’s take kind of each of these things because this kind of lines up with the questions that I had for you. Let’s start with this idea of kind of creating and amplifying disinformation. So we know this is already a huge problem just with humans doing this, but how might AI currently and in the future kind of contribute to making our disinformation problem worse?

Summer: Yeah. So I think that, I mean, first of all, there’s sort of the basic fundamental problem that generative AI tools can just get information wrong. Right? They can produce incorrect information whether they intend to or not. They don’t have that sort of motivation, but, you know, I think that can probably be refined over time, but it also probably won’t ever really go away entirely.

I think what’s more concerning is the ways in which these tools make it cheaper and easier and more accessible to develop more sophisticated and convincing disinformation campaigns by those who are intending to deceive the public, and that content then becomes harder to detect because it may kind of lack some of the indicators that we now might, you know, use to identify a bot as opposed to a person on Twitter, or on X, and, you know, some of the things that you kind of look for to be able to tell if something is real or not. As these systems get more sophisticated, they may be able to kind of circumvent a lot of those prevention mechanisms that we’ve kind of tried to develop and build.

There’s an example we talk about in the report where a reporter from Poynter used ChatGPT to generate an entire fake news organization in about half an hour. One of the things we do is media literacy workshops, and we tell people to look, you know, on websites for editorial policies and information about a newsroom’s funders and some of the things that can help you distinguish, you know, real news outlets from fraudulent news outlets. But ChatGPT was able to create all of that, and so it’s going to be much harder to distinguish what’s real and what’s not.

Ryan: They’re doing the things that you want people to look for, they’re providing that.

Summer: Right. Right.

Ryan: Yeah. It’s a little bit of an arms race in some ways is kinda what this feels like.

Summer: Yes. And I think, you know, for us as a free expression organization, we’ve always tried to emphasize solutions to disinformation that don’t themselves infringe on free expression. We’re really about empowering news consumers and information consumers to be more discerning to have the skills to detect false information. And this could make all of that just much, much harder.

You know, I think there’s also some research findings we talk about in the report that suggest that you can, you know, really use these tools to potentially manipulate people’s opinions and skew public discourse, and, you know, if the chatbots are designed to reflect an ideology, they could kind of further entrench some of our existing echo chambers.

You know, I think that the way in which this could be used in our political system, in elections, even just the knowledge that it’s possible it could be being used in that context, suggests that people might trust information less. And so, you know, I think that kind of risks further eroding some of the civic trust, trust in institutions, and things that we’re already dealing with in our political system and in our society.

Ryan: Yeah. I’ve seen some of those studies. I’m doing a deepfakes research project. And that’s one of the things that people keep finding is that when you tell people about deepfakes, they’re just more suspicious of everything.

Summer: Right. So how do you help people prepare for this without sort of just undermining anybody’s ability to ever trust anything?

Ryan: Right. We’ve gotta have some trustworthy information. One of the other really concerning things that you talk about is the ability of AI to make online abuse, which is already very, very bad in a lot of places, even worse. So how might that happen?

Summer: So again, there’s sort of a couple aspects to it, and one is obviously that it can just, again, add more capacity and could ramp up the volume of abuse that can be easily generated. You know, a lot of the abuse that we see, especially for writers and journalists and dissidents around the world, is state generated, right, or by kind of state-affiliated troll armies. And so, you know, being able to carry out a lot of that through generative AI tools could just make it a lot easier, a lot faster, to really overwhelm people with abusive content.

You know, at the same time, I think a lot of what those campaigns often try to do is to generate false information about people out in the world. And so there’s also the risk that, you know, that information then feeds the generative AI tools and then is just self-replicating, and it makes it harder for people to kind of escape that cloud of, you know, negative information that might surround them, maybe claims, you know, made up about them that are then, you know, again, kind of replicating.

So I think, you know, that’s part of what we’re concerned about and part of what we’re hoping, you know, some of the generative AI companies will be thinking about, you know, how they can put in place some safeguards so that these can’t be weaponized in that way.

Ryan: That was one of the things that I thought was interesting in the report because, you know, obviously, sort of immediately the idea comes to mind of, you know, someone’s gonna make a million mean bots and, you know, be jerks to people on Twitter. But I hadn’t thought of sort of like, let’s just create a ton of false information and put it out into the world about these people that just, you know, makes it much harder to combat, like, the speed at which this stuff can be produced.

Summer: Right. Exactly.

Ryan: Alright. So we’ve got two dire things already. Let’s continue. So you were talking, you mentioned this earlier about the issues of censorship and bias, which is, you know, something that people have talked a lot about, especially bias with the AI. But what kinds of ways do you see for AI to have the potential to reproduce patterns of censorship and bias?

Summer: Yeah. I mean, obviously, you know, we know that there is bias built into everything that, you know, involves a human-made algorithm of some kind. Part of the problem is that, of course, these tools are reflecting what’s already out there in terms of content on the internet, and so it really depends on what bodies of content they’re being trained on, but they can certainly reproduce patterns of either unconscious bias or deliberate censorship.

So, you know, if you are training a chatbot on a corpus of information that is already censored by the government, or that, you know, we use the example of not mentioning Tiananmen Square, the Tiananmen Square massacre, then, you know, that will be reflected in the content that comes out of it as well. So I think a lot of this is really going to depend on how that evolves and what information they are being trained on, and some of that we know and some of that we don’t know at this point.

Again, we talk about an example in the report where, you know, a reporter gets a different response when she’s working with a Chinese-built chatbot in English or in Chinese, probably because they are trained on different corpuses of information, or because they have more information or less information that they are working from. And so, you know, when it was in English, she was able to get it to talk about government suppression with regards to Tiananmen Square.

And so I think that’s interesting, because they’re definitely going to, in some ways, you know, build some of the censorship into the system. On the other hand, they are also operating, you know, they have the capacity to hallucinate. And so they’re hard to control too. Right? So it’s also possible that they might evade some of the censorship attempts of governments. And I think part of what’s interesting at this point is kind of seeing how that evolves and what that’s really going to look like in practice.

Ryan: Mhmm. All of this is going to be wild. I mean, it’s gonna be very, as you said, you know, this report is building the airplane while it’s flying, and I think that’s gonna be AI conversations for the next several years, at least. But in the midst of this evolving situation, your report offers several recommendations for policy and that kind of thing to, I liked your phrasing earlier, “put us on a better footing as we move forward.” Can you highlight a few of these recommendations that you think are sort of particularly significant?

Summer: Sure. And I think, you know, as I said, one of the things that we think about in terms of how we respond to generative AI, similar to how we thought about responding to disinformation, is there’s always the risk that the response can also be problematic, right, that governments can resort to censorious new regulation, especially in kind of a moment of everybody panicking about this new technology.

So, you know, we don’t wanna be too doomsday-ish about it, and we don’t want to suggest that we kind of rush to regulate this in ways that could constrain expression, because these are also tools for expression and for creative production. And we don’t want to, you know, kind of go too far down a problematic path.

So, you know, we really tried to offer some kind of guiding principles for how to think about some of this, in terms of ensuring that as we think about any sort of regulation or guidelines, whether it’s by industry or by government, that, you know, civil society is part of that conversation, that human rights advocates and academics and writers and artists and people whose livelihoods can be affected by the development of generative AI are part of that conversation, and that these efforts are iterative and, you know, allow for flexibility, again, because the technology is changing so quickly.

You don’t wanna kind of get, you know, stuck in something that quickly becomes insufficient for dealing with the challenges as they evolve. I think one of the things we’ve seen with social media is that it’s been really hard for researchers to have any access to how it works and sort of understand how it could be better. So I think, you know, ensuring transparency and researcher access, as much as possible, would be really important.

And then, you know, we obviously have a recommendation in there about sort of the importance of figuring out how to safeguard the ownership rights of writers and artists and other content owners so that this is not, you know, either infringing on copyright or really just affecting people’s livelihoods and ability to own their work.

I think the one other thing we talk about is that there’s some kind of stuff already out there that we could take action on that would help. So, you know, it’s been long overdue to really pass some sort of comprehensive privacy legislation, and there is also the Platform Accountability and Transparency Act in Congress that, you know, is there and would help.

Once again, it wasn’t sort of designed with generative AI in mind, it’s really about social media platforms, but those kinds of basic principles of establishing accountability, transparency, prioritizing privacy, I think, would, again, set us on a better footing for how we deal with generative AI as well.

Ryan: Great. So you’re saying there are some existing things that we just haven’t gotten around to that have become more urgent with the explosion of AI.

Summer: Yes. Exactly. And I think, you know, obviously it functions quite differently than social media, but I also think there’s an opportunity to try to learn some of the lessons from how we haven’t sufficiently responded to the challenges of social media and think about, you know, how we can be a little bit more proactive, but also really thoughtful and, you know, not sort of put anything in place that is going to constrain expression, but think about how we can maximize the value of these tools for everybody and reduce the risks involved.

Ryan: Terrific. So you spent, you said, several months working on this with the team. You have immersed yourself in the stories and the literature about it. If I can ask this, and if it’s fair, how do you feel now? Like, what is your level of optimism versus pessimism, or kind of what is your attitude about AI today, at least? It’s August 10th, 2023. I won’t pin you down forever.

Summer: That’s good. Yes.

Ryan: Kind of where do you sit on how you’re feeling about all of this?

Summer: I mean, as I said, we didn’t want this to be all about sort of it’s the end of humanity as we know it or anything like that. You know, we wanted to take a measured approach and recognize that these are creative tools, that they could support the work of writers and artists, that they could, you know, be useful to journalists, that they have the potential to also counter some of the problems that we’re concerned about. You know, they could be developed to address online abuse and censorship. But also, I am pretty concerned. (laughs)

I think there is obviously, you know, if these were all put in the hands of well-intentioned people with human rights at the center of everything they’re doing, that would probably be okay, but we know that they will be weaponized and wielded by authoritarian states, by, you know, bad actors who are trying to shut down other people’s speech and voices. And, you know, I really am concerned about, you know, the ability to kind of manipulate our public discourse, to undermine, as I said, civic trust and really the value of speech and the written word.

And so, you know, I think that, we wrote our first report on disinformation in 2017, and we kind of said all these things in there that sounded, you know, sort of far-fetched at the time. And now they’re all obviously exactly what has happened, right, reduced trust in the news media, reduced trust in institutions, challenges to our political system. And, you know, I see sort of the same set of risks here and how they could be amplified. And so I do worry about sort of the difficulty in having certainty about authenticity in communications and how we talk to one another as humans too.

And so, you know, I think there’s sort of some intangible concerns here that, again, we don’t know exactly how they will play out, but I think, you know, we have to be really thoughtful about how we respond and how we preserve the value of language and the written word and the sort of spark of human creativity that, you know, I don’t think these machines can replicate. I’m sure they can churn out endless stories or texts, but I don’t think there’s anything that’s going to quite replace the, you know, sort of creative genius that’s possible in a lot of the literature and art that we enjoy today. So I think preserving that space is gonna be really essential.

Ryan: Great. Well, thanks. So it sounds like not total pessimism, but some pessimism.

Summer: Definitely some pessimism. (laughs).

Ryan: Yeah. Yeah. Depends on what day you ask me. That’s sometimes my attitude as well.

Summer: I always say, to work in human rights, you kind of have to be an eternal optimist anyway. So, you know, I keep that in mind, and I do think there are things that we can do to consider how we manage these tools in responsible, but also, you know, not constricting, ways.

Ryan: Well, thank you so much. I really enjoyed talking with you and thank you for the report, and all the work that you’re doing and keep it up.

Summer: Thank you so much, Ryan. Thank you for having me.

