Dr. Kem-Laurin Lubin on the Rhetorical Nature of AI

Ryan Weber: Welcome to 10-Minute Tech Comm. This is Ryan Weber from the University of Alabama in Huntsville, and today I’m presenting a special episode as part of the 2025 Big Rhetorical Podcast Carnival, which includes a collection of podcasts in writing studies focused around a central theme. This year’s theme is (Un)tethering Surveillance, Power Dynamics, Emerging Technologies, Social Control. The theme asks us to consider how new technologies like AI and large language models exert power and agency in our lives, and how we might design more human-centered technologies for a just future.

My guest speaks directly to these themes. Dr. Kem-Laurin Lubin is a designer, scholar, and writer who explores the rhetorical nature of AI, the ways AI tools shape human identity, and how AI enacts surveillance, especially in medical technologies like fertility apps.

I first encountered Dr. Lubin’s work on Medium, where she writes thoughtful and accessible essays on rhetoric and technology. In this interview, we talk about how AI exerts tremendous agency in our lives, collecting our data and then defining people as normal or irregular in ways that are particularly dangerous for marginalized communities. Dr. Lubin encourages us to put humanity at the forefront of designs that are inclusive and that give users agency to understand or resist how their data gets used.

I want to thank Dr. Charles Woods for inviting me to participate in the 2025 Big Rhetorical Podcast Carnival, and I encourage you to check out all the great podcasts that have participated this year. I’ll let Dr. Lubin introduce herself, and then we’ll move straight to the interview.

Kem-Laurin Lubin: My name is Kem-Laurin Lubin. You can say that I’m a design researcher turned AI scholar, and a newly minted doctor of philosophy in computational rhetoric here in Canada at the University of Waterloo. My work basically examines how artificial intelligence really encodes power, especially in my specific focus, and that is health and surveillance. And yeah, at the heart of my work, I really just want to make sure that design is ethical.

Ryan: Awesome. And that’s why I invited you on the podcast, because I really like the stuff that you’ve written about AI. And you know, one of the things I like about your stuff on Medium is that it’s smart, but it’s accessible. You know, people who aren’t in rhetoric can still get something out of it. And one of the things that I really wanted to ask you about is, you argue that AI itself is rhetorical. For you, what does it mean for AI to be rhetorical? 

Kem-Laurin: Yeah, so thank you for reading me on Medium as well, too. And I think before saying anything, I really try to make my writing accessible and not just academic, because it’s part of the mobilization process of research too, right? But when I say AI is rhetorical, I think what I’m trying to say is that it doesn’t just process our information, right? It is at the same time shaping meaning about who we are, who it thinks we are. It’s a kind of an interpretation of the world. And in its own way, it’s telling us and prioritizing what it thinks matters.

And so we are the product, right? And so in my research, for example, if you think about fertility or menstrual tracking apps, or for that matter, even surveillance software, when an app tells you something like, oh, your cycle is irregular, right? It is using a kind of data about you and making a claim about your body, about what it considers normal, given what is actually framed as normal within these AI systems, right? So in that case, that’s what I mean when I say it’s doing something rhetorical in that sense. It is not neutral at all. It is shaping meaning and helping frame who it thinks we are.

Ryan: And that actually, I think your example actually really lends itself well to my next question, because one of the things you argued in the Medium piece that sort of drew me to your work in the first place is that AI systems are an active participant in crafting our identities. And I think, you know, the example that you gave there of like, well, your cycle is abnormal. Well, that is defining the person, right? By using AI to define the person. Can you talk a little bit about how AI systems participate in the work of creating our human identities? 

Kem-Laurin: Absolutely, absolutely. So I just want to briefly insert sort of a footnote to my research. I really come from a field of rhetoric, and I really wanted to use the language of rhetoric and the history of rhetoric to tell this story. So in ancient Greece, for example, when we think about ethos and Aristotle, I have made the claim in my research that today we’re doing it algorithmically, right?

And so just going back to the last example, healthcare apps, they label us. For example, a fertility app may tell a woman, oh, you’re stressed, or you’re not ovulating or something like that, right? And those labels really influence what we do in real life, right? And so just think of the funny example of an AI-powered fertility app even telling you, well, you need to have sex today, right? And so, you know, you think of all of that as guiding the process of healthcare, that you’re in conversation with the system. And that itself, you know, lends itself to being viewed through the lens of rhetoric as well too.

Ryan: Well, and I imagine this has got to be, and I think we all can think of examples in our lives, you know, you didn’t get enough steps today on your watch or whatever, but it’s got to be a stressful process to kind of have the AI telling you, you know, there’s something wrong with you, or here’s the way you need to behave. And so it seems like that’s a little bit of what you’re getting at is these systems are dictating our identities and behaviours. 

Kem-Laurin: Absolutely. And just to give you two tangible examples, I teach a course at the University of Waterloo called Gender and Social Justice in Popular Culture, and a big subset of my class is, you know, students with disabilities. The one example that we discussed in class was sort of the default setting of AI-powered design systems when it tells you you need to do your 10,000 steps. If you’re in a wheelchair, the assumption is that you as a user can do 10,000 steps. And it does this in so many contexts, right? And so that’s what we’re talking about when we think about, you know, why this matters for us to actually look at and interrogate as something that is a feature of our current technoculture.

Ryan: Because the AI is defining what’s normal, it’s defining what’s irregular, it’s defining kind of who is normal and who is not based on, you know, sort of synthesising all this data. 

Kem-Laurin: That’s right, that’s right. I was thinking about this as well in the context of my own research in surveillance systems, right? And one of the things that is built into these systems is that they tell us, in their own AI virtue signalling, that these people, these dark bodies, for example, are dangerous. And there are so many other ways. When I think about my research, it is very difficult not to think about the fact that as we begin to interrogate these systems, we need to also understand the context in which they are actually functioning in that rhetorical way as well.

Ryan: So we’ve got to bring the rhetorical back in because it’s dangerous to assume that these algorithms are just giving us objective information.

Kem-Laurin: Yes, yes. There is, interestingly, Ryan, a case right now with LinkedIn. It’s gone very viral in the last week, the whole idea of, you know, the gender-switching experiments and what happens. And this also gives us insight into the fact that these algorithms are doing something, because when, I guess Lucy was her name in this case, when she became a Luke for 24 hours, you began to see the algorithms working differently. And there was also a case, I want to say in April of this year, of a Black woman changing race within the context of LinkedIn. And so when I talk about the rhetorical nature of these AIs and how they’re computing us, mathematizing us, there is rhetoric in that numeric language system as well too.

Ryan: And to understand, because I did, I read a piece you wrote about the LinkedIn experiment. So to make sure I understand, if a woman changes her gender to male on LinkedIn, it expands her reach. If a person, especially a woman, changes their race to white, it expands their reach. Is that accurate? 

Kem-Laurin: Absolutely. And we don’t even stop there, because one of the things we have to be cognizant of is that we have many identities. So think about the old example, after 9/11, when, you know, Mohammed Ahmed became Andy Joseph, right? And so all of these things come into play, you know, with my students with disabilities as well.

So we started talking about “What does my profile picture look like? And what is the system capturing about me?” If I am, and I’m using air quotes, “fat,” what does the system convey about me? If I look to be 60, what is the system, you know, getting about me? If I live at X postal code or zip code, all of these things. And so in my research, I see these as the units that actually give me indications as to how the computation is occurring. And in many ways, the system may not be able to say you’re Black, but it can say you live at X postal code, X zip code. And so that is the kind of rhetoric that becomes so insidious to me, right, in these AI-powered systems.

Ryan: Because it’s looking at all this data and crafting a picture of you. And you know, one of the things I really liked in your Medium piece, going back to your stuff about, you know, rhetoric and ancient rhetoric, is that you offer us the term “ethotic heuristics.” Can you tell us a little bit about what this term means to you?

Kem-Laurin: Yeah. So I’m a designer, as I mentioned earlier, and I build on the legacy of heuristics as a sort of rule-of-thumb guidance for how we design systems. Well, there were 10 when I started. And so because that was already something that existed for designers as an artifact, as a tool of the trade, I wanted to build on that.

And that’s the work of Molich and Nielsen. And so I take them into a modern era to ask, as we begin to build systems that are characterizing people, hence the ethos part, right, how do we actually do that in a responsible way? So one of the things I’ve always said in my work is that these heuristics are contextual. So I want to situate them: what do they look like in the case of fertility apps?

For example, the criticality of such an example is in the reversal of Roe v. Wade. Imagine all of these apps with information about a user that could be deemed illegal, right? How am I, as a user, allowed to interact with the system to get my consent back? And so these ethotic heuristics, I said, are really contextually based, and they’re really helping us mitigate potential harms that a system can do with rules of thumb, right? And so my new book actually goes deeper into that, specifically for women’s applications, and they become categorical heuristics. Things like, you know, transparency, consent, privacy by design, affordances, user controls, human factors and ergonomics. These are all categorical heuristics that, given a context, will produce different sets of heuristics appropriate to said context. I hope that wasn’t a lot.

Ryan: No, that makes sense. So you’ve taken like the, what, Molich and Nielsen, the 10 usability heuristics, which my students use a lot for heuristic evaluation. You’re saying let’s add some ethotic heuristics, some sort of contextual ethical heuristics that we can also use to evaluate designs and systems. 

Kem-Laurin: That’s correct, yes. So it’s really building on that tradition. I wouldn’t say I’ve used these heuristics, but I have taken the concept of heuristics into that practice, and I also apply them to surveillance systems as well, too. But I’m looking at it from the perspective of what is the rhetoric that enables governments and whoever to surveil us? They’re telling us, “Oh, it’s for your safety.” So I also apply another set of categorical heuristics that speaks back to surveillance rhetoric.

Ryan: Okay, can you talk a little more about the heuristics that speak back to that surveillance? Because I know that surveillance is one of the themes that runs throughout your work. 

Kem-Laurin: Yes. So for surveillance specifically, I’ll use a specific example and say that when we talk about surveillance, we must be cognizant that it could be overt, right, or covert. And so let’s talk about surveillance within the context of fertility apps specifically. When we think about how we’re giving and pouring all our data, in varying contexts, into fertility apps, one of the things that transpired in the U.S., and I live in Canada, but this is my reference, is this: imagine you’re having a private chat or you, for example, miss a period. And this is in a very 1984, Handmaid’s Tale-ish environment. And you have a system that is able to track all these things, surveil you in that context, right? But this is now real life: how do you absolutely ensure that what you’re doing in that space is not seen as punitive, right?

And so a heuristic that would guide that, a categorical set of heuristics, resides in that space of consent. So now I can go in and say, oh my gosh, the laws have changed. I need to make sure that when I said to you, Clue app or whatever other app, that you can have all my data, I sort of need to take that back, because I’ve missed a period, which may be deemed as I’ve had an abortion. Therefore, you know, come arrest me.

So I think of these heuristics as something that really hands back to the user the control and agency that we’ve lost in these seemingly innocent interfaces, where we just enter these pieces of information happily, not realising that there’s a recipe that could be cooked where we, in that scene, are the criminals, right? And so my concept of things like consent and privacy by design speaks to just that, and transparency, because all of this resides in a black box. We cannot see what is cooking down there about us to say, well, you know what? We surveilled you and you, Ryan, are the criminal, right? So all of these, but yeah, it’s a lot to take in. And I think my work really appeals not only to academics, but also to people who are like, what is going on? And so it’s, I think, an accessible way to say, “Hey, you know, these systems are not just innocently taking your data.” There is method and madness. There’s a madness in their methods.

Ryan: I like the reverse of the phrase there. Well, and that seems like part of the way that these tools are rhetorical is they have so much agency in our lives.

Kem-Laurin: They do. They do. And, you know, I’ve been practising for over 20 years, and I tell people, if you think about the long arc of what has happened in the space of human-computer interaction, we started with something as good and holistic as human factors and ergonomics.

In about 2008, it was human-centred design. And sometime between that and about 2011, it was user-experience design. And then it became customer-experience design. And then it became service design. And what we’ve had with this trajectory, right, is the disappearing of the human being. And so we are now being controlled by these systems. What I’ve said elsewhere is that we need to begin to break that glass, that UI glass, to see what recipes are being cooked beneath the surface that are actually characterising us and really having, in some cases, very detrimental material outcomes for some people.

Ryan: Yeah, absolutely. And that connects really well. You’ve got a great quote in your Medium essay where you call on readers to, quote, “create a future where AI supports and enhances our human experiences rather than dictating who we are.” I think everybody, or most people I know, are very nervous about AI for a lot of the reasons that you have mentioned and others. But sometimes it feels like it’s this overwhelming thing that we have no power to resist. You know, what are some ways where we can, you know, kind of take agency back or design more human-centred systems?

Kem-Laurin: Yeah, I think about this a lot. I loved being a designer. I loved talking and having these conversations like what I’m having with you. You know, I think about, you know, when I was working at BlackBerry, leading the team and running the lab, you know, the kind of research that we did was, you know, when somebody’s connecting their Bluetooth in the context of a car, what is the potential harm if they’re doing this while they’re driving? 

And there’s a lot of data that’s been somehow lost. We also lost the logbook. I carried a logbook back in the day, and that was just the way I held myself accountable. But also I was held accountable: what did you discuss at these meetings? And so, you know, just bringing this up to now, in the context of artificial intelligence, we need to allow people to correct their AI-generated identities. It is no different than when you go to ChatGPT and say generate whatever, and the person ends up with, like, 10 hands. You’re like, “Whoa, that’s not what I wanted.” And so the truth is that this is also happening in other contexts: it just misread you, it flattened you. And so also designing for humanity at large, like not everybody is the normal, right? Some people, for example, may have no legs. And so if they give their weight in an application, the AI says back to them, this is abnormal or you’re overweight, without really understanding their context. You know, these are all things that should allow for some degree of intervention and also correcting from the person using the system. You know, at the level of advocacy, I can go on and on about this. I know it’s a 10-minute podcast.

Ryan: That’s all right. I don’t restrict everybody to just 10 minutes. So please do keep going.

Kem-Laurin: Oh, okay. So, you know, there is a big problem with artificial intelligence when we think about how everything is in a black box, everything is somebody’s proprietary whatever. If you’re using my data and our collected data, we have a right, this is the commons, right? We have a right to demand that you open the black box to see what it is you’re cooking up about us, right? So in that case, you know, transparency is something we should collectively call for.

I feel passionate about this. You know, I write about this. I posted on LinkedIn this morning. There’s a lot of work that is done in data sciences about correlations. Let me tell you, like, yeah, correlations are okay within that mathematical context, but let’s also understand causation, right? And so it’s a deep conversation, I think, more than we can have on this podcast. But when we think about, you know, creating these futures, we also need to have people in positions of power that understand the technology, that understand the impact, the human impact, and to make sure that this is a talking point in future campaigns, because this is something they don’t want us to see. It’s all in a black box, and we’re not smart enough to see it, right? 

Ryan: Yeah, well, you make a great point, which is, you know, this stuff is all black box, and they’re like, oh, it’s proprietary, but they’re using our data. Like, our data is the recipe that they’re using to cook stuff up. So it’s only fair that we know how our data is being used.

Kem-Laurin: Yeah, a friend of mine works at WIPO, the World Intellectual Property Organization. She was actually part of my summit last year. And I said to her, you know, Kamani, all this data that we have, like, who owns that? And so the entire presentation was on life data. So our biometrics and everything they take from us. And we need to also, I think, tap into, and I say that all the time, nothing moves without policy. We need to actually tap into that space where we can demand that back.

Like, what the GDPR has actually done, at least with Spain, I don’t know if Germany has requested the same, is that people in Spain have said, we as citizens have the right to be forgotten by technology. It can be made so, but it has to be by design, because we do not yet have, you know, voices in unison calling for this. So for me, that’s the kind of work I see myself doing, and that I feel very passionate about.

Ryan: That’s fabulous. And I love the example of a productive policy, because again, sometimes it feels like this stuff is so overwhelming. So it helps to be able to look at a situation and say, you know, this kind of policy would probably help create the kind of AI that you’re hoping we create. I appreciate getting a sample of kind of what this future might look like.

Kem-Laurin: Let’s hope. I was recently listening to Dr. Ruha Benjamin, and she said something to the effect that the tech bros have created a utopia for themselves and a dystopia for the rest of us, but it is left up to us to create an ustopia, right? And so that is the hope I have for where we’re supposed to be going.

Ryan: Wonderful. Well, I really appreciate the work that you’re doing and the conversation that we got to have about it. Where can people find you if they want to hear more about your work, your ideas? 

Kem-Laurin: So I am on mostly Medium, LinkedIn, I guess. 

Ryan: And you mentioned a book. When is the book coming out? 

Kem-Laurin: So my second book just came out in July. It’s called Design Heuristics for Emerging Technologies: AI, Data, and Human-Centered Futures.

Ryan: Wonderful. Well, congratulations. You’re doing awesome work. I really appreciated talking with you about it. And thank you so much.

Kem-Laurin: Thank you for having me, Ryan.
