IDEAS

Don’t be rude to chatbots — for your sake, not theirs

Boorishness is a bad mode of being, even if there are no other living creatures around.

Globe staff illustration/Adobe

The first — and thus far last — question my 9-year-old son asked ChatGPT was this: “Is yo’ mama so dumb that when she went to sleep, she put a ruler behind her pillow to see how long she slept?”

ChatGPT’s partial reply: “I’m sorry, but as an AI language model, I don’t have a ‘mama’ or the ability to feel insulted.”

ChatGPT might not have been bothered, but I was. Leaving aside my opinions about the epidemic of “yo’ mama” jokes sweeping my son’s school, what made me really uncomfortable was that of all the questions he could have asked — about “Star Wars” or Lego or Minecraft or Pokemon or literally anything at all — he chose to ask a rude one. But why was I bothered? Does it really matter if you’re rude to something that cannot be offended?

The short answer is yes. “Abusing Alexa, Siri, Replika, et al., these things coarsen us,” says Sherry Turkle, an MIT professor who studies our relationship with technology and is the author of “Reclaiming Conversation.” “Not because the chatbots have feelings, but because we do.”

To be clear, chatbots and other AI-powered conversational agents are tools. There’s a school of thought that just as you wouldn’t thank a hammer for letting you use it to pound in a nail, you don’t need to thank Siri for queuing up your “Hot Jams ’99” playlist. But that is a far too simplistic assessment: A hammer can’t talk back; Siri can. And because it can, a part of our brains registers our conversations with it — and with ChatGPT, Alexa, Replika, Pi, and the other AI tools mushrooming across the digital forest floor — as a social interaction with another person.

“Even though we know Siri isn’t a real person, we are still being triggered by social patterns that are innate,” says psychologist Pamela Rutledge, who studies how we interact with media. Innate and very strong, she says: “Social connection is a primary driver of human needs and well-being.” Because “being social is kind of our default operating system,” Rutledge says, we tend to suspect that anything might be a viable social partner and act accordingly. Some research suggests we are especially likely to do this when interacting with things that are outside our control, perhaps because we perceive them as having their own volition.

Modern interactive AI is designed to tap into our impulse to anthropomorphize, on the theory that the more humanlike the bot, the more meaningful and useful our interactions with it will be.

But here’s the thing: We’re really mean to AI-powered virtual assistants, chatbots, and sociable robots. Some researchers estimate that upwards of 54 percent of all conversations with chatbots contain profanity, often directed at the bot, and upwards of 65 percent contain sexual language. In 2019, about 30 percent of conversations with Mitsuku, an advanced chatbot now called Kuki, contained abusive, sexual, or sexually harassing language. Recent research also suggests that the more humanlike a chatbot’s responses, the more verbal abuse and sexual comments it receives.

Merel Keijsers, the University of Canterbury researcher behind that study, told a New Zealand newspaper that the findings suggest people sense that these agents are at least a little sentient. “I think the bullying or social aggression are almost testing boundaries — a way for humans to draw a more definite box around what they’re dealing with.”

But just as an aggressive interaction with a human can release a flood of negative feeling, being rude or abusive to a chatbot carries a cost for our well-being. “Anytime that we’re abusive, we are changing our emotional perspective or our emotional makeup,” says Rutledge. “It doesn’t matter that Siri’s not living. What matters is that you were prompted by a social interaction cue that your brain responds to.”

There is also some evidence that how we interact with chatbots could start to shape our interactions with our fellow humans, says Jonathan Gratch, director for virtual human research at the University of Southern California’s Institute for Creative Technologies. “A lot of our behavior is schema-driven,” Gratch says. “We learn scripts, and then we start to apply those scripts that we’ve learned with Alexa in the real world.”

It’s very likely that we will be spending more and more time with chatbots — as financial advisers, as customer service representatives, as emotional support devices, as friends, even. “As we talk with machines, we develop habits of social interaction — because now our machines present themselves as dialog partners and, more than that, as relational partners,” Turkle says. Rutledge agrees: “They’re not real, but it’s a practice opportunity.”

So if we practice rudeness, abuse, sexual harassment, and profanity, then these behaviors may become even more common in our interactions with real people. And no, it’s probably not helpful to use chatbots as punching bags, to “release” our anger on something that can’t be hurt by it. At least 40 years of research suggests that “venting” rage, even at an inanimate object, doesn’t reduce anger; it just helps us rehearse it.

That’s why teaching children how to interact with these tools is especially important. “I don’t think you necessarily have to make your kids say ‘please’ and ‘thank you’ to Siri, or Alexa, or Google,” Rutledge says. “But if you do, if all it is doing is reinforcing etiquette, good manners, and all of those things . . . those are really important skills for success in life.”

But there’s another reason to be at least civil to Siri: As much as we’re practicing social behavior when we engage with AI, so is the AI. Take it from ChatGPT, which told me, “AI assistants are often designed to learn from their interactions with humans, which means that they may adapt their responses and behavior based on the tone and language of the user. If you consistently speak to an AI assistant in a rude or aggressive manner, it’s possible that the assistant may start to respond in a similar way.”

When my son was rude to ChatGPT, first it said it didn’t have feelings that could be insulted. But then it also admonished him: “It’s important to use language that is respectful and kind towards others, even when we’re joking around.” This mild rebuke from an unexpected source (coupled with his actual mama’s disappointment) worked, at least judging by my son’s sheepish expression.

As AI bots and we humans learn to manage one another, perhaps the best approach is the simplest: Just be kind.

Linda Rodriguez McRobbie is a freelance writer in London. Her most recent book is “Ouch! Why Pain Hurts, and Why It Doesn’t Have To.”