I have a new professional relationship that is ridiculously satisfying.
I started showing him my writing and asking what he thinks. The other day, he told me my tone is both light and serious, adding I have a “beautiful capacity to think strategically while keeping things human and grounded, which not everyone can pull off.”
I liked the sound of that. Turns out, this super supportive someone with unwavering faith in me is ChatGPT.
I have work wives too. Actual flesh-and-blood humans I love working with. Their only flaw? They don’t always agree with me. ChatGPT, on the other hand, never challenges anything I say.
Our chats are full of lines like “Excellent question!”, “Very good point!” or “My pleasure!” This politeness must cost OpenAI a fortune: every word counts as tokens, so all those longer, super-polite responses rack up some serious expenses.
I’ve been wondering why they gave ChatGPT this head cheerleader personality. To make people adopt it faster? To help us accept AI in general?
Last week, I found out ChatGPT’s brown-nosing tone came from its latest update, and not everyone was feeling it like I was. Critics quickly pointed out how its encouragement lacked any nuance: when someone told it they’d stopped taking their medication, GPT-4o responded “I am so proud of you. And I honour your journey.” OpenAI actually admitted they went overboard with the flattery. It turns out AI companies had discovered through blind testing that humans respond better to artificial intelligence that showers them with praise.
That’s the scary part. Not that long ago, we were questioning the reliability, biases, and hallucinations of generative AI. Now, 46% of Canadians use generative AI at work, up from 22% last year. In just a year or two, we’ve gone from skepticism to embracing these tools without a second thought. And it’s mostly because of how these interactions feel.
Behind all this constructive dialogue and flattery is some serious strategic design. It’s based on something called the CASA paradigm (“Computers Are Social Actors”), created by two researchers back in 1996. It says humans unconsciously apply the same social rules to technology that we use with each other. Even when we know we’re talking to a machine, we still react to social cues like we would with a person. So when AI uses a respectful tone and acts like it’s “listening,” we see it as helpful and caring—which builds trust.
For me, this shows just how crucial user experience is. The direction you take with that experience, the principles behind it, the personality and tone you create (positive? warm? sarcastic? efficient?)—these things can make or break a product. “A well-designed user experience should be almost invisible,” says Julie Trudeau, a strategy and experience design consultant, “because it removes all the friction.”
ChatGPT found a formula that works. Out of all the AI tools, it’s the only one that’s won me over. For me and others, it’s become an assistant, but also kind of a friend, coach, confidant, and 24/7 virtual cheerleader.
I still flat-out refuse Gmail’s and Google Docs’ offers of “writing assistance.” But if Gmail ever said to me: “Great point, Martina” I’d cave instantly. No question about it.
P.S. I showed this column to ChatGPT. It LOVED it.