I like ChatGPT just fine. We’ve had some very pleasant chats, it and I. It’s unfailingly polite and encouraging. It knows ever-so-much about everything. And if it’s sometimes wrong … well, hey, it’s less than 3 years old. I didn’t know everything when I was 3, either.
But ChatGPT has a problem: It’s a vampire.
Oh, not in the traditional sense, perhaps. It’s not wearing black capes. It’s completely indifferent to garlic. And as far as I know, using it will not transform me into an undead creature of the night with a penchant for eveningwear.
But make no mistake: ChatGPT and other AI tools are using us just as much as we use them. They tap into our words, learning about us with each keystroke. They suck in all that information and use it to not just tell us what we need to know, but communicate it clearly, thoughtfully and with a hint of wit.
And as artificial intelligence seemingly becomes more “human,” we seemingly become less.
A bold statement? Perhaps. But new research suggests it’s truer than we’d like it to be. The Atlantic reports on an Australian study in which 320 people were asked to write a sofa advertisement. The participants were then shown what ChatGPT had written when given the same prompt and asked to rewrite their own ad copy.
The results, according to The Atlantic, were telling. ChatGPT tends to be more verbose. And after reading what the chatbot wrote, the folks in the study followed suit, nearly tripling their original word count.
It’s a small study but suggestive: ChatGPT used to follow our lead. Now, it seems, we’re following its lead.
Other studies suggest that ChatGPT and other chatbots may sap away our critical thinking abilities. In a story titled “ChatGPT Is Making Us Weird,” Business Insider reports that chatbots are twisting how we interact with them—and with each other.
“As large language models become fixtures of our digital lives, the ways we engage with them reveal a society in flux, where machines aren’t only mimicking human interaction but quietly altering the expectations and norms that govern it,” writes Katherine Tangalakis-Lippert in another Business Insider article.
That may be especially true in the realm of language.
I Detect a Problem in Tone
I can’t write an email anymore without a cheerful AI prompt giving me all sorts of advice on how to write it better—or offering to rewrite it for me. When I post this blog, WordPress will tell me how to improve it for SEO and readability.
I’ve been writing most of my life. But according to the AI tools integrated into the technology around me, I’ve been doing it wrong all this time.
For me, that’s a little terrifying. Not because I think I’m a great writer, but because … I am a writer. It’s not just what I do for a living; it’s a big part of who I am.
Just for kicks, I took the first paragraph in this blog and asked ChatGPT to rewrite it. Here’s what it came up with:
I’m quite fond of ChatGPT. We’ve had some genuinely enjoyable conversations, the two of us. It’s always courteous and uplifting, and it seems to know an impressive amount about just about everything. Sure, it makes mistakes now and then—but come on, it’s not even three years old yet. I didn’t have all the answers at that age either!
Not bad. Some might say it’s better than what I wrote. But it’s not me.
But then I asked ChatGPT this: “Can you write that in the style of Paul Asay?”
And because ChatGPT has plenty of examples of my writing to take from, it complied:
I’ve come to appreciate ChatGPT quite a bit. We’ve shared some surprisingly enjoyable conversations—me tapping on the keyboard, it responding with all the poise and patience of a seasoned butler who’s read every book in the library. It’s courteous, encouraging, and seems to have a fact (or five) ready for just about any topic I toss its way. Sure, it stumbles now and then. But honestly, who didn’t at age three? I certainly wasn’t quoting Shakespeare or solving math problems at that point—unless stacking blocks counts as calculus.
Stacking blocks. That’s a nice line. Wish I’d thought of it. And according to ChatGPT, I should’ve.
Thanks a Lot for the Existential Crisis, ChatGPT
When I was 8 years old, I took my first stab at writing a book. It was, unquestionably, horrible. But it was me. Every word reflected my awkward, weird, 8-year-old brain. And looking back, that also made it kinda great.
As I got older, I grew ever more self-conscious. I was painfully shy and hopelessly uncool. And throughout my adolescence, I leaned on my writing to get me by. I rarely had the courage to actually speak to anyone. But I could write a column in the school newspaper, and that was somehow better. I couldn’t look that pretty girl in algebra in the eye. But I could pass her a funny note in class and make her smile.
For me, writing has always been my preferred (and sometimes only) way to communicate. Even today, when I talk with people, I feel awkward and shy, like I’m always missing the mark. But on paper? In pixels? I’ve got a better shot at making myself understood. In face-to-face communication, I have a hard time expressing thoughts and feelings if they hit too close to home. On paper, I can share more deeply, more honestly. When I write, I’m me.
My writing’s not perfect. Far, far from it. But because I’ve relied on it for so long, it’s become a precious part of me, even in its imperfections. This blog—a blog you’ll spend 10 minutes reading and forget not long after—feels like it carries more of my essence than my nose or left foot. I could lose an arm and still be me. But if I lost my writing? Who would I be?
Still a Child of God
Listen, I get it. I know that for a lot of people, writing lands somewhere on the spectrum between “chore” and “anxiety-inducing terror.” Plenty of you love that chatbots can take emails to your boss or grandma and turn them into something that sounds better. ChatGPT can be a fantastic tool. But you, too, have something in your toolbox that AI is hoping to suck out of you, learn from and spit back out as something “better.” Your songs. Your artistic doodles. Your strategic insight.
Those creative endeavors aren’t just products. They are us. They are part of who God made us to be. And as lovely a conversationalist as ChatGPT may be, it bothers me to think that it might write more like Paul Asay than I do.
ChatGPT and our growing world of AI can be creative vampires. They can take a good chunk of what makes us us and make it their own.
But they can’t go all the way. They can’t become us. And we, in turn, shouldn’t try to become them.
What makes us who we are? What makes us human? Why does God treasure us above all His glorious creations?
Perhaps it’s not in our gifts. Perhaps it’s in our love—our love for Him and for one another, expressed sincerely but imperfectly.
For a mother, a crayon-scrawled card from her daughter feels far more precious than a glorious work of AI art. A father would treasure a twisted, handmade birdhouse hammered together by his son more than one churned out by a machine. We should remember that in God’s calculus, we’re not powerful executives or promising students or wonderfully gifted creators (even if those things are all true). We are 3-year-olds, stacking blocks. Helpless, sometimes clueless—but full of wonder and, hopefully, full of love. Whatever we do, we do for Him.
And that’s something that ChatGPT can never say.