In the last several years, artificial intelligence has gone from being a fringe, vaguely science-fiction-y possibility to a tool that can be used for virtually any task involving the synthesis of information. And frankly, synthesis isn’t even a big enough word to encompass how this rapidly advancing and adapting technology can repackage ideas, information and images.

Technically speaking, generative AI apps don’t literally “think.” They generate responses by recombining patterns learned from vast amounts of source material, and they do it, given a “prompt,” almost instantly. It reminds me of Trinity and Neo in The Matrix when they need to make a quick escape and fortuitously come across a helicopter.

“Can you fly that thing?” Neo asks Trinity.

“Not yet,” she replies. But then she dials up “mission control,” as it were, and asks him to download that information into her brain. “Tank, I need a pilot program for a B212 helicopter. … Hurry.”

Five seconds later, the information is uploaded into Trinity’s mind. “Let’s go.”

That was science fiction in 1999. And while AI won’t help you fly a helicopter in five seconds just yet, what was once pure fantasy now seems a lot closer to reality.

Is Thinking an “Endangered Species”?

I titled this blog “A Different Kind of Existential Fear About AI.” The main version of that fear, of course, is of the Terminator/Matrix variety, where the machines rise up, hit a point of singularity (creative, independent consciousness) and realize they don’t want us around anymore.

As Agent Smith told Morpheus in The Matrix, “Human beings are a disease, a cancer of this planet. You are a plague, and we are the cure.”

Now, frankly, if the computers rise up and take us out, well, OK. There’s probably not much I can do about that existential threat. So I’m not going to waste a second thinking about it.

But there’s another threat here worth considering: If AI can instantly and accurately synthesize the information or ideas or images we need, why would we keep doing it the slow, old-fashioned way?

Not surprisingly, this reality is being embraced by people who would rather have an AI app do their thinking for them—especially kids growing up with access to such technology from, basically, birth. An August 2024 survey by the Digital Education Council found that 86% of students reported using AI in their studies, with 24% saying they used it “daily.”

My 18-year-old son put it to me this way recently: “They used to tell us you need to learn how to do math because you won’t always have a calculator with you.” He just laughed. And he’s not wrong. We all have our “calculators” with us constantly—that being our phones. And today’s calculators can do a lot more than multiplication or solving complex equations.

So the existential threat I’m talking about here is to thinking itself. Thinking is hard sometimes, at least when it’s applied to solving a complex problem. And when you’re in high school or college, a research paper certainly feels like a complex problem. Why go through all that pain and effort if you don’t have to?

However, if we never have to think about anything that matters, where does that leave us as a culture, as a species? What gets lost along the way? What are the outcomes and unintended consequences of ceding more and more of our “thinking” to machines?

Look, I know I sound like an old man shaking his fist at the sky. And, yes, fears about technology have provoked Luddite pushback time and again throughout history. I’m well aware that AI, like all revolutionary technology, may dramatically accelerate data-heavy work, such as cancer research. It’s not all bad, and some of it may generate truly life-transforming outcomes.

But I worry, still, for my kids and for those growing up in a world in which the answers are so readily available at the click of a “prompt.” What happens when our capacity and ability to do complex intellectual synthesis atrophies? I think it’s a question worth pondering.
