Photo caption: If you trust AI to decide which car to buy, Leah Eichler wonders, at what point does it become a Magic 8 Ball to shake for all major life choices? (Michael Dwyer/The Associated Press)

When theologian Peter L. Berger coined the term “cognitive surrender” in the early 1990s, he used it to describe people of faith who bend their beliefs in the face of a multicultural, pluralistic society. They were surrendering their religious identity to get along with their neighbours.

Thirty years later, the term has made a comeback – this time capturing a more sinister surrendering of our ability to, well, think. If Descartes was right that “I think, therefore I am” expresses an existential truth, we should brace ourselves for the widespread identity crisis that’s coming.

The term “cognitive surrender” was resurrected in a Wharton study published in February called “Thinking – Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.” Participants were asked to answer questions requiring logic and reasoning; those relying only on their own brains got them right about 50 per cent of the time. When test subjects were offered the chance to ask ChatGPT for help, they took it quite frequently – but the researchers had secretly manipulated the chatbot’s back end to introduce inaccuracies. Participants largely went along with ChatGPT’s answers, mistakes and all, with little skepticism.

“The reality is, AI is very good at a lot of things across a lot of domains, so you can outsource thinking pretty much in any task that you want,” said Steven Shaw, the Canadian-born co-author of the paper. “Is that a good idea? Well, that depends.”

The “depends” here contains volumes. What’s really at risk if we don’t think through the right answer ourselves while taking a test? You’re not less of a person for placing your trust in a technology that has access to countless libraries to answer a question. But the risk comes when the delegation begins to supplant your individual decision making. If you trust AI to decide which car to buy, or if you should break up with your boyfriend, at what point does it become a Magic 8 Ball you shake for all major life choices?

Cognitive surrender comes after a long history of humans engaging in cognitive offloading. Imagine driving without your GPS; it’s almost unthinkable nowadays. We still have our hands on the wheel, but we’ve handed off the mental energy required to navigate streets we often know very well.

The problem with this type of surrender is that it happens so gradually it’s easy to miss when it becomes total. You start off having AI help spot your grammatical errors and before you know it, you’re accepting its offer to rewrite your assignment.

“I kind of look at this moment like our calculator moment,” said Ali Etemad, associate professor of electrical and computer engineering at Queen’s University. Decades ago, when the calculator became common, people were concerned that students wouldn’t ever know how to perform mathematical functions. In some ways, he says, this did occur – leaving open the question of what functions AI will replace.

“In the same way that I probably can’t do the square root of 282, if we extrapolate into the future, will there come a time when we can’t do any reasoning without AI?”

AI allows us to bypass what behavioural scientists call “System 2” thinking – the process of slow, deliberate reasoning that is the hallmark of critical thinking. That skill, like mastering the piano, requires constant use to maintain. And only three and a half years after ChatGPT was released, there’s already evidence of our collective loss of skills.

A study published last year in The Lancet showed that endoscopists suffered “deskilling” after repeated exposure to AI-assisted endoscopy procedures. The AI-assisted procedures often had better outcomes; that legitimizes their use in the short term, especially if you’re a patient, but in the long term, what will it mean for a doctor’s confidence or ability to mentor new residents?

It’s not only medicine. Anthropic will soon release AI tools for financial markets, with the ability to assemble pitches, prepare for meetings and read earnings statements – potentially replacing junior roles. At what point are we just the meatbags pulling the levers? Will there even be any levers?

Even more frightening is the cognitive surrendering of soft skills that require human experience. I have intelligent friends who cross-reference their therapist’s advice with Claude, with the goal of one day replacing their human mental health worker. Others I know have considered making an AI friend in order to have someone new in their life to complain to.

In other words, the trust we once put in friends and mental health practitioners, or even rabbis, imams and priests, to help us navigate difficult decisions in our lives is now, for some, being surrendered to AI.

If large language models are really the synthesis of a broad collective’s thoughts and ideas, is it really such a stretch to ask them to solve all our problems? What if by surrendering more of our thinking processes, we are not only making AI more human, but making ourselves less so?

“It’s like that cliché – be careful who you surround yourself with because you are an amalgamation of the five people you spend the most time with,” said Victoria Hetherington, author of the book The Friend Machine: On the Trail of AI Companionship. “If AI is one of those people, who are we becoming?”

Cue the coming mass identity crisis. Whatever you do, just don’t ask Claude to solve it for you.

Leah Eichler, a self-proclaimed word nerd, writes regularly about our evolving use of language.
