I was 11 years old when Texas Instruments released the TI-2500, one of the first pocket calculators. While I remember one around the house — it might still be in or on my father’s desk — it wasn’t mine. My father was generous but he wouldn’t have shelled out $150 (a lot of money in those days) to save me some mental effort.

That wasn’t an issue because my teachers banned the use of calculators at home and at school. They worried that relying on those devices would undermine the development of our own ability to do calculations, even relatively menial ones. Since I was a goody two-shoes and had a grandfather who delighted in teaching me how to do those operations in my head, the ban wasn’t a big deal.

That old concern is resurfacing with the proliferation of artificial intelligence apps and bots. A growing body of evidence affirms what should be obvious: The brain is a muscle and, like any other muscle, it atrophies with disuse. The spread of tools that do our thinking for us is, naturally enough, weakening our ability to think.

It’s a problem with a long history. Most important inventions that facilitated the thought process have raised objections (almost as many complaints as those concerning the music of the previous generation). Socrates, for example, denigrated writing; in the 15th century, the German Benedictine abbot Trithemius dismissed printing, as did his contemporary Filippo de Strata, a Benedictine monk whose treatise against the new technology was bluntly titled “Polemic against Printing.”

More than 40 years ago, Lisanne Bainbridge, a cognitive psychologist, wrote “Ironies of Automation,” the seminal paper on the ways that technology — she focused on automation — undercuts our ability to deal with problems. Automated systems may be labor-saving, but they can hurt as much as help. When she assessed the impact of those devices and processes on our brains, she realized that “efficient retrieval of knowledge from long-term memory depends on frequency of use (consider any subject which you passed an examination in at school and have not thought about since).”

She recognized that increased reliance on machines meant that their operators would often be even more lost when something went wrong. Their skills shifted to monitoring devices, and they subsequently lost the ability to do what the machines were doing — their old jobs. Pilots, for example, were becoming more adept at interacting with the technology that ran the cockpit than at actually flying the plane.

The tensions that her work exposed have since increased. Advances in computing and the downward spiral of the cost curve have put once-unthinkable capabilities in the hands (and brains) of ordinary citizens, and the impacts have been no less formidable or disturbing.

Over a decade ago, researchers identified “the Google effect”: the tendency, encouraged by digital search engines, to forget information that is readily available online. This is sometimes called “digital dementia,” but that term usually refers to a more general decline in cognitive abilities caused by the use of digital technology.

Mathura Shanmugasundaram of Harvard and colleague Arunkumar Tamilarasu detailed in November 2023 a depressingly long list of ailments created by overreliance on technology. The disabilities include memory loss, attention deficit (the increasing ease of distraction or the corresponding difficulty of sustaining attention), reduced ability to communicate and impaired decision-making. Of course, the list of potential pathologies is far more extensive if we want to take in the entire range of potential harms, such as depression, isolation and others. Here, though, I am focusing on harms related to cognitive function.

Not surprisingly, more time spent in cyberspace reduces the ability to navigate the real world. The harms range from a diminished ability to recognize facial emotions to decreased empathy. Internet addiction impairs white matter fibers that connect parts of the brain involved in the creation of emotions and cognitive control. Researchers have linked use of GPS systems to sharp declines in hippocampal-dependent spatial memory and there is evidence that even taking digital photos can decrease accuracy of recall for details of images.

Algorithms are rightly criticized for doing their job too well, tailoring users’ options in ways that narrow perspectives and promote confirmation biases.

Artificial intelligence has accelerated that process. It seems like every week new work highlights the dangers that AI poses to human brain function. Even though the technology is in its adolescence, psychologists have already detected a decrease in critical thinking, analytical thinking, decision-making and problem-solving abilities as a result of AI use.

For example, a paper that will be published next month by researchers affiliated with Microsoft found that the greater a user’s confidence in generative AI (GenAI), the lower that person’s tendency toward critical thinking. Rising confidence in the technology also encourages over-reliance on it and diminishes independent problem-solving.

That complemented the dismal conclusions of scientists at the University of Edinburgh and others also affiliated with Microsoft, who last year found that AI created many of the same problems identified by Dr. Bainbridge some 40 years ago. “In the context of GenAI, users’ roles have shifted from producing output to evaluating it, often with little contextual information and situational awareness.”

This “shift from production to evaluation” means that workers must now assess far more information than before — because AI works so much faster — with less explanation and understanding of outcomes. This results in “complacency, over-reliance on systems and increased errors.”

The supreme irony is that workers need to use their cognitive abilities in low-stakes scenarios — the ones for which AI is most suited — in order to maintain their skills and be ready to evaluate AI output in high-stakes situations.

In sum, “excessive dependence on AI without concurrent cultivation of fundamental cognitive skills may lead to underutilization and subsequent loss of cognitive abilities.” To put it more simply, the brain is like any other muscle — use it or lose it.

This might sound like another grumpy old man rant, but John Burn-Murdoch, a Financial Times columnist, notes that “across a range of tests, the average person’s ability to reason and solve novel problems appears to have peaked in the early 2010s and has been declining ever since.” His initial data point was the Organisation for Economic Co-operation and Development (OECD) international benchmark tests in reading, mathematics and science for 15-year-olds, but he quickly found similar results in other assessments.

The grumpy old man would complain that kids don’t read books anymore, but Burn-Murdoch wonders if the problem isn’t a decline in reading so much as “a broader erosion in human capacity for mental focus and application.”

Sarah O'Connor, another FT writer, bemoaned the dwindling cognitive skills revealed by another OECD assessment, which tested the literacy, numeracy and problem-solving skills of 160,000 working-age adults in 31 countries and economies. Across the entire survey set, proficiency in literacy improved significantly in only two countries (Finland and Denmark), remained stable in 14 and declined significantly in 11. (Japan was one of the best performers, with high skills in every group.)

Both Burn-Murdoch and O'Connor warn that the problem isn’t a function of education but instead reflects a fundamental change in our relationship with information. There’s just too much of it to process effectively. The tsunami of content demands new coping mechanisms and behaviors. There isn’t time for critical analysis, so passive consumption prevails.

This shouldn’t come as a surprise. The development of modern society and the state structure that sits atop it have been driven by the need for increasing specialization to address the swelling flow of information. That expertise is now being assumed by technology, and what were once considered labor-saving devices are now shouldering mental loads as well.

That would be OK if we maintained the capacity to separate fact from fiction and were able to draw lines distinguishing right from wrong. The problem is that, as this evolution has unfolded, we have given up on the critical faculties needed to do so. The real question, I guess, is whether we will welcome that abdication or fight it.

Brad Glosserman is deputy director of and visiting professor at the Center for Rule-Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. His new book on the geopolitics of high-tech is expected to come out from Hurst Publishers this fall.