The premise of the 2008 dystopian movie Wall-E, if you can remember that far back, was that humans in the 29th century would have devolved into an inactive and grossly overweight species living on a spaceship (because they had trashed the earth), relying mostly on artificial intelligence for their subsistence.

As disturbing as that sounded back then, it may be somewhat better than where the world is actually heading.

Obesity already is a problem in much of the developed world, but the creators of Wall-E never contemplated brain rot. That may be where AI is taking us, if we’re not careful, at least according to a new study out of MIT.

If we let AI do too much of our thinking for us, we may lose the ability to think for ourselves.

The research

Researchers paid 54 people, ages 18 to 39, a nominal fee to participate in one of three groups. Each group wrote essays based on a choice of prompts drawn from SAT tests. One group had to use their brains only, with no internet sources or searches allowed. A second group was allowed to use Google's search engine to help with the writing. The third group had to use OpenAI's ChatGPT, and nothing else, to inform their writing.

All participants were hooked up to EEG headsets that monitored activity across 32 regions of the brain as they wrote.

These were not average people. They were recruited from MIT, Wellesley, Harvard, Tufts and Northeastern, all universities in the Boston area. Thirty-five of them were undergrads, 14 were graduate students, and the rest had completed master's or doctoral degrees and were working as researchers, software engineers or in postdoctoral programs.

Despite this, over several months of writing timed essays, the ChatGPT group showed the lowest brain engagement and, by the end, had resorted to copying and pasting directly from the AI. They "consistently underperformed at neural, linguistic and behavioral levels," the study said.

Perhaps you can guess how the other two groups did.

The one that used Google exhibited a higher level of brain activity than the ChatGPT group, even though both were staring at screens for help. The ones who wrote off the tops of their heads showed the strongest engagement of all, with 55% more brain activity than the ChatGPT group.

A ray of hope

Finally, the ChatGPT and brain-only groups were asked to rewrite one of their essays, but with their methods swapped: the brain-only group could use AI, and the ChatGPT group had to rely on their brains alone. The results were reversed, to some extent, with the former ChatGPT writers exhibiting more brain activity.

In other words, there is hope that AI’s effects could be reversed, at least among bright, curious and creative students at prestigious universities. But what about the rest of us? What about young school students with developing brains?

The study was released before undergoing peer review. Lead researcher Nataliya Kosmyna told Time magazine this was because the team felt the need to make the information public immediately.

The fear for children

“What really motivated me to put it out now … is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” Time quoted her saying. “Developing brains are at the highest risk.”

Well, hold onto your brain wave machines. In an executive order issued in April, President Trump established an Artificial Intelligence Task Force that, among other things, will “provide resources for K-12 AI education,” and “collaboratively develop online resources focused on teaching K-12 students foundational AI literacy and critical thinking skills.”


That could be hard if using AI robs youngsters of critical thinking skills.

To be sure, everything I’ve learned suggests AI can be helpful to humans, as long as they treat it as a tool and not a substitute for thinking. But everything I’ve learned about humans suggests we ought to worry about that.

Exhibit A could be the way social media treated the release of this study. Kosmyna told Time that many people used AI to summarize and post the findings. She knows this because she planted traps in the paper instructing large language models to offer only limited insights.

Perhaps Wall-E got it wrong, after all. We may not make it to the 29th century.
