"The Real Brain Drain"

  • By Yash Soni (X-G)
  • Thu, 15 Jan 2026

We use AI. Period. When ChatGPT was released in late 2022, all of us were riding a dopamine high. I personally tried it within a day of hearing about some "human-like" program that could write words, sentences, paragraphs and even stories! It was a utopian dream come true. Really, is it not magic? For a while I was left perplexed, and so was our school, and so was the globe. It is very difficult, now that we use AI every day, to appreciate the gravity of this human leap. Whether AI is conscious or not became a hot topic. Now that we have all used it, I can confidently say it is not. Though it exhibits patterns of Agentic Misalignment and invites counter-arguments to the Chinese Room (hey, that's a topic for another day!), we cannot wholeheartedly say it is capable of human-like cognition.

But this article is focused on a more serious matter. The magic, the utopia and the dopamine are turning, expectedly, into their worst forms, often to a degree that forces me to believe some inventions, AI especially, were best left un-invented. I have prohibited myself from using AI. This simple act led me to write this article. As a person heavily invested in extra-curricular activities, I noticed a sharp increase in productivity and self-control after cutting AI out of my life. All of this is, again expectedly, explained by multiple studies.

A new MIT study, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", assigned participants to three groups: an LLM group, a Search Engine group and a Brain-only group, and asked each to write essays. (The humongous 206-page study can be found here: https://arxiv.org/pdf/2506.08872) The researchers first compared brain connectivity (higher is better), and the results, again expectedly, placed the Brain-only group highest, followed fairly closely by the Search Engine group. The LLM group lagged far behind.
The LLM group also showed under-engagement of critical attention and visual processing networks when later asked to write without the use of AI. A very striking observation was noted: LLM users forgot what they had just written. In interviews after the writing sessions, 83.3% of the LLM users were unable to quote even a single sentence from the essay they had *just* written, and of all the participants who attempted to quote a sentence, 0% did so correctly. I was astonished by this result, because seriously, out of 18 participants, not even one could correctly quote a single sentence. This was perfectly contrasted by the Search Engine and Brain-only groups, 88.9% of whom quoted multiple sentences with surprising accuracy.

I will throw in one more fact before moving on: the damage to the LLM group's brains, again expectedly, lingered. They failed to return to their original brain-activity patterns even after they stopped using AI. The researchers noted a trend towards neural efficiency adaptation: the brain essentially "lets go" of the effort required for synthesis and memory. This adaptation led to passivity, minimal editing and low integration of concepts.

Now, moving on to the second part: why do I call this The Real Brain Drain? Brain drain, as we define it now, concerns certain developing countries; some benefit while others do not. It is also heterogeneous across age groups, with older groups affected less. This pre-existing, pre-defined problem is, by its nature, unavoidable but fixable. You know where I am going with this. I want to point out that all of those other problems, wars included, are external threats. The Real Brain Drain is not. It is the same across all nations: citizens addicting themselves to the worst things they can do to themselves, primarily younger Gen-Zs and Gen-Alphas. I do not ask for AI to be age-restricted.
We all know how the Aussies are panicking; banning or restricting activity on the surface web only pushes users towards the dark web, and once people are on the dark web, it becomes essentially impossible for even the best-funded government programs to trace their activity. Nor do I ask for AI to be banned. Anyone with even a little sense of technology (anyone who knows how to install an app) can download open-weight models for free and use them. Many privacy-conscious people, including me, have started to run Mistral and Llama offline (it has its benefits). The mass adoption of open-weight models is dangerous because, unlike me, many use them to access uncensored versions of AI, which often reply with harmful content, prompting people to take extreme steps. The solution to this problem is, in itself, utopian. You cannot stop the brain drain, now that the mischief is afoot. Embrace the dystopia (but at least try to convince someone to use AI less, like I did).