by Miles Goodale
“How can I tell what I think till I see what I say?” – E.M. Forster
I began working at the Writing Center in 2023, my sophomore year of college, and since then I have noticed a change in the conversations about AI and how students use it. With the introduction of AI Overviews to every Google search and the release of GPT-4, fewer people have come to me for help, and those who do often want me to provide them with the “right answers.” Although this problem isn’t anything new for writing centers, these types of sessions seem to have only increased. It isn’t just me who senses a change: my coworkers, professors, and peers have all noted a shift in attitudes around writing.
Those of us immersed in academia view writing as a way to think, an extension of our brains that allows us to work through a subject. With the increased adoption of AI, students no longer have to do as much mental work for their academic writing. Many students see reducing the mental load of writing as a good thing, but I can’t help but wonder: is AI harming our critical thinking skills?

Scientific Backing
When we use AI to generate ideas for a paper or find evidence to support claims, we are using AI like a crutch. In the short term, the crutch of AI will provide relief and make life easier, but relying on the crutch for too long can cause our muscles to deteriorate, making it more difficult to get through daily life without it. The idea of AI acting as a crutch isn’t just anecdotal: researchers have found that when students use AI (like ChatGPT) to write an essay, their brain activity is significantly lower than that of students who used only search engines (like Google) or no tools at all (Kosmyna).* These students were also generally unable to quote from their own essays correctly, or at all (Kosmyna 137).
These findings suggest that using AI on the front end of the essay writing process may hinder cognitive processing, retention, and engagement with written material (Kosmyna 138). Additionally, the ease of use that AI creates diminishes users’ inclination to critically evaluate the AI’s output, which is shaped by the priorities of the AI systems’ shareholders and is not regulated for accuracy (Kosmyna 143).
However, when AI is integrated on the back end of essay writing (editing and revision), users show brain activity similar to that of those who used no tools at all (Kosmyna 139). As a consultant, I see this correlation not as an indicator of the efficacy of AI in revision, but rather as evidence that revising with a second reader, whether a person or an artificial assistant, can help strengthen critical thinking.
*While the study has yet to be peer reviewed and its findings should be taken cautiously, the researchers felt that, given the exceptionally fast expansion of AI, the study should be released as soon as possible to inform policy decisions regarding AI use in schools (Chow).
Ethical and Environmental Considerations
As an Environmental Studies major with a special interest in Generative AI, I fear that not enough people understand the impact AI has on our environment. The three main concerns are water consumption, energy use, and carbon emissions (Zewe). Data centers, which run AI systems and internet applications like Gmail, require exorbitant amounts of water to cool their servers (Ren). These servers run 24/7 and generate a lot of heat: think of how your phone or laptop warms up after heavy use, then imagine that multiplied across thousands of hot machines in one building. That’s a lot of heat.
The most common way to cool data centers is with water; however, this water has to be free of impurities, meaning data centers are competing for the same water we drink. In drought-prone places, this creates a serious problem. In The Dalles, Oregon, after the city lost a lawsuit seeking to keep the consumption figures for its three Google data centers concealed from the public, it was revealed that those data centers consumed a third of the city’s publicly available water (Berreby).
Data centers are disproportionately placed in communities that do not have the ability to fight these large conglomerates. These communities are often rural, made up of low-income and minority populations, the people who have the least political power and fewest resources (Pam). In Boxtown, a predominantly Black neighborhood in south Memphis, Tennessee, Elon Musk is running unpermitted methane turbines to power the supercomputer that hosts Grok. These turbines release formaldehyde and other concerning pollutants, threatening to worsen industrial pollution that residents have dealt with for decades (Hilt). This isn’t an isolated issue. Boxtown is a microcosm of what is happening across the nation, a direct consequence of the rapid adoption of Generative AI.
The impacts of Generative AI may feel far removed from the everyday user, but concerns about privacy in this AI age are something every user can relate to. While the privacy risks posed by Generative AI are not new, the scale of unrestrained data collection is hitherto unseen. The main concerns are Generative AI tools taking your data (text, images, and other content you post online or send to an AI system) and using it to train themselves, without notice or permission, and AI’s capacity to memorize personal and relational details about people, which can then be used for identity theft, fraud, or scams. These privacy issues are just the tip of the iceberg (Miller).
So, What Now?
It is my opinion, based on personal experience and research, that students who use AI to complete their work aren’t actually learning anything. As people rely more and more on AI systems to shape their thought processes, they will stop thinking for themselves. Students should therefore make an effort to resist overreliance on AI (completely cutting AI out of your life is becoming impossible as it is integrated into everything we do).
I’m not advocating for people to stop using AI, but instead to gain an awareness of their relationship to these models in order to make better-informed choices. I believe the least harmful way for a student to use AI is in the revision process, but, as Kosmyna suggests, the job AI performs as a revision aid could be accomplished with the help of a peer, teacher, or writing center consultant. Ultimately, though, whether you use AI is your choice.
Works Cited
Berreby, David. “As Use of AI Soars, So Does the Energy and Water It Requires.” Yale Environment 360, Yale School of the Environment, 6 February 2024,
https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions.
Chow, Andrew. “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study.” TIME, TIME USA, LLC., 23 June 2025,
https://time.com/7295195/ai-chatgpt-google-learning-school/.
Hilt, Eric. “Elon Musk’s xAI facility is using gas turbines in South Memphis, we’re taking action.” Southern Environmental Law Center, 17 June 2025,
https://www.selc.org/news/resistance-against-elon-musks-xai-facility-in-south-memphis-gets-stronger/.
Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” arXiv, arXiv e-prints, 10 June 2025,
https://arxiv.org/pdf/2506.08872v1.
Miller, Katharine. “Privacy in an AI Era: How Do We Protect Our Personal Information?” Stanford University Human-Centered Artificial Intelligence, Stanford University, 18 March 2024,
https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information.
Pam, Emari. “How AI is Fueling a New Wave of Environmental Racism.” Feminist Majority Foundation, 29 July 2025,
https://feminist.org/news/how-ai-is-fueling-a-new-wave-of-environmental-racism/.
Ren, Shaolei. “How much water does AI consume? The public deserves to know.” OECD.AI, OECD, 30 November 2023,
https://oecd.ai/en/wonk/how-much-water-does-ai-consume.
Zewe, Adam. “Explained: Generative AI.” MIT News, Massachusetts Institute of Technology, 9 November 2023,
https://news.mit.edu/2023/explained-generative-ai-1109.