ChatGPT: Cheating on Homework, Reimagined
Annan Nippita ’29
The school year is underway, and homework is back on everyone’s minds. And with the workload our school assigns, AI use may well be on the rise again. Its output has just gotten so good at going unnoticed that it’s hard to tell when it’s being used.
We’ve all had that one pesky essay due the next day. Or that book that just didn’t want to read itself. The reading journals for said book that just wouldn’t write themselves. That language test tomorrow.
We’ve all had that moment where we thought, “Oh, what the heck. I’ll use AI. Just this once.”
These days, it’s difficult to find a single person who hasn’t used AI. Maybe you just wanted to test its capabilities. Maybe you relied on it to write that one paper. Maybe you didn’t mean to use it at all, but absentmindedly read the AI summary at the top of a Google search at some point.
Of course, you can assume that we here at the Bardvark don’t use AI. “If I didn’t write it, you shouldn’t bother reading it.” That is the kind of newspaper I want to be writing for.
First things first, I have to clarify something. AI, actual Artificial Intelligence, sounds pretty cool. Sadly, it is nothing like the ‘AI’ any one of us may or may not have used for schoolwork. Actual Artificial Intelligence, known as “Artificial General Intelligence,” or “AGI” for those in the know, would be a computer that can genuinely think the way a human brain does. We probably won’t see that for years, if not decades, to come. What we’re using in our everyday lives, commonly referred to as AI, is actually just a Large Language Model (LLM).
This is where the troubles begin. LLMs are nothing more than big probability models branded as ‘intelligence.’ They aren’t intelligent in the least. People like Sam Altman, co-founder and CEO of OpenAI, claim they are because they can mimic speech patterns. The only thing LLMs are actually good at is writing things that sound like what has already been written countless times before. They are the definition of what it means to plagiarize others’ work and pass it off as one’s own.
ChatGPT, probably the most well known of the bunch, is nothing more than a computer program that guesses the next word in its answer using data scraped from every nook and cranny of the internet. Most of that scraping happens without permission, and the whole enterprise promotes cheating on homework assignments designed to help students learn, all while consuming immense amounts of energy.
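To give a sense of what “guessing the next word” actually means, here is a toy version of the idea in Python. To be clear, this is a deliberately tiny sketch, not how ChatGPT is really built; the training sentence and names like guess_next are made up for the example.

```
# A toy "next word guesser": count which word tends to follow which,
# then always pick the statistically most likely one. Real LLMs do
# something vastly more elaborate, but the spirit is the same.
from collections import Counter, defaultdict

training_text = "the dog ate the homework and the dog slept"

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def guess_next(word):
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(guess_next("the"))  # prints "dog", because "dog" follows "the" most often
```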
ChatGPT uses about 0.34 watt-hours of energy for every question it is asked, according to a blog post by Sam Altman from June 10th, 2025. That is about 2% of a typical phone battery (which holds roughly 15 watt-hours) per question.
Now imagine that ChatGPT gets asked 2.5 billion questions like this every single day, according to an article by TechCrunch from July 21st, 2025. That’s an estimated 850 megawatt-hours per day, or enough to power the average home in the United States for about 80 years.
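If you want to check that math yourself, here it is as a few lines of Python. The per-query and per-day figures come from the sources above; the average-home figure (roughly 10,500 kilowatt-hours a year) is my own ballpark for a typical U.S. household.

```
# Back-of-the-envelope check of the energy numbers above.
energy_per_query_wh = 0.34        # watt-hours per question (Altman's blog post)
queries_per_day = 2.5e9           # questions per day (TechCrunch)

daily_energy_mwh = energy_per_query_wh * queries_per_day / 1e6
print(daily_energy_mwh)           # 850.0 megawatt-hours per day

home_usage_mwh_per_year = 10.5    # assumed average U.S. home usage per year
print(daily_energy_mwh / home_usage_mwh_per_year)  # about 81 years of home power
```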
This is only one day, and only one LLM. ChatGPT is no longer the only Large Language Model out there. There’s also Google’s Gemini, Meta’s Llama, Anthropic’s Claude, xAI’s Grok, and several DeepSeek models. Add them all together, and LLMs use enough power that electricity prices are rising under the energy demand from data centers.
And there’s more: Sam Altman’s blog post didn’t just mention energy consumption. ChatGPT also uses water to cool its data centers.
Again, that’s not much: only about one fifteenth of a teaspoon, or 0.000085 gallons, per query. But it adds up. One fifteenth of a teaspoon per query times 2.5 billion queries a day comes out to about 212,500 gallons of water per day. That is about a third of an Olympic-sized swimming pool, every day.
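The same quick check works for the water, with one assumption of mine thrown in for scale: an Olympic pool holds roughly 660,000 gallons.

```
# Back-of-the-envelope check of the water numbers above.
water_per_query_gal = 0.000085    # gallons per question (Altman's blog post)
queries_per_day = 2.5e9           # questions per day (TechCrunch)

daily_water_gal = water_per_query_gal * queries_per_day
print(daily_water_gal)            # 212,500 gallons per day
print(daily_water_gal / 660_000)  # about 0.32 Olympic pools per day
```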
Let’s be honest. 850 megawatt-hours of energy and 212,500 gallons of water per day would be a small price to pay if in exchange we could accomplish big goals: a world without climate change, overpopulation, or food shortages, or, heck, maybe even world peace. Doing any or all of those things would offset the environmental impact. But that’s not what LLMs are doing.
All LLMs seem to do these days is add extra words to otherwise perfectly fine internet searches and function as plagiarism machines that encourage cheating. Not only are they not being used productively, but the writing they do produce is riddled with mistakes.
ChatGPT is a computer program with all the knowledge of the human species but no way to know what any of it means. Imagine reading all of Reddit in a foreign language and then repeating back the statistically most likely answer, based only on which of the words you read overlap with the words in the question you were asked. Would you trust yourself to say the right thing? And if not, how could you ever trust someone else, much less something else, to give you the right answers?
All LLMs can do is make assumptions. They can help you on your history paper, but really only if thousands of previous students, at BHSEC or elsewhere, have already responded to that paper’s prompt. They can summarize the Odyssey based on notes taken by millions of previous readers, or condense the math notes you did take so you can finish your Amplify Journals faster. No LLM will ever create something that is 100% its own ‘thinking,’ because true Artificial Intelligence is a fundamentally different concept.
And just think for a moment. Are LLMs really worth it? Are the tens of billions of dollars invested in ChatGPT alone, the hundreds of millions of watt-hours used by this computer program every day, and the hundreds of thousands of gallons of water used to cool the machinery worth it? Is any of it really worth it so that you can cheat on your homework without having to ask your friends for help first? Is it really worth destroying the planet because you mismanaged your time?