Opinion

Generative AI is nothing but another innovation

May 2, 2025
Photo by Gabby Rodriguez | The Triangle

I was recently perusing the Triangle website when I stumbled upon an interesting little article by Sam Gregg about the dangers of ChatGPT. It is not the first piece to pop up in the last couple of years as tech panic surrounding generative AI has been on the rise, but it does make a notable case that these tools diminish the human drive to learn and search. I believe this is an overly pessimistic take on a minor danger of generative AI.

Do not mistake me for a zealot of the temple of OpenAI, but I do not believe the prime danger of platforms like ChatGPT is the loss of our humanity. Generative AI models are used to deliver information quickly and easily, and the argument could be made that this convenience kills the human desire to search deeper on any given subject. The truth is, this argument neglects the fact that Google made finding information far easier than ever before, yet it did not diminish the drive to learn. We may not be digging through library stacks anymore, but that does not mean we have lost anything more than inconvenience. It is not uncommon to assume that convenient technology begets laziness; even Socrates hated the written word, fearing that human memory would begin to atrophy. That being said, convenience does not mean that the ability to search and learn is lost with each new innovation.

While generative AI might provide simple and easy-to-understand explanations, it cannot provide the deeper understanding that a person would need for meaningful applications. For example, the AI-generated blurb at the top of a Google search gives a simple explanation of how to drive a car, but it will not get me anywhere close to my license. That would require much more research and a personal desire to learn. Learning will still occur for those who desire it. The only people who may lose their desire to learn are the ones who barely had it to begin with.

Academia has felt threatened since the very birth of ChatGPT, with the fear that students who use generative AI for their assignments are not properly learning the material. The rise of AI checkers mitigates a portion of this danger, even if they are not perfectly accurate. It is also a self-correcting problem: ChatGPT cannot sit in the lecture hall for a midterm. Furthermore, those who would use generative AI for unethical purposes would have otherwise turned to the old-fashioned methods of looking up Quizlets, going on Chegg, finding the answer sheet, etc. ChatGPT is only a marginally better cheating assistant for an exam than Google. The cases where AI and only AI can be used to cheat on an assignment are so few and far between that the "dark age for the internet and humanity," as Gregg claims, is far from the future we will live in.

Therein lies the rub with worrying that generative AI will take over every field and diminish the learning and innovation of humanity: AI is only ever going to be perfectly average. That is the goal of any model driven by user data, and the very reason there is fear of generative AI "taking over." The argument is that AI is less capable than human effort because it lacks the creativity, expertise and "humanity" of a person. That means it would also be less effective at any given task than a human who is driven, or even mildly educated or experienced. The only situations where generative AI seems like a viable option are when the task is mundane or otherwise simplistic, where average work for cheap is the answer. If I wanted a quick and easy set of marketing emails, ChatGPT would be a great option, but if I wanted advanced marketing approaches, I would look to a well-educated human.

On the academic front, there is an arguable use for AI. If a professor teaches a subject poorly, ChatGPT could make a half-decent stand-in. The textbook and class notes will still be used, but the simple overview provided by generative AI can make or break a student's understanding when a teacher fails to cover the content. There is also minimal danger in some AI-generated assignments. (This point is entirely hypothetical. I am by no means advising the use of AI in unethical ways for assignments.) If the meager performance of AI is good enough to get me a good grade, and I would have learned little from completing the assignment myself, then there is no danger in letting ChatGPT handle it. I would like to say that busy work that teaches us little is uncommon in the academic world, but I think we all know that is not true. A research paper that requires eight sources and an APA bibliography should be done with a human brain, but a flawed discussion board that gives full credit for any understandable text and inherently limits real discussion is an entirely different story.

I think a more accurate perspective is that generative AI has opened up a new field of what is possible, and it will continue to do so, but this is not the end of humanity. It is not the beginning of some dark age where humans are incapable of learning and AI has become the go-to for written communication. It is simply another step in technological advancement. Some jobs may disappear, and academia will have to adapt much as it already is, but this is not the first time. The word "computer" used to refer to a profession, not a device. Much like how those human computers took a backseat to the machine, some professions may fade away, but humanity will adapt and correct. Technology is constantly evolving, and society changes wildly with it, but the end of the world we know is not the end of the world.