As school districts across the country rush to ban ChatGPT, a free-to-use AI chatbot developed by OpenAI, other education stakeholders — including Salman Khan, founder of the well-known free online learning platform Khan Academy — have expressed concern about such blanket bans.
Speaking at a webinar hosted by the Graduate School of Education, Khan said that banning ChatGPT entirely is the “wrong approach,” calling the technology “transformative” for the future of education. We agree with Khan insofar as a blanket ban on ChatGPT is not the answer.
As a board, we are organized around and greatly cherish the principles of deep engagement, critical thinking, discourse, and writing — principles whose importance is seriously threatened by the development of AI. At first glance, ChatGPT mimics these virtues. It often writes well, can generate a logical analytical essay faster than a human can title their Word document, and can even crack jokes. However, the writing and pseudo-thinking that ChatGPT performs, even when the product is nearly identical to human handiwork, lose something essential.
Writing is more than arranging words on a page to convey information. Writing is the physical manifestation of human thought — ever so essential to critical thinking and intellectual discovery. It is an arena in which vague, embryonic notions transform into precise, developed ideas through bursts of prosaic creativity. The process of writing is just as important as the end result, as many of the ideas that enrich our work are developed only through the mundane endeavor of articulation.
If we outsource our writing to ChatGPT, we won’t simply be shipping off busy work like pulling quotes and smoothing out paragraph transitions — we will also be mutilating our own generative abilities and letting our creative muscles atrophy, contrary to the skill-augmentation case often made for ChatGPT.
Our second area of concern, although not unrelated to the first, has to do with the current lack of response from the Harvard administration regarding this technological development. Right now, ChatGPT appears relatively innocuous — a source of novel meme content or a headache for instructors, at most. But given Moore’s law, the observation that the number of transistors on a chip doubles roughly every two years, we can only assume that the speed and capability of AI applications will increase in the coming years — and one thing we can be sure of right now is that Harvard, having released no College-wide policy on AI in the classroom, appears woefully unprepared.
The age of AI is upon us; we have no time to lose. Harvard should begin an intense evaluation of the manifold ways in which the development of AI can be channeled toward greater human flourishing without compromising the holistic cultivation of students as thinkers. On a pedagogical level, Harvard must immediately establish a working group on the role of AI tools in teaching, with an eye toward methods like long-term projects, in-class work, and discussions that are relatively insulated from AI tools and can be applied at every educational level, from elementary school to graduate school. Moreover, identifying domains in which AI can offer genuine skill augmentation without shortchanging users of learning will be important as further applications are developed.
Harvard should also fund the development of more robust tools to detect the residual “fingerprints” left by the use of AI. With such tools, instructors could implement individual classroom policies that, while respecting a University-wide policy against outright bans on AI tools, are ultimately commensurate with the nature and goals of their respective courses.
As we have previously argued, the prospect of Artificial General Intelligence whose values do not align with those of humans may represent an existential risk to humanity. AI development as a field is not immune to the profit-driven incentives of corporate America, which may compel companies to roll out AI products as fast as they can without thoroughly considering the potential consequences. There is still limited understanding of the intricacies of how AI models make decisions; as such, AI research should proceed with caution and the highest ethical standards. Funding for such research should be independent and should not stem exclusively from donors with pecuniary stakes in the success of AI products.
To many of us, AI still remains a nebulous concept, and misconceptions about its future abound. Nonetheless, we are confident that a blanket ban on ChatGPT would not advance education at Harvard. Schools that are too trigger-happy with bans on AI risk placing their students at a serious educational and competitive disadvantage, particularly relative to other countries that may be more receptive to AI than our own. Education must prepare young people for the future — and the future seems to involve a great deal of ChatGPT.
This staff editorial solely represents the majority view of The Crimson Editorial Board. It is the product of discussions at regular Editorial Board meetings. In order to ensure the impartiality of our journalism, Crimson editors who choose to opine and vote at these meetings are not involved in the reporting of articles on similar topics.