Editorials

To ChatGPT or Not to ChatGPT?

University Hall, located in Harvard Yard, houses the offices of top Faculty of Arts and Sciences administrators. By Frank S. Zhou
By The Crimson Editorial Board
This staff editorial solely represents the majority view of The Crimson Editorial Board. It is the product of discussions at regular Editorial Board meetings. In order to ensure the impartiality of our journalism, Crimson editors who choose to opine and vote at these meetings are not involved in the reporting of articles on similar topics.

In response to the meteoric rise of ChatGPT, Harvard’s Faculty of Arts and Sciences released its first guidance on the use of generative AI in courses over the summer. In these guidelines, FAS advises instructors to explicitly communicate AI policies to students, offering three potential approaches ranging from total restriction to total allowance.

Considering that only a year ago most of us hadn’t even heard of ChatGPT, these new guidelines are sorely needed and, in the view of our Editorial Board, well designed to meet the moment. In particular, we commend the laissez-faire approach the University has taken with regard to generative AI regulation, leaving most decisions about its use to the discretion of individual instructors.

As this Board has recognized, AI isn’t going anywhere; Harvard’s public guidance on the use of generative AI recognizes this fact while allowing professors the autonomy to decide whether and how they will adopt the technology in their courses. In turn, this latitude allows Harvard’s diverse set of departments and courses to tailor generative AI policies to their particular learning environments. (We’d wager the best AI pedagogy for Chem 245: Quantum Chemistry would not serve Humanities 10: A Humanities Colloquium well.)

In general, we support leaving difficult ethical questions, especially about pedagogy, to the smart, qualified students and teachers on this campus. Leaving these decisions to the people they most directly affect — rather than distant Smith Center bureaucrats — builds trust, creates buy-in, and enables feedback-oriented iteration.

Harvard has given instructors a choice; now, it is incumbent on them to choose well. We hope faculty avail themselves of Harvard’s pedagogical resources regarding generative AI, such as those from the Derek Bok Center for Teaching and Learning, and engage critically with news and research about this emerging technology.

Discussing pedagogical choices, however, should not detract attention from the most important choices: those that students will make, day in and day out, about whether and how to follow course policies.

In truth, clever students can use generative AI without being caught. AI detection software is not presently reliable, and the FAS discourages professors from using it. Generative AI offers students the chance to increase their productivity and creativity, but, used less conscientiously, it also gives them an avenue to circumvent the skill-building purpose of their assignments.

As a board premised on the importance of writing to critical thinking, we believe in limiting the role of AI in the writing process. Additionally, students should remember that AI is not a failsafe: It can hallucinate events that did not happen, and it carries biases absorbed from the human data on which it was trained.

Nevertheless, there are a number of genuinely skill-augmenting uses for generative AI, such as distinguishing between apparent synonyms in a foreign language, understanding software code, and summarizing difficult, unintuitive concepts in proof-based math.

Ultimately, generative AI is here to stay, and we commend Harvard’s approach to managing this reality. From flexible guidelines on its use in class, to an “AI Sandbox” tool that will allow affiliates to experiment with generative AI without worrying about the confidentiality of the data they input, to GENED offerings that allow students to expand their knowledge of this dynamic field, Harvard’s current efforts maximize the power and potential of the one thing generative AI cannot impinge upon: user choice.

But just as Harvard has given instructors the ability to choose the degree of AI integration into their courses, we must also critically reflect on that choice in our own lives. As generative AI becomes ever more present at Harvard, we should, as students, remember: Pedagogy matters, but ultimately what we learn — and how — falls to us.


