
Op Eds

The Problem With Psets

By Andrew W. Shlomchik, Contributing Opinion Writer
Andrew W. Shlomchik ’29, a Crimson Editorial comper, lives in Greenough Hall.

Never before has it been so easy to cheat on problem sets — so why are we still grading them?

While cheating on problem sets is not new at Harvard, modern artificial intelligence tools have become particularly good at solving the kinds of questions Harvard students encounter on the take-home psets assigned by many STEM and economics classes.

Even the least savvy AI users among us can now get away with using AI on their psets, and many students likely already do. According to a 2024 report commissioned by the Harvard Undergraduate Association, nearly 90 percent of students use generative AI. The notion that all of these students partake only in kosher uses of AI is rather implausible.

For the sake of fairness, then, psets should be graded on completion alone, or not graded at all.

The ostensible purpose of psets is to help students practice and master course material. Making errors is a natural and important part of that process. But Harvard students — however festooned with Ivy — are just people. People respond to incentives and as long as psets are graded on accuracy, there will remain a strong incentive to cheat. After all, why spend hours toiling in Lamont Library, bearing the risk of losing points, while your AI-enabled classmates are off fashioning slides for their consulting club?

A better approach to psets might be to post both problems and solutions from the get-go, allowing students to receive immediate feedback for wrong answers instead of waiting until solutions are released. It is then the prerogative of the student to complete the problem set — if they feel they need it — and to check their own answers. Why cheat on the pset if it isn’t being graded? You’ll only hurt your chances on the next exam.

To anyone who would decry such a change as a blow to academic rigor — something that Harvard is particularly concerned about at the moment — I’d like to suggest that grading psets for completion would, if anything, be more academically rigorous. The student accepts responsibility for their own education. Why should Harvard care how many practice problems they attempt when everything will come out in the wash of exams?

We ought to reserve grading on accuracy for in-person examinations where cheating can be prevented. Exams aren’t perfect, but they’re a fairer way to assess abilities than take-home assessments.

Some may hold that weighting exams so heavily is unfair, arguing that students who are not natural test-takers would be at a disadvantage. That might be true. But is it fair that an honest student loses points for wrong answers on a pset, while their dishonest peers get full credit for ChatGPT-ed answers?

But I agree that some sort of effort-based grade buffer is desirable to mitigate the impact of a bad exam. One approach would be to grade on completion. Professors could give credit for work that shows evidence that the student earnestly attempted the problem set. Completion credit, perhaps counting for 10 percent of a student’s grade, would reduce the weight of exams and provide a fairer incentive to do the pset.

Still, for some classes, like ESE 6: Introduction to Environmental Science and Engineering, problem sets are an end in themselves, assessing skill sets that aren’t on exams, such as downloading and manipulating air pollution datasets from the EPA.

We shouldn’t stop grading these kinds of psets on accuracy because their purpose extends beyond simply practicing in advance of a test.

Others might suggest that students need psets to keep them on track during a course. Even if it were true that grading psets on accuracy and collecting them throughout the semester led students to adopt better study habits and score better on exams, that doesn’t get rid of the effective penalty for those who choose not to cheat. Fairness, not pedagogical efficacy, is the primary issue at hand.

And anyway, in a world devoid of trust, I have faith alone in the will of Harvard students to get an A.

