Nearly half of surveyed Harvard faculty — 47 percent — believe that artificial intelligence will have a negative impact on higher education, according to The Crimson’s annual survey of the Faculty of Arts and Sciences.
While less than a quarter of respondents — 21 percent — said they believe AI will have a positive impact, levels of optimism about AI varied widely by division, with faculty respondents in STEM fields appearing less pessimistic about AI’s effects than those in the social sciences or humanistic disciplines.
About 37 percent of faculty respondents in the FAS Science division, which includes math, physics, and natural sciences, said they felt AI will have a negative impact on higher education, and only 17 percent of respondents from the School of Engineering and Applied Sciences said the same.
In contrast, nearly 60 percent of faculty respondents from the Arts and Humanities division said they felt AI will have a negative impact, as did approximately 48 percent of respondents from the Social Sciences division.
The Crimson distributed its survey to more than 1,300 members of the FAS and the School of Engineering and Applied Sciences, including tenured and tenure-track professors, non-tenure-track lecturers, and preceptors. The survey collected demographic information and opinions on a range of topics, including Harvard’s academic atmosphere, life as a professor, and political issues.
The anonymous 124-question survey received 386 responses, including 234 fully-completed responses and 152 partially-completed responses. It was open to new responses between March 23 and April 14. Responses were not adjusted for selection bias.
This fifth and final installment in The Crimson’s analysis of the survey focuses on faculty opinions regarding AI, including integrating AI into their curricula and its impact on higher education.
Despite concerns over the inappropriate use of AI in the classroom, more than half of surveyed faculty — nearly 57 percent — said they did not have an explicit or written policy on the use of AI tools like ChatGPT in their courses when the survey was circulated this spring. Approximately 20 percent of faculty respondents said they entirely prohibit AI usage.
Nearly 11 percent of surveyed faculty said they permit the use of AI with some restrictions, and approximately 9 percent said they permit it with strong restrictions. Only 3 percent of faculty respondents said they entirely permit the use of AI in their courses.
In separate interviews, several faculty members expanded on their approach to AI in the classroom.
Stephen Chaudoin, an assistant professor of Government who teaches the introductory course Gov 40: “International Conflict and Cooperation in the Modern World,” said that he has not attempted to ban AI tools in class, calling such efforts “futile.”
“I usually try to be transparent with students and tell them I’ve tried it, and it was about B-plus, B-minus work,” Chaudoin said.
In the introductory Economics course Econ 10: “Principles of Economics,” policies for ChatGPT usage are being developed, according to David C. Martin, an economics lecturer and the course’s head section leader.
David J. Malan ’99, who teaches the wildly popular Computer Science 50: “Introduction to Computer Science,” recently announced that he would be integrating AI tools into course instruction, writing in an emailed statement that the course has long relied on software to support teaching and that AI usage was “an evolution of that tradition.”
But Malan added that current tools like ChatGPT are “too helpful,” writing that CS50’s AI features will lead students toward solving problems themselves rather than writing code for them.
More than half of respondents — approximately 51 percent — said they do not believe they have received AI-generated work. Still, only about 31 percent of faculty surveyed said they felt confident they could distinguish between work produced by a student and work produced by AI, with nearly 46 percent reporting they did not feel confident in their ability to make that distinction.
In an open-response question asking faculty respondents to elaborate on their plans to adapt their teaching, research, and scholarship in the age of AI, many faculty said they did not have a plan yet or did not know how to proceed.
“I don’t have a plan, and would appreciate guidance,” one faculty member wrote.
Some faculty members compared the rise of AI to the impact of other technological developments in academia, such as calculators.
One faculty member wrote that they planned to “adopt the use of AI and treat it as a useful tool.”
“Just like how calculators changed math education (or we hope that it has), AI tools should allow us to focus on other types of work and thinking that is more uniquely human,” they added.
Chaudoin, the Government professor, said that while ChatGPT can “somewhat effectively” categorize data, it is not especially talented at writing academic articles.
“ChatGPT can mix up a really good, big bowl of word salad,” he said. “But it can’t write an especially good political science article yet.”
Several faculty wrote that they would shift to more in-class assignments as a result of AI’s proliferation.
But not all faculty are shifting the nature of their assignments: Michael Bronski, a professor of the practice in Women, Gender, and Sexuality Studies, said in a separate interview that his assignments simply can’t be done effectively with AI.
“The challenge of AI for written work for most everybody is to actually come up with projects that cannot be done very well with AI,” Bronski said.
David C. Bell, a professor of the practice at SEAS, also said in an interview that he would not be changing the nature of his assignments.
“I tried answering my questions with AI, and it failed miserably,” he said.
The Crimson’s annual faculty survey for 2023 was conducted via Qualtrics, an online survey platform. The survey was open from March 23, 2023, to April 14, 2023.
A link to the anonymous survey was sent to 1,310 FAS and SEAS faculty members through emails sourced in February 2021 from Harvard directory information and updated in subsequent years. The pool included individuals on Harvard’s Connections database with FAS affiliations, including tenured, tenure-track, and non-tenure-track faculty.
In total, 386 faculty replied, with 234 filling the survey completely and 152 partially completing the survey.
To check for response bias, The Crimson compared respondents’ self-reported demographic data with publicly available data on FAS faculty demographics for the 2021-22 academic year. Survey respondents’ demographic data generally match these publicly available data.
In The Crimson’s survey, 47 percent of respondents identified themselves as male and 45 percent as female, with 2 percent selecting “genderqueer/non-binary,” 1 percent selecting “other,” and 5 percent selecting “prefer not to say.” According to the Faculty of Arts and Sciences’ 2022 Report, 39 percent of FAS faculty as a whole are female.
53 percent of respondents to The Crimson’s survey were tenured or tenure-track faculty and 47 percent were non-tenure-track faculty. According to the FAS data, 58 percent of faculty are tenure-track and 38 percent are non-tenure-track.
31 percent of survey respondents reported their ethnic or racial background as something other than white or Caucasian, with 9 percent opting not to report their race. According to the FAS data, 27 percent of faculty are non-white.
—Staff writer Rahem D. Hamid can be reached at firstname.lastname@example.org.