Panel Experts Talk Artificial Intelligence Ethics

Sheila Jasanoff, Alex Pentland, Cynthia Dwork, and Iris Bohnet discuss artificial intelligence at the Institute of Politics on Thursday. By Margaret F. Ross
By Harshita Gupta, Crimson Staff Writer

Computer science and public policy experts from Harvard and MIT spoke at a Kennedy School forum Thursday night about the potential for artificial intelligence to eliminate human bias and error in policy-making.

At an event called “Artificial Intelligence & Bias: Past, Present & Future,” moderator and Kennedy School professor Sheila Jasanoff asked panelists about the limitations and intricacies of employing artificial intelligence systems. Artificial intelligence is beginning to rival human decision-making for recruiting employees, determining loan eligibility, and steering automobiles.

Iris Bohnet, a panelist and public policy professor at the Kennedy School, said she is optimistic that machines can help overcome human biases around race and gender. She cited her research on bias in job application processes, in which some employers discriminate against applicants with criminal records.

“I definitely fall into the camp of people who believe that humans are biased,” Bohnet said. “There is quite a bit of evidence suggesting that it is extremely hard to de-bias our minds, and that it might actually be more promising to… redesign how we work and how we learn, to address this.”

Computer Science professor Cynthia Dwork noted that algorithms can exhibit bias as well. Dwork cited the “stable matching algorithm” used to match medical students with residency hospitals, and said it can be run to prioritize either the students’ or the hospitals’ preferences.

“This is an example of an algorithm that has a bias, and the person running the algorithm can choose which direction the bias will go in,” Dwork said. “The fact that something is an algorithm doesn’t make it fair.”
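The algorithm Dwork referenced is the Gale-Shapley “deferred acceptance” procedure underlying the residency match, whose outcome systematically favors whichever side does the proposing. The sketch below, using hypothetical student and hospital names and preference lists, illustrates her point: swapping which side proposes changes who gets their first choice.

```python
def stable_match(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance; optimal for the proposing side."""
    # Rank tables let each receiver compare two proposers in O(1).
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}  # next preference each proposer tries
    held = {}                                  # receiver -> tentatively accepted proposer
    free = list(proposer_prefs)

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]     # p's best not-yet-tried receiver
        next_idx[p] += 1
        if r not in held:
            held[r] = p                        # r tentatively accepts p
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])               # r trades up; old partner is freed
            held[r] = p
        else:
            free.append(p)                     # r rejects p; p will try its next choice
    return {p: r for r, p in held.items()}

# Hypothetical preferences (best first). Note the two sides disagree.
students  = {"ana": ["mgh", "bwh"], "ben": ["bwh", "mgh"]}
hospitals = {"mgh": ["ben", "ana"], "bwh": ["ana", "ben"]}

# Student-proposing run: every student gets their first choice
# (ana gets mgh, ben gets bwh).
print(stable_match(students, hospitals))

# Hospital-proposing run: every hospital gets its first choice instead
# (mgh gets ben, bwh gets ana).
print(stable_match(hospitals, students))
```

Both outcomes are “stable” in the technical sense; choosing between them is a policy decision, which is the directional bias Dwork described.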

During the event, an audience member noted that for many, algorithms are incomprehensible and like a “black box.” The attendee asked how algorithms can be held “accountable” so that members of the public can better understand decision-making.

Panelist and MIT Professor Alex Pentland said he doubts people will ever fully understand how algorithms work.

“I don’t think it is possible for these algorithms to be comprehensible in the way people think about what it means to ‘understand’,” Pentland said. “I do think it’s possible to look at what they do and debate whether that’s what you want to have happen, using the performance along different dimensions, and using statistical analysis.”

The panelists agreed that experts across disciplines should settle on shared goals for algorithms to achieve, such as eliminating bias.

Still, Pentland said algorithms and artificial intelligence should be continually improved as society changes.

“But I actually think the important element is that any of these algorithms, like any law, should be continually evaluated, because the world changes,” Pentland said. “One wants to have continual oversight, to make sure that it’s doing what you intend for it to do.”


Tags
IOP, Harvard Kennedy School, Science, Computer Science