Panel Experts Talk Artificial Intelligence Ethics

IOP Artificial Intelligence Panel
Sheila Jasanoff, Alex Pentland, Cynthia Dwork, and Iris Bohnet discuss artificial intelligence at the Institute of Politics on Thursday.
Computer science and public policy experts from Harvard and MIT spoke at a Kennedy School forum Thursday night about the potential for artificial intelligence to eliminate human bias and error in policy-making.

At an event called “Artificial Intelligence & Bias: Past, Present & Future,” moderator and Kennedy School professor Sheila Jasanoff asked panelists about the limitations and intricacies of employing artificial intelligence systems. Artificial intelligence is beginning to rival human decision-making for recruiting employees, determining loan eligibility, and steering automobiles.

Iris Bohnet, a panelist and public policy professor at the Kennedy School, said she feels optimistic about the potential of machines to help overcome human biases about race and gender. She shared her research on human bias in job application processes, in which some employers discriminate against applicants with criminal records.

“I definitely fall into the camp of people who believe that humans are biased,” Bohnet said. “There is quite a bit of evidence suggesting that it is extremely hard to de-bias our minds, and that it might actually be more promising to… redesign how we work and how we learn, to address this.”

Computer Science professor Cynthia Dwork noted that algorithms can exhibit bias as well. Dwork cited the “stable matching algorithm,” which is used to match medical students with residency hospitals, and said the algorithm could be run either to prioritize students’ or hospitals’ preferences.

“This is an example of an algorithm that has a bias, and the person running the algorithm can choose which direction the bias will go in,” Dwork said. “The fact that something is an algorithm doesn’t make it fair.”
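Dwork's point can be illustrated with the Gale-Shapley stable matching algorithm, the method underlying the residency match: whichever side proposes receives its best possible stable outcome. The sketch below uses tiny hypothetical preference lists (the names `s1`, `h1`, etc. are illustrative, not real match data) to show that running the same algorithm with students proposing versus hospitals proposing can produce different matchings.

```python
# Gale-Shapley stable matching, proposer-optimal by construction.
# Swapping which side proposes flips the bias Dwork describes.

def stable_match(proposer_prefs, acceptor_prefs):
    """Return a proposer-optimal stable matching as {proposer: acceptor}."""
    # rank[a][p]: position of proposer p in acceptor a's list (lower = preferred)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next index each p proposes to
    engaged = {}                                  # acceptor -> current proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p                        # a was unmatched; accept p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])               # a prefers p; drop old partner
            engaged[a] = p
        else:
            free.append(p)                        # a rejects p; p tries again
    return {p: a for a, p in engaged.items()}

# Hypothetical preferences chosen so the two runs disagree.
students  = {"s1": ["h1", "h2"], "s2": ["h2", "h1"]}
hospitals = {"h1": ["s2", "s1"], "h2": ["s1", "s2"]}

student_optimal = stable_match(students, hospitals)   # students propose
flipped = stable_match(hospitals, students)           # hospitals propose
hospital_optimal = {s: h for h, s in flipped.items()} # express as student->hospital
```

Here each student gets their first choice when students propose, and each hospital gets its first choice when hospitals propose; both matchings are stable, and the operator's choice of proposing side selects between them.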

During the event, an audience member noted that for many, algorithms are incomprehensible and like a “black box.” The attendee asked how algorithms can be held “accountable” so that members of the public can better understand decision-making.

Panelist and MIT Professor Alex Pentland said he doubts people will ever fully understand how algorithms work.

“I don’t think it is possible for these algorithms to be comprehensible in the way people think about what it means to ‘understand’,” Pentland said. “I do think it’s possible to look at what they do and debate whether that’s what you want to have happen, using the performance along different dimensions, and using statistical analysis.”

The panelists agreed that experts across disciplines should set shared goals for algorithms to achieve, such as eliminating bias.

Pentland added that algorithms and artificial intelligence should be continually improved as society changes.

“But I actually think the important element is that any of these algorithms, like any law, should be continually evaluated, because the world changes,” Pentland said. “One wants to have continual oversight, to make sure that it’s doing what you intend for it to do.”


—Crimson Staff Writer Harshita Gupta can be reached at harshita.gupta@thecrimson.com. Follow her on Twitter @harshitagupta_.
