
Get Smart: The Berkman Klein Center Takes On Artificial Intelligence

The Berkman Klein Center for Internet & Society at Harvard Law School is joining the MIT Media Lab, the Knight Foundation, and others to launch the Ethics and Governance of Artificial Intelligence Fund, which will support interdisciplinary research on bias and ethics in AI.
By Drew C. Pendergrass

Urs Gasser, director of the Berkman Klein Center for Internet & Society at Harvard Law School, is not worried about artificially intelligent deathbots.

“We at Berkman Klein are less focused on the longer-term risks of ‘Big AI,’” he says, referring to the human-like intelligent systems seen in sci-fi movies. “[We are] more concerned about autonomous systems, algorithms, and other technologies that have an important effect on people’s lives right now.”

Complex, data-driven programs are already shaping consequential decisions: Algorithms help courts set bail, determine which news stories appear on Facebook users’ feeds, and sometimes decide who will be offered a line of credit by a bank.

Gasser says he is concerned about the ethics of these practices. “Are judges and banks using algorithms that treat minorities unfairly?” he asks. “Is AI on social media elevating fake news in ways that are reshaping our democracy and civic dialogue?”

Yunhan Xu ’17, a research assistant at the Berkman Klein Center, also studies these questions. “One popular misconception is that if it’s an algorithm, then it’s unbiased—it has some kind of inherent objectivity,” she says. “Algorithms are not neutral. They are maximizing parameters that were chosen by the people that designed [them].”

Gasser announced last month that the Center will be working with the MIT Media Lab, the Knight Foundation, and others to start the Ethics and Governance of Artificial Intelligence Fund, which will support interdisciplinary research on bias and ethics in AI.

Many in the industry say this work has never been more necessary. “Often [AI] developments are occurring in the absence of broader conversations about societal impact, discrimination, fairness, and inclusion,” Gasser says. “Expertise in connecting diverse perspectives, disciplines, and sectors from around the world is something that is very much needed in the AI space right now.”

Xu, who will work for Google after graduation, says she is excited about supporting this research. “The discussion about the ethics of AI has been outpaced by the actual development of the technology,” she says. “A sustainable forum for these discussions is key to assuring the public and the users of this technology that it will be done in a responsible way that benefits everyone.”

Increasingly, conversations about AI are happening outside the Computer Science Department, in spaces like the Institute of Politics, which hosted a talk last week titled “Artificial Intelligence and Bias.” At the event, three professors from a range of disciplines spoke about AI, its potential, and its risks.

“I fall into the category of people who are actually quite optimistic about the potential of the machine to help us overcome some of these human biases,” says Iris Bohnet, a speaker at the IOP event and director of the Women and Public Policy Program at the Harvard Kennedy School.

Bohnet, like Gasser, is one of an increasing number of social scientists who research the impact of technology on society. She has spent much of her career researching human bias in the workplace, and says she thinks sophisticated technology can do even more to promote equality.

“There is a real and urgent need to create interfaces where dialogues can occur between engineers, computer scientists, and developers on one side,” says Gasser, “and social scientists, lawyers, policy experts, and the humanities on the other side.”
