
Biden National Security Official Discusses AI and Cybersecurity at Harvard Law School Talk

Anne Neuberger speaks at a Harvard Law School event. Neuberger discussed the U.S. government's efforts to combat cybersecurity challenges in an era of artificial intelligence. By Erick Contreras-Rodriguez
By Aisatu J. Nakoulima, Kelly A. Olmos, and Grace E. Yoon, Contributing Writers

Deputy National Security Adviser Anne Neuberger discussed how the U.S. government is combating cybersecurity challenges amid the emergence of artificial intelligence technology at a Harvard Law School talk on Tuesday.

During the event, which was moderated by HLS professor Jonathan Zittrain, Neuberger identified what she believed to be the nation’s greatest digital security issues and discussed how the government, companies, and consumers can work to increase the safety of the digital world.

In particular, Neuberger addressed the implications of an October 2023 executive order from U.S. President Joe Biden, which established standards for the responsible and safe use of AI.

“The key thing that President Biden really wanted us to grapple with was the promise and the peril,” Neuberger said.

Neuberger said that while countries in the European Union are “very focused on the risks, less on the promotion,” the United States’ approach to AI under the Biden administration has taken the philosophy of “let’s use the tech, let’s be leaders in global innovation.”

As an example, Neuberger pointed to AI’s advancements in clinical drug discovery trials and classroom education, saying that these developments show “clear promise” of the potential for AI to have a positive impact on society.

While touting the potential benefits of AI, Neuberger also highlighted the importance of tackling the “peril” of the technology early in its development.

She said that Biden’s October 2023 executive order sought to address potential risks of AI “instead of trying to layer on the security and safety and trust components afterwards, when it’s far harder.”

Zittrain asked about the “perils” of AI that have yet to be resolved, such as the security implications of deepfakes: digitally manufactured images, videos, or audio recordings that depict someone else’s likeness. In particular, he pressed Neuberger about the possibility of deepfakes disrupting the 2024 U.S. presidential election.

Neuberger conceded that the possibility of deepfakes undermining an electoral process is “a key question that governments around the world are grappling with.”

“It is a hard problem,” she added. “And it is one that from a government perspective, we focus on ensuring that any intelligence regarding foreign disinformation is rapidly passed via law enforcement to social media companies.”

Still, Neuberger said that the challenges of deepfakes also present an opportunity “for the private sector to step up, because it’s a particularly fraught area for governments.”

Before the end of her discussion, Neuberger highlighted another opportunity for improvement in the field of cybersecurity: trusted digital identities.

“The U.S. is far behind in terms of having trusted digital identities that people can use to access their medical records or veterans can use to access their government records,” Neuberger said, adding that the development of digital identities is currently in progress.

Neuberger said that she is hopeful for additional executive action “to lay out the standard for how cryptographic IDs, like a digital license, could be used to authenticate online.”


Tags: Politics, Harvard Law School, Biden