Health and technology experts discussed the advantages and pitfalls of deploying artificial intelligence to improve health care equity at a School of Engineering and Applied Sciences panel Wednesday.
The event, hosted by the Center for Research on Computation and Society, featured talks by Heather Mattie, lecturer on biostatistics at the School of Public Health; David S. Jones ’92, a professor of the culture of medicine at the Medical School; and Nathaniel Hendrix, a postdoctoral research fellow at HSPH. Shalmali Joshi, a postdoctoral fellow at the Center for Research on Computation and Society, moderated the discussion.
Mattie opened the event with a discussion on algorithmic bias, which she defined as an algorithm “biased against some group over another.”
She stressed that almost all steps in the “pipeline” of “creating and implementing” an algorithm are subject to bias.
The first area of improvement Mattie addressed was the issue of biased datasets.
“A lot of the genome mapping studies, most of the individuals in those studies were of European descent. They’ve started to try to balance out representation which is great — and so have clinical trials,” she said. “But those are two very big examples of an imbalance in who’s represented in the data and therefore, who we have more information about and can make better predictions for with our algorithms.”
Mattie also emphasized the importance of transparency in choosing an algorithm, and pointed out that many papers “fail or neglect to mention where their algorithm goes wrong.” In the context of health care, Mattie said even small errors can affect “millions of people.”
Also touching on the issue of representative datasets, Jones argued that diversity in datasets involves factors such as employment, housing, income, and wealth, rather than simply race and ethnicity.
“If you don’t have all of these measures in your data set, there’s no way you’re going to get an optimal outcome because you’re analyzing an incomplete picture,” he said.
Jones also said he is skeptical of a practice known as race-norming, in which “you conduct a diagnostic test on people, but then use different norms to evaluate deviation from normal for people of different races.” He said this practice has been applied to pulmonary function tests for decades, among others.
“Now, with each of these [tests] there is an empirical basis — you can point to studies showing that the norms are different,” he said. “The question is whether those data are robust, and if the race-based norms reflect some kind of racist structuring of the society.”
For example, physicians have criticized pulmonary function algorithms for assuming lower lung volumes in Black people and normalizing lung damage caused by poor environmental conditions and medical care.
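Race-norming can be made concrete with a small sketch. The numbers below are hypothetical (real reference equations for pulmonary function are far more detailed), but they show how applying a lower race-specific “predicted normal” value can make the same measured lung volume look acceptable under one norm and abnormal under another:

```python
# Hypothetical illustration of race-norming in a pulmonary function test.
# The predicted "normal" FEV1 values below are invented for demonstration.

def percent_predicted(measured_fev1: float, predicted_fev1: float) -> float:
    """Express a measured FEV1 as a percentage of the predicted 'normal'."""
    return 100 * measured_fev1 / predicted_fev1

measured = 2.7  # liters; the same measurement for both evaluations

# Two race-specific predicted values (hypothetical): the lower predicted
# value makes the identical measurement look closer to "normal".
predicted_by_norm = {"norm_a": 3.5, "norm_b": 3.1}

for norm, predicted in predicted_by_norm.items():
    pct = percent_predicted(measured, predicted)
    flagged = pct < 80  # a common rule of thumb for flagging an abnormal result
    print(f"{norm}: {pct:.0f}% of predicted, flagged={flagged}")
```

Under these invented norms, the identical measurement is flagged as abnormal under `norm_a` but passes under `norm_b` — which is exactly the pattern critics say can normalize environmentally caused lung damage.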
Hendrix followed the other speakers with an analysis of how clinical AI could be useful in determining the impact of certain health treatments.
“It allows us to estimate the clinical impacts of interventions that really have only told us how their performance is,” he said. “We can take, for example, a diagnostic AI, take its sensitivity, include it in a model like this, and estimate its clinical impact.”
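The modeling step Hendrix describes can be sketched very simply: take a tool’s reported sensitivity and plug it into a cohort model to translate test performance into an expected number of detected cases. All figures here (cohort size, prevalence, sensitivities) are hypothetical:

```python
# Minimal sketch: converting a diagnostic tool's sensitivity into an
# estimated clinical impact on a screening cohort. Numbers are invented.

def estimate_detections(cohort_size: int, prevalence: float,
                        sensitivity: float) -> float:
    """Expected true positives when screening a cohort with the tool."""
    true_cases = cohort_size * prevalence
    return true_cases * sensitivity

cohort, prevalence = 10_000, 0.02  # 200 true cases in the cohort

for name, sens in {"clinician alone": 0.85, "diagnostic AI": 0.92}.items():
    found = estimate_detections(cohort, prevalence, sens)
    print(f"{name}: ~{found:.0f} of {cohort * prevalence:.0f} cases detected")
```

A real health-economic model would add specificity, costs, and downstream outcomes, but the core move — sensitivity in, estimated cases detected out — is the same.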
While Hendrix presented other benefits of using clinical AI, such as its potential to bring down health care costs by “automating our disease monitoring,” he also cited problematic cases in which researchers do not have enough information on how well AI “agrees with human clinicians.”
Hendrix presented a contrast between two AI tools: one caught more cases overall, while the other caught more cases that physicians would miss.
“We have an AI that catches fewer cases overall, but it catches more of those cases that the clinician does miss,” Hendrix said of the second tool. “Even though it’s a lower performing AI, we might say, it might provide more value.”
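Hendrix’s point is that, when an AI assists a clinician rather than replacing one, the added value comes from the cases it catches that the clinician misses. A sketch with invented rates makes the arithmetic explicit:

```python
# Hypothetical sketch of complementary value: the AI with lower overall
# sensitivity yields a higher combined (clinician + AI) detection rate
# because it catches more of the cases the clinician misses.
# All rates below are invented for illustration.

CLINICIAN_SENSITIVITY = 0.85  # clinician alone catches 85% of true cases

tools = {
    # name: (overall sensitivity, share of clinician-missed cases it catches)
    "AI high-overall": (0.90, 0.20),
    "AI complementary": (0.80, 0.60),
}

for name, (overall, catch_on_missed) in tools.items():
    # combined sensitivity = clinician's catches + AI's catches among misses
    combined = CLINICIAN_SENSITIVITY + (1 - CLINICIAN_SENSITIVITY) * catch_on_missed
    print(f"{name}: overall {overall:.0%}, clinician + AI {combined:.0%}")
```

With these numbers, the “lower performing” AI lifts the combined detection rate to 94 percent versus 88 percent for the higher-sensitivity tool — the situation Hendrix describes.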
Though panelists agreed AI in health care has fallen short in many regards, all said they are hopeful that, if implemented correctly, AI can have a lasting impact on health inequities and the effectiveness of health care.
“The question is: will we invest the resources and do these things well?” Jones said in a post-panel interview.