Technology never had a bad rap. Technology was Marx’s revolutionary weapon of choice, Keynes’s liberation from labor, and Solow’s panacea for national growth. As a Generation Z-er, I understood technology as dogma—to be against technology was to be against the world as we left it, knew it, and foresaw it. Technology equated itself with innovation, which in a simple narrative arc implied progress. Progress meant longer lives, more access to information, and greater freedoms of association. We watched documentaries on the Human Genome Project while scoffing at apocalyptic representations of its Gattaca-style implications; we embraced big data as new armor to confront the black box of our world. We fake-read terms of agreement, privacy settings, and delineations of risk, because progress also meant rapidity. Social media sites flew in and out of style. Phones became large, then small, then large again. Scientists even claimed that technology reshaped our brains, which reshaped technology in a continuous feedback loop. As students, we bought into this vision of technological fortitude wholesale—counselors, parents, and peers told us to major in technology-related fields precisely because we had no clue where technology would be by the time we graduated.
The previous statements seem outdated, and indeed, 2018 is a decidedly morally ambiguous year for technology. I would posit that before Cambridge Analytica’s perpetual newsfeed coverage, before Mark Zuckerberg stated that Facebook committed a “major breach of trust,” and before Steven K. Bannon attempted to make Crocs cool (metaphorically), the stage for such fear had already been set. Years ago, Ellen Pao destroyed the perception of Silicon Valley as an innocuous tech geek haven in her widely publicized workplace sexual harassment case. Last year, credit reporting firm Equifax reported a data leak involving sensitive financial information of 145.5 million Americans; two weeks ago, Saks and Lord & Taylor were hacked, leaking customers’ card information.
Here, magnitude is the distinguishing factor. 8.2 million people watched the first season of the Netflix series “Stranger Things,” about a laboratory that kidnapped, trafficked, and abused children with psychokinetic powers; “Black Mirror,” an Emmy-winning show examining the unintended consequences of new technologies such as rating systems and streaming sites, premiered its fourth season during Christmas break. In January, the possibility of a cryptocurrency boom put a decentralized and unregulated monetary system, and thus a destabilized social response, on the table. Though many did not discuss cryptocurrency in everyday conversation outside of its get-rich-quick potential, advocates gushed about a currency circulating outside the reach of legislation.
Cryptocurrency’s philosophical appeal was anti-establishment. But perhaps the unapologetic rush of businesses, investors, and entrepreneurs to become the new establishment created its lukewarm ideological perception. In these examples, technology asserts itself as the connecting factor between private objectives and public outcomes, establishment problems and anti-establishment solutions, deregulated presents and new regime futures. By playing multiple foundational institutions against each other—most notably, business and government—technology is regarded as an institution itself, capable without intention of obstructing human life as it should have ideally proceeded.
Any conversation involving an institutional thematic—such as government, business, or religion—can be expected to include high doses of skepticism in a post-Obama and post-recession political setting. The conversation on technology is no exception. The rise of Jordan B. Peterson as a figure of the Canadian right on the basis of ideologically underwhelming principles like “Be precise in your speech” may be symptomatic of a public yearning for life-sized doctrines to abide by. A technological equivalent would be an individual call to retreat from social media platforms and to enforce personal privacy settings. We hope that such common-sense retorts will shape a functional response to fear—but we also know this response is too defeatist and too regressive for a world that develops regardless of our comfort. Technology is a tool, not a doctrine. Thus the conversation should surround the tool’s use, not our response to its shadow.
The outrage against Cambridge Analytica—insistence on manipulation, lack of privacy, and disregard of consent—is catalyzed and reinforced by residual feelings of liberal guilt, anger, and disbelief regarding the 2016 election results. These feelings are reflected in language choices, including terms such as “psychological warfare,” “hack[ing]” Facebook, and “cyberwarfare for elections.” But as with the 2016 election results, the outcome was unfavorable yet fairly obtained. Cambridge Analytica worked, as any other firm could have worked, within the bounds of Facebook itself. Facebook’s impartiality, a fine virtue for a business intent on information distribution and network formation, was its crime.
Perhaps the outrage, the insistence on a regulatory breach, is a form of liberal self-absolution. By claiming the battleground was rigged, we allow ourselves to place blame for political outcomes externally. But the Cambridge Analytica model targeted already vulnerable people. Regardless of political tactic, we liberals must contend with the ways in which our political interaction with broader society alienated the electorate whose disaffection sourced Cambridge Analytica’s success, leading some to vote even against their own rational interests. The existence of conservative technological tactics does not erase the root cause of political transformation. The questions we should take from this case are less partisan. What price do consumers pay for free association? What is the process of creating ethics around technology? In what ways do we use tools, and in what ways do tools use us? These concerns address the nature of consent, public use, and information in a rapidly changing world, which affects all involved, not just those who are politically unhappy.
Christina M. Qiu ’19 is an Applied Mathematics concentrator in Mather House. Her column appears on alternate Mondays.