
More cybersecurity won’t secure our elections, but privacy protections might

Some have suggested that the best way to defend our elections is to strengthen cybersecurity. Doing so is important, but the recent presidential election was already the most secure in U.S. history. In practice, many of the most serious dangers to democracy stem from, or are worsened by, social media and online political advertising — no breaches or hacks necessary.

Vast collections of personal data, coupled with precise algorithmic targeting, can easily enable directed misinformation and disinformation campaigns that may dissuade people from voting or even suppress the vote. Federal privacy legislation and greater transparency surrounding tech platforms’ data collection and ad targeting practices could help address these harms and ensure greater integrity and confidence in the electoral system.

In the run-up to the 2020 elections, Internet platforms including Facebook, Google, Reddit and Twitter introduced or expanded content and advertising policies in an attempt to curb the spread and amplification of misleading and harmful information. Some platforms also stopped accepting new political ads in the final stretch before the election or barred political advertising altogether. Nevertheless, before the election, civil rights groups, civil society organizations and policymakers warned that these changes were inadequate and argued that platforms were struggling to consistently and transparently enforce their rules, old and new.

If these companies’ responses have been lackluster, it’s probably because more robust reactions would undermine their profits. The business model of most Internet platforms relies on targeted advertising, which is fueled by private data. This creates an incentive for companies to amplify highly engaging content to generate advertising revenue — and disinformation perfectly fits the bill.

After 2016, many platforms claimed they were well prepared to tackle future elections, implying that their policies and practices were as strong as possible. However, when misinformation about the coronavirus spread across the internet in early 2020, the platforms responded more robustly than they ever had to election misinformation, suggesting that they could have been doing much more all along. For example, for the first time, platforms such as Facebook began using a broad set of labels and notifications to warn users that content they were viewing was misleading, to surface additional information, and even to retroactively notify users who had engaged with misleading posts. The contrast suggests that platforms were willing to leave dubious political content undisturbed to preserve revenue and avoid scrutiny from lawmakers. Even now, in the wake of a clearly settled election, not all of the practices that platforms rolled out to fight coronavirus misinformation have been applied to combat clear misinformation about the democratic process.

Misinformation spreads on social media through user-generated content and advertising. The targeted-advertising business model enables tech platforms to profit directly from misinformation. Platforms collect data about users’ online activity to build detailed profiles of their views and interests, which are then used to micro-target ads. And because the United States lacks a comprehensive federal privacy law, companies have limited obligations to protect user privacy, and users have very little control over their data. This allows advertisers — including political campaigns, PACs and special interest groups — to reach users with unparalleled scale and precision. It also gives companies unfettered access to the data that powers a business model with little incentive to tackle these problems.

In addition, ad targeting and delivery systems shape how every Internet user interacts online, but tech platforms provide little transparency into how these tools actually work, what data-collection practices form their foundations, and what impact these policies and practices have on users and society at large. This makes it difficult to hold accountable both political advertisers — who may spread misinformation, disinformation and other harmful content — and the tech companies themselves.


This fundamental lack of transparency and accountability around privacy and algorithmic practices has not gone unnoticed. This month, the Federal Trade Commission ordered nine technology companies — Amazon, ByteDance (owner of TikTok), Discord, Facebook, Reddit, Snap, Twitter, WhatsApp (owned by Facebook) and YouTube (owned by Google) — to disclose how they collect and use personal information and how they target ads and other content. The orders also require the companies to disclose how they use algorithms to analyze personal information and promote user engagement, which will give regulators and the public more insight into how misinformation spreads on the platforms. This could bolster platform efforts during future elections and underscore the need for comprehensive federal privacy legislation and federally mandated transparency requirements.

Building on concerns that mounted after the 2016 election about foreign adversaries using political advertising to covertly sow discord, some platforms also took voluntary steps to provide transparency around their algorithmic advertising operations. For example, companies such as Facebook, Google and Reddit have introduced ad transparency libraries, which provide insights into the kinds of political ads that have run on their services, including which advertiser ran a campaign, what the content of its advertisements was, where an advertiser’s page is managed from, and whether a page changed its name or merged with others. The idea was that such information would make it clear who was putting out deceptive information — and make it easier to track its spread.

These libraries are, however, limited in many ways, which suggests that federally mandated transparency requirements could vastly improve public accountability for political advertising. For example, Facebook’s ad transparency library does not offer granular engagement and delivery data, such as how often an ad was shared or liked by users, or which categories of users an ad was targeted to versus whom it actually reached. Without this information, it is difficult to understand how many users engaged with an ad and what role algorithms played in determining who was exposed to misinformation. The database has also been marred by glitches that have deleted thousands of ads and limited its usefulness for researchers and journalists.

Third-party groups and researchers, especially those with expertise in particular types of disinformation or harmful actors, can also help monitor trends in political advertising and determine whether and how these platforms are being used to spread misinformation. Their work is a valuable supplement to mandated transparency from platforms. If we want to stop the spread of electoral misinformation, we need to know how it spreads. With this information, it will also be easier to push back on platforms’ lax enforcement of their own policies. Until we have it, our privacy will remain at risk — and our democracy will hang in the balance.

Christine Bannan is a policy counsel with New America’s Open Technology Institute, where she works on privacy, antitrust and other platform accountability issues.

Spandana Singh is a policy analyst with New America’s Open Technology Institute, where she researches and reports on policies and practices related to algorithmic decision-making, content moderation, transparency reporting, intermediary liability and disinformation.

