Our elections are still not secure

Anders Fogh Rasmussen was prime minister of Denmark from 2001 to 2009 and secretary general of NATO from 2009 to 2014. Michael Chertoff was U.S. homeland security secretary from 2005 to 2009.

With the midterms approaching, our elections are still not secure. It has been nearly two years since it was discovered that Russia meddled in the 2016 U.S. presidential election, robbing American citizens of their fundamental right to freely elect their leaders. And yet, although U.S. homeland security and election officials have made progress in protecting voting infrastructure, neither the United States nor Europe has taken concerted steps to secure the 20-plus elections that will take place around the world between now and 2020.

For example, according to our research, the U.S. Congress has proposed eight election-related bills, the vast majority of them bipartisan, that nonetheless face little prospect of becoming law anytime soon, given the current hyper-partisan state of American politics.

Some leaders are demanding action. Last year, former vice president Joe Biden called for a 9/11-type commission. But that is not enough. The threat of election meddling also exists on the other side of the Atlantic, and it can be confronted effectively only if democracies work together.

This is why we joined with Biden, former Mexican president Felipe Calderón, former British deputy prime minister Nick Clegg and 10 other leaders from politics, academia and technology to establish the Transatlantic Commission on Election Integrity. The commission includes representatives from North and Central America, the European Union and Ukraine, which has been a testing ground for Russian interference. Our goal is to encourage governments and legislatures to take all available measures to raise public awareness about the risks of interference and to protect citizens’ most fundamental right: the ability to freely elect their leaders.

We are working on addressing “deep fake” technology, which can be used to generate manipulated images and video that look real. This technology gained notoriety in early 2018 when it was used to digitally superimpose celebrity faces on the bodies of adult film actors. The technology is advancing at an unsettling pace. The production of deep fake videos could soon be broadly wielded — a terror cell or lone-wolf attacker could one day leverage this technology to spread alarming messages across our societies.

The antidote to deep fake videos lies in both technological tools and better public awareness. We are working with partners, including ASI Data Science, a data consulting company, to develop warning systems that alert users when a piece of audio-visual content is suspected to be fake. Such a warning could appear in the title or as a watermark over the video itself, flagging potentially falsified material. These tools would be useful both for individuals browsing sites like YouTube and for journalists who want to verify a video before recirculating it.
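
To make the watermark idea concrete, here is a minimal sketch of how such an overlay might be stamped onto a single video frame. It is purely illustrative: the commission's actual tooling is not public, and the label text, banner placement and use of the Pillow imaging library are our assumptions.

```python
# Illustrative only: stamps a "suspected fake" warning onto one frame.
# The label text, placement and Pillow-based approach are assumptions,
# not the commission's actual implementation.
from PIL import Image, ImageDraw, ImageFont

def watermark_frame(path_in: str, path_out: str,
                    label: str = "WARNING: SUSPECTED MANIPULATED MEDIA") -> None:
    frame = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Semi-transparent red banner across the top of the frame.
    draw.rectangle([(0, 0), (frame.width, 24)], fill=(200, 0, 0, 160))
    draw.text((8, 6), label, font=font, fill=(255, 255, 255, 255))

    Image.alpha_composite(frame, overlay).convert("RGB").save(path_out)

# Example usage with hypothetical file names:
watermark_frame("frame.jpg", "frame_flagged.jpg")
```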

Our commission also deployed a new software tool to track disinformation in real time. It uses an algorithm to scan public data and check for several quantitative indicators that are typical of social media disinformation, including the rate at which new accounts are created and the share of comments on a social platform that come from automated accounts. When drawing conclusions, we consider these quantitative factors alongside the style, tone and content of the messages spread by automated accounts.
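
As a rough illustration of what such quantitative indicators might look like in code, the sketch below computes the two the article mentions: the share of activity from newly created accounts and the share of posts from accounts that behave like bots. Everything here, including the Post structure, the 30-day window and the posting-rate threshold, is a hypothetical stand-in; the commission's actual algorithm is not described in detail.

```python
# A hypothetical sketch of two bot-activity indicators; the data model
# and thresholds are illustrative, not the commission's actual algorithm.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    account_created: datetime  # when the author's account was registered
    posts_per_day: float       # author's average posting rate

def new_account_share(posts: list[Post], window_days: int = 30) -> float:
    """Fraction of posts written by accounts created in the last `window_days`."""
    if not posts:
        return 0.0
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    recent = sum(1 for p in posts if p.account_created >= cutoff)
    return recent / len(posts)

def automated_share(posts: list[Post], rate_threshold: float = 50.0) -> float:
    """Share of posts from accounts whose posting rate suggests automation."""
    if not posts:
        return 0.0
    bots = sum(1 for p in posts if p.posts_per_day >= rate_threshold)
    return bots / len(posts)

# Example: compute both indicators over a small sample of posts.
posts = [
    Post("a", datetime(2018, 9, 1), 80.0),  # recently created, hyperactive
    Post("b", datetime(2015, 3, 2), 1.5),   # long-standing, low-volume
]
print(new_account_share(posts), automated_share(posts))
```

In practice, neither signal is conclusive on its own, which is why, as the article notes, the quantitative readings are weighed alongside the style, tone and content of the messages.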

We used this tool, for example, during Macedonia’s September referendum on changing the country’s name, an important decision that could accelerate the country’s entry into NATO. In the month before the vote, we found an uptick in the number of new bot accounts created on Twitter, as well as an increase in the activity of existing bot accounts. Together, these accounts made up 10 percent of the national Twitter conversation, a higher share than in the recent Mexican and Italian elections.

Many of those accounts were used to amplify narratives calling for a boycott of the Macedonian vote. The objective of those behind the campaign was simple: ensure that turnout fell below the 50 percent threshold required for the result to be valid. The tweets, combined with Facebook pages reinforcing the same messages, most likely contributed to the 37 percent turnout recorded when Macedonians went to the polls.

Our goal in tracking disinformation campaigns in real time is to sound the alarm to the public, the media and the government. We see this as the most impactful course of action, more so than focusing on the gargantuan task of policing global social platforms. As malign actors develop new, cutting-edge tactics such as political deep fake videos, our citizens’ disinformation illiteracy will have ever more dire consequences for our democratic institutions. Citizens must learn to identify disinformation when it appears alongside posts from more trusted sources.

Through the transatlantic commission, we hope to make real progress in stemming the tide of this powerful weapon. But the commission’s work alone will not bring systemic change. Governments across the globe, technology companies and individual citizens must each play their part in preserving democracy’s greatest assets: free and fair elections, an independent media and our citizens’ freedom of thought.

This was produced by The WorldPost, a partnership of the Berggruen Institute and The Washington Post.