The Government Needs Better Data to Stop Election Meddling

Tech companies need to be more forthcoming with government officials about attempted misinformation campaigns, experts say.



Tech companies must be more forthcoming as misinformation campaigns ramp up before the midterm elections, internet researchers said.

Online platforms need to be more transparent with government to help fight increasingly sophisticated online misinformation campaigns led by Russia and other adversaries, social media experts and internet analysts told lawmakers on Wednesday.

Government leaders must also make it clear to adversaries there will be consequences if they attempt to disrupt elections, they said.

Nearly two years after officials first uncovered Russia’s attempts to meddle in the U.S. election, the conversation on Capitol Hill is shifting away from what happened in 2016 to how to stop similar campaigns in the years ahead.

In their testimony before the Senate Intelligence Committee, witnesses said Russian attempts to influence American politics continue even today and the government has a responsibility to lessen the impact of information warfare on society. They said that role could include alerting the public when influence attempts are uncovered, deterring foreign leaders from engaging in such campaigns and identifying potential threats in new technologies like artificial intelligence before bad actors can exploit them.

“Civil society, our media institutions and the technology sector can only do so much,” said John Kelly, CEO of the analytics firm Graphika. “The responsibility also lies with government to ensure any state actor eager to manipulate and harass faces consequences for their actions. It’s not just bots that are attacking us, and it’s not just algorithms that must protect us.”

The discussion about Russia’s online interference efforts has focused mostly on the 2016 presidential race, but witnesses repeatedly underscored the fact that those attempts didn’t stop after election day—if anything, aggressors have “stepped on the gas,” Kelly said.

Sens. Claire McCaskill, D-Mo., and Jeanne Shaheen, D-N.H., both revealed their offices were targeted by hackers during the last year, and Facebook on Tuesday removed 32 pages and accounts that were engaged in suspicious activity. Though the company didn’t directly point its finger at Russia, it noted the accounts employed many of the same strategies the Russia-based Internet Research Agency used to interfere in 2016.

During the hearing, Kelly said his company found that almost 30 percent of the shuttered IRA social media profiles are linked to still-active accounts, and Renee DiResta, director of research at New Knowledge, said a number of dormant IRA-run Twitter accounts could easily be reactivated.

“What may have once been a failure to imagine is now a failure to act,” said Laura Rosenberger, director of the Alliance for Securing Democracy at the German Marshall Fund. “Russia is playing to its asymmetric advantage—this is a low-cost high reward kind of tactic. We also need to figure out what our asymmetric advantage is.”

One strategy Rosenberger suggested is raising the cost to Putin of waging misinformation campaigns through a combination of sanctions and banking restrictions, while different tactics could be more effective against other countries that engage in such activity, such as China. Witnesses also suggested that increasing media literacy could make voters less susceptible to misinformation.

Experts also placed significant onus on tech companies, calling on platforms to proactively cut off the channels used to spread misinformation and be more transparent when they discover questionable behavior.

“Much of what we’ve discussed today has come from evidence that’s been released very slowly over a two-year period, often after prodding,” said Philip Howard, director of the Oxford Internet Institute. “The time for industry self-regulation has probably passed.”

Lawmakers and witnesses both lauded Facebook’s recent disclosure of fraudulent accounts and called on other platforms to follow suit. The companies should also work with government to develop better technical tools to detect and attribute bots and automated accounts engaged in influence operations.

Howard said his organization discovered some 48 governments around the world used organized misinformation campaigns against their own citizens this year. Seven of those “authoritarian regimes” also had official budgets for targeting foreign voters. As nations like Turkey, China, Hungary and Iran ramp up their efforts, social media companies are the groups that possess the information that could shed light on the entire problem, and it’s critical they do so, he said.

“The more public, open data there is about public life, the faster we can catch these moments of manipulation,” Howard told the committee.

Sen. Mark Warner, D-Va., announced executives from Facebook, Twitter and Google would testify before the committee about their strategies for curbing online influence campaigns on Sept. 5.

Jack Corrigan is a staff correspondent at Government Executive's NextGov.