Kevin Allison is a Senior Editor for Signal. Based in Washington DC, he looks at how technology is reshaping global affairs. Kevin is also a Director in the Geo-Technology practice at Eurasia Group. Kevin holds degrees from the University of Missouri and from Harvard's Kennedy School of Government. He was also a Fulbright Scholar in Vienna, Austria and a 2015 Miller Journalism Fellow at the Santa Fe Institute. Prior to GZERO Media and Eurasia Group, Kevin was a journalist at Reuters and the Financial Times. He has lived in eight US states and has been an expat four times.
Fake news is a problem; everybody knows that. When technology helps bad actors spread lies and sow discord, it’s bad for democracy, which relies on citizens making considered judgments at the polls. It’s also a boon to authoritarians, who can stamp out criticism and bury unfavorable news by creating confusion about what’s true and what’s false.
The more interesting question is, what kind of problem is it?
Is fake news like cybersecurity – a threat that worsens over time as attackers gain new tools and adopt new strategies? Or is it more like spam – a technology problem that once seemed overwhelming but has slowly been brought to heel as companies have invested more time and money in fighting it?
The answer will have big implications for the way the media are regulated in the future: if fake news is an intractable arms race, governments will be more tempted to mimic China’s model of erecting firewalls and censoring websites to stop the spread of potentially destabilizing information. If fake news is a solvable problem, or at least a manageable one, like spam, the future of online speech will be a lot more free.
Two recent data points offer some hope. Last week, we wrote about big social media companies’ decisions to ban the conspiracy theorist Alex Jones, the latest sign that the big websites where fake news often spreads are becoming more engaged with the problem. Less well publicized was the fact that DARPA, the Pentagon’s research and development arm, has been making progress on tools that can detect so-called “deepfakes,” the ultra-realistic fake audio and video created using artificial intelligence that some people worry could unleash a torrent of politically motivated fakery.
Part of the problem with fake news is that people tend to believe what they want to believe – technology won’t solve that. But with industry and government both now paying closer attention, maybe, just maybe, technology can make the problem more manageable.