Regulating speech in the new public square

Feb 19, 2019 - Last updated at Feb 19, 2019

By Madeleine de Cock Buning and Miguel Poiares Maduro

 

FLORENCE — Today, debates about public issues play out on social media, people receive their news via digital platforms and politicians pitch their policies using these same media. The Internet is our new public square.

In the public square of old, journalists and editors served as gatekeepers and acted as referees. Human news aggregators set the agenda and provided audiences with credible information and a diversity of views. We trusted them because of the professionalism and integrity of their editorial processes.

In the new public sphere, this model of journalism, and of journalism’s role in sustaining democracy, has become obsolete. Traditional media no longer play a dominant gatekeeping and agenda-setting role. Fake news can reach multiple jurisdictions at once.

But so can public and private measures that censor speech. The challenge is to redefine the parameters of civil discourse in the new public sphere without restricting pluralism. Recent examples highlight the risk of throwing the baby out with the bathwater.

Despite the ominous headlines, the influence of fake news on political decision-making appears to be limited. According to the Reuters Institute for the Study of Journalism at the University of Oxford, the reach of such content is largely restricted to groups of believers seeking to reinforce their own views and prejudices. But that does not make digital deception any less dangerous. Fake news feeds, and is fed by, polarisation, and, paradoxically, the more it is discussed, the more disruptive it becomes.

That is because fake news undermines trust in all forms of media and reinforces the view that it is impossible to discern fact from fiction. When people do not know what they can believe, journalists’ ability to police the powerful is weakened. This trend will only worsen as “deep-fake news”, bogus images and videos that appear real, becomes more widespread.

Clearly, the vulnerabilities of the digital public sphere must be addressed. Some argue that the solution is to block questionable websites or demote search results. Facebook, for example, censors duplicitous posts and has created an election “war room” to fight disinformation. Other global platforms, like Google and Twitter, have considered similar steps, and all three are being pressured to give authorities access to the private data of users who publish fake news or make defamatory statements. But we believe that these steps, while seemingly prudent, are deeply misguided.

At the heart of any strong democracy are political consensus and arbitration, both of which depend on the public’s ability to debate and disagree. It is not up to private entities, or public institutions for that matter, to censor this process. Rather, we should be working to ensure that citizens have access to a broad array of opinions and ideas and understand what they are reading, viewing or hearing. Freedom of expression includes the right to receive and impart information without interference, which implies the corollary values of media freedom and media pluralism, as enshrined in the EU Charter of Fundamental Rights. Studies show that most people prefer reliable and pluralistic news sources; policymakers’ job is to enable them to realise this preference.

A March 2018 report to the European Commission by the High-Level Group on Fake News and Online Disinformation, which one of us (de Cock Buning) chaired, offered a roadmap, and the European Commission’s recent Action Plan provides a good starting point. But more needs to be done.

There is no silver bullet against disinformation. Only multi-stakeholder approaches that spread responsibility across the news ecosystem and take into account the fundamental rights involved can provide an adequate defence.

For example, professional media must do more to guarantee the veracity of their coverage. Fact-checking technology can help, as long as it is kept free of political and economic influence. Google, Facebook and Twitter should stay out of the fact-checking business.

“Big Tech” is starting to take responsibility by committing to a code of practice based on the 10 key principles set out in the High-Level Group’s report. But Big Tech can contribute in other ways, such as by providing client-based interfaces for curating legitimate news, ensuring diversity in social-media timelines and giving high priority to reposting fact-checked information. Platforms can also be more transparent about how they use data and how their algorithms work. Ideally, these algorithms should give consumers more control over editorial preferences and integrate editing and fact-checking applications developed by reliable media organisations.

Platforms must also clearly identify news sources, especially paid political or commercial content. Many of these more immediate measures can and should be implemented in advance of the European Parliament election in May 2019.

We also need new international collaboration and better jurisdictional rules to ensure that laws and regulations protect victims of fake and offensive news, without restricting free speech or undermining the rights of whistleblowers. In particular, these conflicts should not be legally settled where only one of the parties has effective access to justice.  

Finally, platform companies should cooperate with schools, civil-society groups and news organisations to strengthen the public’s media literacy. Data show that consumers in some markets still have difficulty distinguishing fake news from real.

Well-intentioned efforts to scrub the new public square of disinformation will certainly backfire; only consumers can marginalise fake news. We cannot allow private companies or governments to decide what people should know. The history of democracy is clear on this point: pluralism, not private or public censorship, is the best guarantor of truth.

 

Madeleine de Cock Buning, professor of digital politics, economy, and societies in the School of Transnational Governance at the European University Institute, was chair of the European Commission’s High-Level Group on Fake News and Online Disinformation. Miguel Poiares Maduro, director of the School of Transnational Governance at the European University Institute, was a member of the European Commission High Level Group on Media Freedom and Pluralism. Copyright: Project Syndicate, 2019.
www.project-syndicate.org
