Fake News, Freedom of Speech and the Weight of Tech Companies

Why fake news will be the challenge of the 21st century

Photo of Donald Trump in a French newspaper. Source: Charles Deluvio on Unsplash (edited)

Fake news: a new and scary phenomenon

“Fake news.” BuzzFeed started using those words in late 2016, and within a year the term had gone viral. We could finally name a phenomenon we saw emerging around the globe. History has its fair share of lies, deception, and propaganda, but fake news was born of the democratization of social media.

Why is the phenomenon unprecedented?

The 20th century was rich in imagined plots. We wondered about our governments’ dirty secrets. We speculated about their involvement in historical events, like Man’s first steps on the Moon, Kennedy’s assassination, or September 11. What will future generations know about us that we don’t, once state secrets are revealed?

But the 21st century arrived with new digital tools that allow a global and continuous exercise of freedom of speech: the internet and social media.

Today, all citizens can debate on an equal footing. The barriers between sources of information have broken down. We don’t need to be journalists, experts, or politicians: in a few clicks, we can all be read, commented on, and shared. Our voices carry as far as the followers we can rally.

Social media also lends social proof to information, giving users a sense of proximity through human interaction. This may explain why users often don’t verify what they read. The more people interact with a post by sharing, commenting, or liking, the more interactions it is likely to receive. In other words, virality is a self-fulfilling circle.

Almost every shared link comes with its commentators, who are not always objective and who sometimes distort even the best-intentioned words, not always on purpose. Of course, some also intentionally falsify events. We are drowning in a mass of contradictory information. As a result, it has become increasingly difficult to sort the true from the false.

Edson Tandoc, who studied the phenomenon, found that the public plays a crucial role in constructing fake news.

“While news is constructed by journalists, it seems that fake news is co-constructed by the audience, for its fakeness depends a lot on whether the audience perceives the fake as real.”

Yes, the fake news problem is unprecedented because, in the Internet era, the spread of ideas has gone viral. We are living through a sort of epidemic of rumors.

Beyond social networks, alternative media also play a role in spreading fake news, leveraging public mistrust of mainstream media. Many of these publishers are not transparent about the politicized or militant nature of their content. The information they provide looks journalistic but does not always respect journalism’s ethics of objectivity.

Why is fake news dangerous?

We could ask ourselves, “Isn’t giving everyone a medium to exercise their freedom of speech a good thing?”

Philippe Mouron, a French law professor, raised this question:

“Social networks and [online] platforms guarantee a multidimensional exercise of freedom of expression, which is unprecedented in the history of communication […] From this point of view, social networks correctly fulfill their democratic mission by providing perfect equality of access to the means of communicating ideas and information. This is why the debate is necessarily polluted by mechanisms of manipulation that have been able to thrive in these services. But isn’t this the ultimate purpose of this fundamental freedom?”

Freedom of speech means giving everyone a chance to speak, even when their thoughts are disturbing. There are benefits, even in the spread of fake news. Social media dig up ideas and questions that would otherwise remain below the surface. Fact-checking, born in opposition to fake news, brings the truth out of these discussions.

However, not everyone takes the time to follow debates through to their resolution. And even if everyone followed each story to its source, our biases and beliefs can lead us to reject the answers provided by fact-checkers.

We can’t adopt a laissez-faire strategy, because we can’t ignore the consequences fake news has on our societies. Lawmakers are urging social networks to regulate their content, fearing that misinformation is becoming a threat to public order, safety, and health.

Indeed, violent extremist movements such as neo-Nazism and white supremacism are regaining popularity. Distrust of vaccines has also grown: in France, half of the population doesn’t want to get vaccinated against the coronavirus. Fake news can also influence the outcome of elections (Bolsonaro in Brazil) or democratic processes (Trump supporters storming the Capitol).

The early days of the Internet are over, and the time for regulation has come.

Setting new limits for the freedom of speech

Freedom of speech is the foundation of democratic societies

In the words of Professor Michel Verpeaux, a French legal scholar, freedom of speech expresses “the identity and autonomy of individuals and conditions their relationships with others in society.”

Freedom of speech was theorized by the philosophers of the Enlightenment in the 17th and 18th centuries. It became a foundation of democracy in the constitutional texts of the late 18th century, like the First Amendment to the Constitution of the United States in 1791 and Article XI of the Declaration of the Rights of Man and of the Citizen, adopted in 1789 after the French Revolution.

Inscription of the First Amendment (December 15, 1791) in front of Independence Hall in Philadelphia. Source: Wikipedia

If sovereign power is handed over to the people, they must be able to express their opinions freely, without fear of repression. Governments must not interfere with the circulation of ideas.

But philosophers never claimed that freedoms were absolute. Democratic states have always limited freedoms, whether to protect public order or other fundamental rights and freedoms. Hateful or offensive speech, calls for violence, and sometimes even religion and morality can set boundaries on one’s right to speak one’s mind.

Those limitations are set through democratic processes, and the judiciary must review them.

Tech companies: new powerful actors in the public debate

In the 18th century, the challenge was to protect freedom of speech against state censorship. Today, it is much more about tech companies’ moderation policies. Facebook, WhatsApp, Twitter, Google: Big Tech’s influence extends beyond national borders.

Content hosting platforms have legal obligations when it comes to moderation. In the United States, for example, their liability for user-generated content is framed by Section 230 of the Communications Decency Act, and in some circumstances they can be held responsible if they fail to remove illegal content that users report.

The coronavirus crisis pushed social networks toward aggressive moderation, using systematic removals to avoid liability: YouTube decided to remove conspiracy videos linking the coronavirus to 5G, and Twitter deletes posts that contradict official WHO recommendations. Twitter’s banning of Trump for his anti-democratic behavior is the ultimate example.

Removal and banning policies are anything but trivial. In France, the Constitutional Council struck down the Avia bill, which aimed to strengthen platforms’ responsibility in the fight against hateful content online. The Council issued a press release explaining that the bill “can only encourage online platform operators to remove content that is reported to them, whether or not it is manifestly illegal.”

The Constitutional Council said the legality of content should not be determined by tech companies (or by their human moderators and bots). Ultimately, a judge must be able to review the lawfulness of content, not social networks.

We used to think that state censorship was the only threat to public freedoms. But today, there are inequalities in the balance of power between users and platforms. Social networks’ moderation rules are so powerful that we may wonder whether they weigh as much as state censorship. In the words of an Italian court: “the nature of freedoms does not change whether the threats come from a public authority or from an individual.”

U.S. case law shows how difficult it is to assess these new relationships between private actors of unequal strength:

On the one hand, the Supreme Court recognized that accessing social media is a fundamental right (Packingham v. North Carolina, 2017). On the other hand, another line of case law relies on the First Amendment to protect search engines’ editorial freedom: search engines are free to select and reference content according to their own “editorial” choices (Zhang v. Baidu, U.S. District Court for the Southern District of New York, 2013).

In Zhang v. Baidu, the plaintiffs asked Baidu, a Chinese search engine, to stop de-listing their content. Baidu refused, as the content criticized the Chinese government. The court held that Baidu’s referencing choices were lawful because they were equivalent to editorial decisions.

Understanding this balance of power between private actors (individuals and tech companies) is necessary. If we don’t take it into consideration, we risk pushing people out of the public debate.

Now, most Americans don’t care about Baidu’s referencing practices. We get it: China is not so fond of freedom of speech. But what this case law really means is that someone could pay Google to make certain “editorial choices.” That already exists; it’s called “sponsorship.” But what about negative selection, as in Baidu’s case? What if someone paid to have certain content removed?

We already see this happening with YouTube’s advertising practices. To protect advertisers’ interests, YouTube sidelines sensitive content. For the moment, YouTube is not removing this content, and to be fair, doing so would be bad publicity. But as the French Constitutional Council pointed out, our policies against fake news will push platforms to remove even more of the content reported to them.

The French institution is not alone. Organizations promoting fundamental freedoms and human rights share similar worries. The Center for Democracy and Technology, for instance, has stressed concerns about automated moderation, which increases the risk of false positives.

Human moderators and bots are not legal professionals, nor do they have the time to fact-check every reported post. And even if they found that time, limiting authorized content to facts is impossible. While we can often tell whether an article has been intentionally falsified, much so-called “fake news” is not made of pure and simple lies. Has speculation about vaccines created risks to public safety? Yes, it has. But preventing people from questioning the safety of products, especially health products, would be an even more dangerous threat to freedom of speech.

The Office of the High Commissioner for Human Rights published a statement against the systematic removal practices that emerged during the pandemic:

We share the concern that false information about the pandemic could lead to health concerns, panic and disorder. In this connection, it is essential that governments and internet companies address disinformation in the first instance by themselves providing reliable information.[…]

Resorting to other measures, such as content take-downs and censorship, may result in limiting access to important information for public health and should only be undertaken where they meet the standards of necessity and proportionality. Any attempts to criminalise information relating to the pandemic may create distrust in institutional information, delay access to reliable information and have a chilling effect on freedom of expression.

While we must find a way to prevent the spread of intentionally falsified content, we cannot, and should not, prevent the public from sharing opinions. We must also take precautions when removing content, as authentic information could emerge from the mass of fake news. It could take various forms, such as whistleblowers alerting the population.

Finally, there is the issue of controversial scientific content. In that regard, Philippe Mouron commented on the very questionable claims of Professor Didier Raoult about hydroxychloroquine:

“The example of Professor Didier Raoult’s video, qualified as “fake news” by the [newspaper] Le Monde, perfectly illustrates these limitations. […] Nor can one rely on the contrary opinions of other researchers in the same discipline, the confrontation of opinions being the very basis of the freedom of research. Calling this video “wrong” information would itself be misinformation from that point of view! As we have seen, the claim has fortunately been rectified, which does not preclude debating the methods used by Professor Raoult, about which there is a real and serious discussion.”

Obviously, social media moderators are not qualified to resolve scientific disagreements, and they should not interfere with the sharing of opinions about scientific claims, as long as those opinions are not intentionally misleading.

False information has always been around, but the fake news phenomenon is very different from what previous generations knew. The way we interact with content, in particular the way we share information, has changed dramatically. Content spreads more virally than ever before, making it easier to disseminate misinformation.

This new environment is likely to generate risks to public safety. We have witnessed a resurgence of violent movements, rapid shifts in public opinion about health treatments, and jeopardized democratic processes.

But we must keep in mind that fake news is, above all, a matter of freedom of speech, and that regulation in this area can endanger fundamental freedoms. Social networks, whose moderation practices are at the heart of the fight against fake news, are not qualified to decide on the legitimacy of sharing opinions and information. In a democracy, judges should have the final say on the use of censorship.

Some courts have recognized the inequalities of strength between users and social networks. We must be careful not to encourage platforms to remove questionable content automatically, or we could push people out of the public debate and seriously endanger freedom of speech.

French writer, jurist, youth worker.
