Published on Oct 08, 2020

Going beyond third-party liability to regulate content: Examples from Germany and Singapore

Given the complexities, and the risk of further fragmenting the Internet, should we consider doing more than just demanding liability from platforms?

The 2015 RightsCon witnessed the launch of the Manila Principles on Intermediary Liability, a civil society initiative proposing that internet intermediaries should not be held liable for third-party content, in particular if they are not involved in modifying it. The Manila Principles put forward the principles of proportionality, transparency, and accountability for due process in regulating content. At the time, demanding responsibility from internet intermediaries may have seemed inappropriate, even wrong.

Over time, however, there has been a wind of change. As internet intermediaries have grown to play an essential role in developing, disseminating, and amplifying harmful and illegal content such as fake news and terrorism-related material, the notion of total immunity for third-party content no longer applies. How can we make sure that the internet remains what it was intended to be: a global entity that can serve as a relatively safe space for people to interact?

"As internet intermediaries have grown to play an essential role in developing, disseminating, and amplifying harmful and illegal content such as fake news and terrorism-related content — the notion of total immunity to third party content no longer applies."

At the United Nations General Assembly in 2017, then-UK Prime Minister Theresa May stressed the urgency for technology companies to take further steps in tackling this issue, following several attacks in the UK. In 2017 alone, terrorists launched attacks in the UK at Westminster (March), Manchester (May), London Bridge (June), Finsbury Park (June) and Parsons Green (September). Global efforts to counter terrorist content also extend to the Paris Call, initiated by France in 2018. In the United States, <1> President Donald Trump in May 2020 issued an executive order seeking to limit Section 230 of the Communications Decency Act. Meanwhile, for the general audience, it would be difficult to erase from memory the Cambridge Analytica and Facebook scandal, which shone a spotlight on the liability issue, in particular the handling of personal data.

Governments all over the world are now asking online platforms to be responsible, and to be held liable, for the content they play a role in disseminating. In this regard, this paper examines the approaches of Singapore and Germany. Both countries have been at the forefront in drafting such laws, and their actions have inspired many other countries to follow suit. The paper looks at several key issues within those regulations and discusses whether content regulation should go beyond liability in law and regulation.

Singapore enacted the Protection from Online Falsehoods and Manipulation Act (known as POFMA) on 8 May 2019. In brief, the act aims to prevent the communication of false statements of fact in Singapore and provides measures to counteract such actions, including suppressing the financing behind falsehoods and enhancing the disclosure of information concerning paid content.

"Both Singapore and Germany have been at the forefront in drafting laws, and their actions have inspired many other countries to follow suit."

Germany also did not waste time in passing its Network Enforcement Act (known as NetzDG, or the hate speech law). The discussion over NetzDG started in spring 2017, when the Minister of Justice and Consumer Protection delivered the first draft. In March 2017, the draft was submitted to the European Union for review of its conformity with EU regulations; it underwent a public hearing in June 2017 and was passed by the Bundestag at the end of that month, coming into force shortly after Germany’s election in September 2017.

Both laws regulate content made available through internet intermediaries, albeit in different ways. The table below presents several key issues:

Purpose and scope

Germany: Focuses on social network platforms with more than 2 million users in Germany. Covers hate speech that violates 22 provisions of the German Criminal Code. Relies on user complaints to trigger the network’s decision to take action.

Singapore: Covers internet access service providers that fall within the scope of the Telecommunications Act (Cap 323), such as telecommunication systems and services, internet intermediaries, and internet intermediary services. Covers online falsehoods and manipulation. Gives full authority to any Minister to issue directions.

Definition and compliance

Germany: Telemedia service providers with at least 2 million users. Service providers must receive complaints from users and then act on them; a report on the complaints and the actions taken must be made public. Compliance means taking down the reported content.

Singapore: An internet intermediary is any person who provides an intermediary service, ranging from giving access to materials and transmitting them from one end user to another, to displaying content to end users after applying a series of algorithmic methods. Action under this definition can include blocking access to a specific page.

Implementation

Germany: The social media network is the responsible party. It is liable to any party across jurisdictions if the content impacts Germany.

Singapore: Anyone, whether an individual or a legal entity, can be put in charge. They are liable to any party across jurisdictions if the content impacts Singapore.

A key lesson from the different ways in which the two countries regulate content is that there is no one-size-fits-all approach. Policy discussions over this issue also require an understanding of technological developments that sometimes even regulators cannot predict. Despite the controversies around NetzDG, the demand to govern illegal content still exists. A year after NetzDG, in September 2018, the European Commission announced a set of new rules specifically governing terrorist content. Civil society organisations sent a letter to the European Union concerning its proposed Regulation on Preventing the Dissemination of Terrorist Content Online, and in particular the Regulation’s call for internet hosts to use “proactive measures” to detect terrorist content. Their concern was that if the Regulation were adopted, it would certainly lead platforms to adopt poorly understood tools such as the Hash Database. <2>

Germany and Singapore maintained their decision to regulate in the manner they saw fit, regardless of the feedback they received from relevant stakeholders. It is an example of how governments prefer to have it their own way in handling this issue. Given the complexities, and the risk of further fragmenting the Internet, should we consider doing more than just demanding liability from platforms? Regulating content will never be enough, and bearing in mind their different backgrounds and contexts, each country will arrive at its own definitions and modes of compliance. This would make the Internet an even more confusing place to operate in than it already is.

"Regulating the content will never enough, and bearing in mind the different backgrounds and context, each country will have its own definition and ways of compliance."

It is time for us to rethink whether regulating content is enough to tackle the whole issue of internet intermediaries. Just recently, the US Senate subpoenaed Google, Facebook and Twitter. One of the points raised was whether senators could come up with the right questions for the platforms, as most of them might not fully understand how the platforms work. We might want to turn that question the other way around: how can platforms be more transparent about their work, so that it becomes easier to hold a dialogue with them? We should examine the business processes that make the Internet work, and discuss how platforms operate rather than focus only on the impact of their third-party content. A shared hash database among platforms to curate content is a good starting point. However, we need to look deeper at how these tools have worked and affected us, and demand some level of transparency and accountability from the platforms. It would be ideal and useful if platforms published public reports on how our data are extracted, managed, and even monetised. With trust as a key element of internet interactions on platforms, it would be good to demand more of their accountability processes, beyond third-party liability.
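
To make the hash-database approach concrete, here is a minimal sketch in Python of how a platform might screen uploads against a shared list of known flagged material. It assumes a local folder of already-flagged files standing in for the shared database; the folder paths and function names are hypothetical. Real deployments, such as the database described in footnote <2>, typically rely on perceptual hashes that survive re-encoding rather than the plain cryptographic hashes used here.

    import hashlib
    from pathlib import Path

    def sha256_of_file(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical stand-in for a shared industry database:
    # hashes of media files already flagged as terrorist content.
    known_hashes = {sha256_of_file(p)
                    for p in Path("flagged_media").iterdir() if p.is_file()}

    def is_known_flagged(upload: Path) -> bool:
        """True only if the upload matches previously flagged material byte for byte."""
        return sha256_of_file(upload) in known_hashes

    # Screen a new upload before it is published.
    if is_known_flagged(Path("incoming/upload.mp4")):
        print("Match found: route to human review or block.")

The limits of this exact-match design are part of the policy problem: a file that is re-encoded or trivially edited produces a different hash and slips through, which is one reason civil society groups describe such tools as poorly understood.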


<1> The United States has had Section 230 of the Communications Decency Act (CDA) since 1996. CDA 230 gives internet intermediaries immunity for publishing third-party content.

<2> The Hash Database is a database of hashes of known terrorist videos and images. So far, it has gathered over 40,000 hashes. Source: https://ec.europa.eu/commission/presscorner/detail/en/IP_17_5105

The views expressed above belong to the author(s).

Contributor

Shita Lakshmi

Shita Lakshmi is Executive Director of Tifa Foundation. She is one of the founders of Indonesia Internet Governance Forum (ID-IGF) as well as Indonesia CSOs ...