By Prof. Dr. Josef Franz Lindner, who teaches Public Law, Medical Law and Philosophy of Law at the University of Augsburg
In an article dated 4 November 2024, Hannah Ruschemeier paints a positive picture of so-called 'trusted flaggers', as enshrined in the EU's Digital Services Act (DSA). She claims that by acting as reporting centres for illegal online content, they contribute to 'democratic hygiene'. The article, however, generously ignores the real problems for the rule of law and fundamental rights: the specific implementation of the DSA conceals unacceptable dangers for freedom of expression and democracy.
Seemingly harmless: the reporting system of the Digital Services Act
Trusted flaggers – 'trustworthy reporters' – are enshrined in the Digital Services Act (DSA), an EU regulation. According to Art. 22 (1) DSA, these are organisations (mostly NGOs), certified by the Member States, that act as reporting offices for illegal online content. They accept reports of illegal content from third parties, but can also search online platforms such as X for illegal content themselves. They report this content to the respective intermediary service provider, who is obliged to process such reports as a matter of priority and without undue delay and to take a decision, i.e. to delete the content if necessary. The DSA, however, only allows illegal content to be reported and deleted. Article 22 DSA is implemented in Germany by the Digital Services Act (Digitale-Dienste-Gesetz, DDG). Under that act, the 'Digital Services Coordinator' (DSC) of the Member State grants a body the status of a trusted flagger upon request if three conditions are met: the body must demonstrate particular expertise and competence in identifying, detecting and reporting illegal content, must be independent of any online platform provider, and must carry out its reporting activities diligently, accurately and objectively.
At first glance, such a reporting system seems useful for keeping illegal content off online platforms as much as possible. This is a legitimate purpose. It is true that every user of online services already has the option of reporting illegal content and demanding its removal. The DSA supplements this individual right with an institutional, state-licensed reporting infrastructure. This raises fundamental questions.
Vulnerability: What is illegal content?
The reporting system set up by the DSA is limited to illegal online content. Legitimate content, especially permissible expressions of opinion, may neither be reported nor deleted. But what is illegal content? Art. 3 lit. h DSA provides an answer. According to the legal definition,
'illegal content' means any information that, in itself or in relation to an activity, including the sale of products or the provision of services, is not in compliance with Union law or the law of any Member State which is in compliance with Union law, irrespective of the precise subject matter or nature of that law.
This definition raises doubts in several respects. Which law is to be applied in each case? Can a Hungarian trusted flagger, for example, report content from a German user that is prohibited under Hungarian law but lawful under German law – with the result that the Member State with the most repressive law on expression sets the standard? For German law, the question arises: which content is actually illegal? There are, without doubt, clear cases: false information about products is illegal under competition law, and there are equally clear cases of punishable statements under the German Criminal Code (e.g. incitement to hatred under Section 130). Disinformation as such, however, is neither illegal nor punishable; the dissemination of fake news is not a criminal offence per se. And even with the offences involving statements, the matter is often anything but clear. Whether a statement that at first glance appears unacceptable, disturbing or otherwise objectionable can be qualified as defamation or incitement to hatred is often anything but trivial. This is because the Federal Constitutional Court has erected a massive dogmatic bulwark to shield the freedom of expression guaranteed by Article 5 (1) of the Basic Law against encroachment. The scope of protection is drawn extremely broadly (BVerfGE 124, 300, marg. no. 49):
‘Opinions are characterised by the subjective relationship of the individual to the content of his statement (see BVerfGE 7, 198 <210>). They are characterised by the element of taking a position and holding a view (…). In this respect, they cannot be proven to be true or false. They enjoy the protection of the fundamental right, without it mattering whether the statement is justified or unjustified, emotional or rational, or is considered valuable or worthless, dangerous or harmless (see BVerfGE 90, 241 <247>). In this context, citizens are also not legally required to personally share the values on which the constitution is based.’
And when determining whether a statement is actually criminally relevant, the BVerfG insists on context-dependence: each incriminated statement is to be interpreted in light of the context of the speech act. Depending on that context, the statement 'soldiers are murderers', for example, is either a punishable insult – i.e. illegal content in the sense of the DSA – or a lawful expression of opinion.
The open flank of a reporting system focused on illegal content is as obvious as it is significant: there is a real danger that trusted flaggers will report a not insignificant number of lawful expressions of opinion. This risk is practically invited by the enforcement documents of the Federal Network Agency. In a 'guideline' of the DSC 'for certification as a trusted flagger', the possible illegal content that trusted flaggers are supposed to 'detect' (sic!) includes hate speech, discrimination and content with 'negative effects on civil discourse' (sic!). Such indistinct, legally intangible categories risk becoming a gateway for reporting politically undesirable but (still) lawful opinions. The politically legitimate goal of keeping illegal content off the internet is in danger of tipping over into a general control system. If pointed but lawful criticism is reported as 'hate speech', politically inconvenient views as 'incitement', or arguments deviating from the mainstream as 'negative for civil discourse', and the content is then deleted by the service provider, freedom of expression is threatened in its substance and democracy as a competition of opinions is specifically endangered. The aforementioned 'guideline' urgently needs to be revised.
A structural threat to free speech
The fact that trusted flaggers regularly lack the legal expertise and the time to assess whether an expression of opinion is illegal – apart from obvious cases – structurally endangers the fundamental right of freedom of expression. In legal practice, it is not uncommon for statements to be qualified as unlawful or permissible only after passing through several levels of appeal, with the specialised courts then having to be corrected by the Federal Constitutional Court. Are trusted flaggers supposed to make this context-dependent assessment under time pressure and the conditions of a mass influx of reports? An absurd notion. It is therefore foreseeable that lawful content, too, will be reported (and subsequently deleted) – and not only in individual cases.
One could counter that it is not the trusted flagger who interferes with freedom of expression, but the platform provider who deletes or otherwise restricts the content. That view is formally correct, but it fails to recognise the actual effect of the report, which obliges the provider to deal with it as a matter of priority and without delay. The report (of lawful content) itself thus already carries an increased risk for freedom of expression. Moreover, faced with a trusted flagger's report, the intermediary will be more inclined to delete in case of doubt: the report comes from a state-licensed and often state-funded reporting office, and the provider will want to avoid state sanctions in the event of non-deletion.
Reports of online content by trusted flaggers to platform providers authorised and prepared to delete thus already constitute a (de facto) interference with Article 5 (1) of the Basic Law. One need not call this 'censorship' – in this respect Hannah Ruschemeier is right – because the term is usually associated with the prior restraint prohibited by Art. 5 (1) sentence 3 of the Basic Law, which is not at issue here. But post-publication suppression (of lawful content), which reports from trusted flaggers effectively promote, also infringes freedom of expression. Added to this are psychological chilling effects: online users may, in case of doubt, tend towards self-censorship.
Precarious lack of transparency
One can perhaps still speak of a 'transparent procedure' – as Hannah Ruschemeier calls it – with regard to the certification of trusted flaggers under the requirements of Art. 22 DSA, but not with regard to their reporting behaviour. Here, transparency is lacking for the user affected by a report: the user is not informed when a trusted flagger reports his content. Only after deletion or a comparable reaction does the platform operator notify the user in accordance with Art. 17 DSA. An 'unsuccessful' report goes unnoticed. One might object that the person concerned has never been informed about reports by private third parties either. This objection overlooks the fact that a trusted flagger's report has a sovereign effect: it triggers a priority and immediate duty of the service provider to examine the content, a duty that does not exist for conventional private reports. The report itself thus already signifies an increased risk to freedom of expression. The user therefore has a legitimate interest in knowing whether and to what extent a trusted flagger reports content he has created. Only then can he determine whether lawful content is being reported (in violation of the DSA) and have this established by a court.
In a constitutional state that attaches considerable weight to freedom of expression as a fundamental right, it is unacceptable for expressions of opinion to be subject to a (partially) secret reporting system – trusted flaggers are not secret services. The secrecy of unsuccessful reports is constitutionally unacceptable, and it is not required by the DSA: the regulation does not prohibit the trusted flagger from notifying the person concerned. The problem, then, lies not with the DSA itself but with its implementation. There are several ways to achieve transparency: either the trusted flagger hears the person concerned in direct or analogous application of Section 28 VwVfG before filing the report – which he would regularly have to do anyway in order to establish the context relevant for assessing the reported content – or he informs the person concerned after the report has been filed. Silence is not an option. The Kafkaesque practice of leaving the person concerned in the dark also frustrates the judicial protection guaranteed by Article 19 (4) of the Basic Law (see below).
Legal protection against trusted flaggers
Every single report by a trusted flagger concretises the service provider's abstract and general duty under Art. 22 DSA for the specific individual case, for the specific content reported. This is what distinguishes the state-licensed trusted flagger from an ordinary private reporting office without state certification: it exercises functional sovereignty. This functionally sovereign activity burdens not only the intermediary service provider but also the person whose content is reported, in two respects: first, the author must tolerate the priority and immediate examination of his content; second, he must accept a de facto increased threat to his freedom of expression. Both the provider, who is obliged to carry out the priority review, and the affected author may have their rights violated when the trusted flagger takes action: the author in his freedom of expression under Article 5 (1) of the Basic Law where lawful opinions are reported, and the provider in his fundamental right to occupational freedom. In both cases this activates the guarantee of legal protection under Article 19 (4) of the Basic Law: anyone affected by a report from a trusted flagger has the right to have it reviewed by a court. The person concerned does, it is true, have the option of taking legal action against a deletion of the content. The DSA, however, provides no judicial review of the (unsuccessful) report itself.
This does not, however, preclude Member State law from providing corresponding legal remedies. Just as EU law leaves room for national procedural law – provided this is not specifically excluded by the EU legal act in question and does not conflict with the principles of effectiveness and equivalence – the Member States' systems of legal protection also remain unaffected within this framework. Although the trusted flagger reporting system itself is based on the EU's DSA, legal remedies in the Member States against its implementation, i.e. against reports from trusted flaggers, come into consideration (see also Art. 19 (1) subpara. 2 TEU). If trusted flaggers are qualified as entrusted entities (Beliehene, see below) and the report as an administrative act on account of its specific and individual regulatory effect, an action for a continued declaratory judgment before the administrative court in analogous application of Section 113 (1) sentence 4 of the Code of Administrative Court Procedure (VwGO) comes into question, since the report will have been settled once the service provider has dealt with it. If, on the other hand, the report is classified as a mere factual act (Realakt), a declaratory action under Section 43 VwGO is to be considered.
Trusted flaggers as entrusted entities (Beliehene)
Whether the legal protection offered by Art. 19 (4) of the Basic Law against trusted flaggers can be realised depends on how they are classified within the system of German administrative law. Ruschemeier simply leaves this central question unanswered. The DSA itself contains no guidance on it: it standardises only the certification of trusted flaggers and the effect of their reports on providers, and otherwise leaves the organisational classification – as is so often the case – to the administrative law of the Member States. As explained, trusted flaggers themselves exercise sovereign authority. There is therefore much to be said for regarding them as entrusted entities (Beliehene; for detailed reasoning see Struzina/Heller, Hinweisgeber nach dem DSA, NVwZ, forthcoming), since they are appointed by a state authority (the DSC at the Federal Network Agency) and decide on their own responsibility whether to report content. Their report has a duty-triggering, i.e. sovereign, character. Trusted flaggers are therefore not mere administrative assistants supporting another authority, such as the DSC, in its activities. The fact that the act of entrustment – i.e. the certification of a private body as a trusted flagger – is based on an EU regulation (Art. 22 DSA) does not change its character as an entrustment.
This is because an EU regulation, as directly applicable EU law, is a suitable basis for administrative acts by the Member States, and thus also for acts of entrustment. As bodies entrusted with official authority, trusted flaggers are bound by the fundamental rights of the Basic Law – in particular the fundamental right of freedom of expression – within the scope of their official activity, i.e. the duty-triggering reporting of online content, and thus de facto also by the case law of the Federal Constitutional Court (BVerfG) on Article 5 (1) of the Basic Law. In addition, the Freedom of Information Act applies to them. Even if one were to deny their qualification as entrusted entities, they would in any case have to be assigned to the sovereign administration. If trusted flaggers were qualified as purely private reporting offices, we would be faced with an 'escape into private law' by which the state evades its fundamental-rights obligations.
Originally published in German on Verfassungsblog
Disclaimer: www.BrusselsReport.eu will under no circumstance be held legally responsible or liable for the content of any article appearing on the website, as only the author of an article is legally responsible for that, also in accordance with the terms of use.