
EU chat control negotiations kick off: Commissioner Johansson defends controversial surveillance law before LIBE committee


Today from 3-4 p.m., EU Home Affairs Commissioner Ylva Johansson will officially present her controversial draft law on mandatory message and chat control to the EU Parliament’s Civil Liberties Committee (LIBE) and answer MEPs’ questions. With the law, the Commission wants to oblige providers of email, chat and messenger services, among others, to search all messages for suspicious content in a fully automated way and forward hits to the police, citing the prosecution of “child pornography” as justification. End-to-end encryption would have to be undermined by scanning directly on all cell phones.

MEP and civil rights activist Dr. Patrick Breyer (Pirate Party), who is co-negotiating the law as shadow rapporteur for the Greens/European Free Alliance group, comments:

“An indiscriminate, suspicionless search that violates fundamental rights is the wrong way to protect young people. It even endangers them by putting their private recordings in the wrong hands and by criminalizing children in many cases. Victim protection depends above all on focusing all resources on preventing abuse and the production of abusive material. Child porn rings do not organize themselves on Facebook or WhatsApp but use self-operated forums and encrypted archives, which automated chat control completely misses. Children have a right to protection from political instrumentalization!

In addition to ineffective network blocking censorship, chat control threatens the end of the digital privacy of correspondence and of secure encryption; the screening of personal cloud storage threatens mass surveillance of private photos; age verification threatens the end of anonymous communication; and app store censorship threatens the end of secure messenger apps and would place young people under paternalistic control. What is not planned, however, is a long overdue obligation to delete known abuse material on the Internet, or the urgently needed Europe-wide standards for effective prevention measures, victim assistance and counseling, and consistent criminal investigations.

Chat control is like the post office opening and scanning all letters – ineffective and illegal. Even the most intimate nude photos and sex chats can suddenly end up with company personnel or the police because of error-prone algorithms. Those who destroy the digital secrecy of correspondence destroy trust. We all depend on the security and confidentiality of private communication: People in need, victims of abuse, children, the economy and also state authorities.”

Debunking myths

When the draft law on chat control was first presented in May 2022, the EU Commission promoted the controversial plan with a range of arguments. In the following, the main claims are examined and debunked:

1. “Today, photos and videos depicting child sexual abuse are massively circulated on the Internet. In 2021, 29 million cases were reported to the U.S. National Center for Missing and Exploited Children.”

To speak exclusively of depictions of child sexual abuse in the context of chat control is misleading. It is true that child sexual exploitation material (CSEM) often consists of footage of sexual violence against minors (child sexual abuse material, CSAM). However, an international working group of child protection institutions points out that criminal material also includes recordings of sexual acts or of the sexual organs of minors in which no violence is used and no other person is involved. Recordings made in everyday situations are also mentioned, such as a family picture of a girl in a bikini or naked in her mother’s boots. Recordings made or shared without the knowledge of the minor are also covered. Punishable CSEM further includes comics, drawings, manga/anime, and computer-generated depictions of fictional minors. Finally, criminal depictions also include self-made sexual recordings of minors, for example for forwarding to partners of the same age (“sexting”). The working group therefore proposes the term “depictions of sexual exploitation” of minors as a more accurate description. In this context, recordings of children (up to 14 years of age) and of adolescents (up to 18 years of age) are equally punishable.

2. “In 2021 alone, 85 million images and videos of child sexual abuse were reported worldwide.”

There are many misleading claims circulating about the extent of sexually exploitative images of minors (CSEM). The figure the EU Commission uses to defend its plans comes from the U.S. nongovernmental organization NCMEC (National Center for Missing and Exploited Children) and includes duplicates, because CSEM is shared multiple times and often not deleted. Excluding duplicates, 22 million unique recordings remain of the 85 million reported.

75 percent of all NCMEC reports from 2021 came from Meta (Facebook, Instagram, WhatsApp). Facebook’s own internal analysis says that “more than 90 percent of [CSEM on Facebook in 2020] was identical or visually similar to previously reported content. And copies of just six videos were responsible for more than half of the child-exploitative content.” So NCMEC’s much-cited numbers do not really describe the extent of online recordings of sexual violence against children. Rather, they describe how often Facebook discovers copies of footage it already knows about. That, too, is relevant.

Not all unique recordings reported to NCMEC show violence against children. The 85 million depictions reported by NCMEC also include consensual sexting, for example. The number of depictions of abuse reported to NCMEC in 2021 was 1.4 million.

Only 7% of NCMEC’s global suspicious activity reports (SARs) are forwarded to the European Union.
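To put these figures into proportion, here is a minimal back-of-the-envelope calculation in Python. It uses only the numbers quoted above; the percentages are derived for illustration, not official statistics:

```python
# Putting the NCMEC figures from the text into proportion.
reported = 85_000_000        # recordings reported to NCMEC in 2021, incl. duplicates
unique = 22_000_000          # unique recordings after removing duplicates
confirmed_abuse = 1_400_000  # depictions of abuse reported to NCMEC in 2021

print(f"Unique share of reports:        {unique / reported:.0%}")          # ~26%
print(f"Confirmed abuse vs. reported:   {confirmed_abuse / reported:.1%}") # ~1.6%
```

In other words, only about a quarter of the reported recordings are unique, and confirmed depictions of abuse make up well under 2% of the headline figure.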

Moreover, even on Facebook, where chat control has long been used voluntarily, the numbers for the dissemination of abusive material continue to rise. Chat control is thus not a solution.


3. “64% increase in reports of confirmed child sexual abuse in 2021 compared to the previous year.”

The fact that the voluntary chat control algorithms of large U.S. providers have reported more CSEM says nothing about how the total amount of CSEM has evolved. The very configuration of the algorithms has a large impact on the number of SARs. Moreover, the fact that reports keep rising despite years of voluntary scanning shows that the circulation of CSEM cannot be contained by means of chat control.

4. “Europe is the global hub for most of the material.”

Only 7% of global NCMEC SARs are forwarded to the European Union. Incidentally, European law enforcement agencies such as Europol and the German Federal Criminal Police Office (BKA) knowingly refrain from reporting abusive material to hosting services for removal, so the amount of material stored here cannot decrease.

5. “A Europol-backed investigation based on a report from an online service provider led to the rescue of 146 children worldwide, with over 100 suspects identified across the EU.”

The report was made by a cloud storage provider, not a communications service provider. To screen cloud storage, it is not necessary to mandate the monitoring of everyone’s communications. If you want to catch the perpetrators of online crimes related to child abuse material, you should use so-called honeypots and other methods that do not require monitoring the communications of the entire population.

6. “Existing means of detecting relevant content will no longer be available when the current interim solution expires.”

Hosting service providers (file hosters, cloud services) and social media providers will still be allowed to scan after the ePrivacy exemption expires. For providers of communications services, the exemption regulation permitting voluntary chat scanning could be extended without obliging all providers to scan.

7. Metaphors: Chat control is “like a spam filter” / “like a magnet looking for a needle in a haystack: the magnet doesn’t see the hay” / “like a police dog sniffing out letters: it has no idea what’s inside.” The content of your communication will not be seen by anyone if there is no hit. “Detection for cybersecurity purposes is already taking place, such as the detection of links in WhatsApp” or spam filters.

Malware and spam filters do not disclose the content of private communications to third parties and do not result in innocent people being flagged. They do not result in the removal or long-term blocking of profiles in social media or online services.

8. “As far as the detection of new abusive material on the net is concerned, the hit rate is well over 90%. … Some existing grooming detection technologies (such as Microsoft’s) have an ‘accuracy rate’ of 88%, before human review.”

Given the unmanageable number of messages, even a small error rate results in countless false positives that can far exceed the number of correct reports. Even at a 99% accuracy rate (i.e., an error rate of just 1%), this would mean that of the 100 billion messages sent daily via WhatsApp alone, 1 billion (i.e., 1,000,000,000) false positives would need to be verified. And that is every day, and on a single platform alone. The “human review burden” on law enforcement would be immense, while investigators are already struggling with backlogs and overstretched resources.
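To illustrate the scale of the false-positive problem, here is a minimal sketch in Python. The message volume and the 1% error rate come from the paragraph above; the prevalence and detection-rate figures are illustrative assumptions, not data from the source:

```python
# Back-of-the-envelope estimate of false positives from automated scanning.
# Values marked "assumption" are illustrative, not official statistics.

daily_messages = 100_000_000_000  # ~100 billion WhatsApp messages/day (from the text)
false_positive_rate = 0.01        # 1% error rate, i.e. 99% accuracy (from the text)

false_positives_per_day = daily_messages * false_positive_rate
print(f"False positives per day: {false_positives_per_day:,.0f}")  # 1,000,000,000

# Base-rate problem: even high per-message accuracy yields poor precision
# when actual illegal material is a tiny fraction of all traffic.
prevalence = 1e-6        # assumption: 1 in a million messages is illicit
detection_rate = 0.99    # assumption: detector catches 99% of real cases

true_positives = daily_messages * prevalence * detection_rate
precision = true_positives / (true_positives + false_positives_per_day)
print(f"Share of flags that are correct: {precision:.4%}")  # ~0.0099%
```

Under these assumptions, roughly 10,000 flagged messages would have to be reviewed to find a single genuine case, which is the base-rate effect the rebuttal above describes.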

Separately, a freedom of information (FOI) request by former MEP Felix Reda revealed that these accuracy claims come from industry, from those who have a vested interest in them because they want to sell detection technology (Thorn, Microsoft). They refuse to submit their technology to independent testing, and we should not take their claims at face value.

The EU’s evaluation of child abuse detection tools is based solely on industry claims taken at face value.