The indiscriminate and error-prone automated screening of private emails and messages in search of child pornography and grooming by US corporations such as Facebook, Google and Microsoft is partly illegal. This is the finding of the European Parliamentary Research Service (EPRS) in an expert assessment published yesterday. EU plans to legalise and mandate the controversial blanket messaging and chat control are therefore on the brink of collapse.
The parliamentary experts conclude that automated and indiscriminate scanning of all communications content is disproportionate and violates fundamental rights. Using “artificial intelligence” to search for the solicitation of contact with minors (grooming) or for child pornography is permissible only if the screening is limited to suspects (page 47 of the report). The same applies to searching for previously unknown child pornography with the help of “machine learning”, as practised by the organisation “Safer”, founded by US actor Ashton Kutcher, according to a supplementary statement by the study’s author. Limiting content screening to suspects has so far been rejected by the EU Commission, the European Parliament and the EU Council in the ongoing trilogue negotiations, so the planned ePrivacy derogation would likely be annulled in court.
The Research Service also states that suspicious communications content and user data may be passed on only to states with an adequate level of data protection (page 56 of the report). Since the US lacks an adequate level of data protection, as established by the European Court of Justice in its “Safe Harbour” decision, the practice of US corporations of disclosing sensitive communications content to the US-based organisation NCMEC is illegal. The Irish Data Protection Authority is investigating a complaint by MEP and shadow rapporteur Patrick Breyer (Pirate Party, Greens/European Free Alliance Group) against the indiscriminate searching and disclosure of private messages by Facebook and Google.
“Now it is officially confirmed: The planned indiscriminate message and chat control will fail in court and do nothing to protect children. It is even counterproductive, because it pushes perpetrators into secure communications channels and thus makes their prosecution more difficult or impossible. According to the police, these automated reporting systems flag innocent users in 90% of cases, often criminalising minors. Instead of propagating mass surveillance, children must be protected online and offline through better prevention, public education, therapy and support services, and a strengthening of law enforcement capacities!” Patrick Breyer comments.
In 2020, the European Commission presented a legislative proposal that would allow providers to use error-prone technology to search all private chats, video conferences, messages and emails fully automatically, and without suspicion, for allegedly illegal depictions of minors and attempts to initiate contact with minors. If an algorithm reports a suspected case, all message content and contact data are forwarded automatically, without human verification, to a private distribution centre and on to police authorities worldwide. The users concerned are not notified, and no judge needs to order the search. Trilogue negotiations on the controversial proposal are ongoing. The EU Commission has announced follow-up legislation to make the error-prone content screening mandatory for all communications service providers.
The European Court of Justice ruled last year that “the particularly serious interference constituted by the automated analysis of [communications] data can meet the requirement of proportionality only in situations in which a Member State is facing a serious threat to national security” (case C-511/18 et al., par. 177).