
Pirates: Flawed report on artificial intelligence recommends use of error-prone technology in judicial procedures and public spaces


Today, Members of the European Parliament will vote on a report by the Committee on Legal Affairs (JURI) on artificial intelligence (AI) in the area of civil and military uses (2020/2013(INI)). While the European Pirates recognize the report's efforts to recommend legislation for AI, parts of its wording are fundamentally flawed and could open the door to significant human rights violations. Therefore, Pirate MEPs and their Greens/EFA Group will vote against it.

The report suggests that face surveillance in public spaces could be made compliant with fundamental rights and avoid discrimination through “safeguards”. In a close vote yesterday, several groups failed to remove this language. In addition, the motion recommends the use of error-prone artificial intelligence to speed up judicial procedures.

Patrick Breyer, German Member of the European Parliament for the Pirate Party, comments:

“Having computers recommend decisions to judicial authorities goes against the very idea of the judiciary, which is to ensure an independent case-by-case assessment. AI systems have repeatedly been shown to be biased, discriminatory and prone to errors. They can never be neutral.”

“Likewise, subjecting citizens to permanent and indiscriminate face surveillance and identification is unacceptable for its chilling effect on society. Biometric mass surveillance of public spaces needs to be banned once and for all.”

Marcel Kolaja, Czech Vice-President of the European Parliament, adds:

“One of the significant issues of the report is its failure to integrate a call for a ban on the use of facial recognition technologies in public spaces. Sadly, it proposes to impose a moratorium on such use only after further assessment. That is simply unacceptable.”

“We know what harm it can do. It can be easily misused against minorities or journalists, which is what dictatorships such as China have already been doing. Such use of artificial intelligence that would put our freedoms in jeopardy has to be stopped.”

The full wording of the controversial paragraphs of the resolution:

56. Invites the Commission to assess the consequences of a moratorium on the use of facial recognition systems, and, depending on the results of this assessment, to consider a moratorium on the use of these systems in public spaces by public authorities and in premises meant for education and healthcare, as well as on the use of facial recognition systems by law enforcement authorities in semi-public spaces such as airports, until the technical standards can be considered fully fundamental rights-compliant, the results derived are non-biased and non-discriminatory, and there are strict safeguards against misuse that ensure the necessity and proportionality of using such technologies.

67. Notes that AI is increasingly being used in the field of justice in order to take decisions which are more rational, more in keeping with the law in force, and quicker; welcomes the fact that the use of AI is expected to speed up judicial proceedings.
