The EU built a system called CounterR that essentially performs pre-crime thought surveillance. The TL;DR: an AI company, with direct input from half a dozen European police forces, built a tool that scrapes social media, forums, and other sources to assign citizens a score based on what they think as opposed to what they’ve actually done. The European Commission, for its part, has not released details of the project.

The report itself acknowledges that this sort of automated system “can trigger new fundamental rights risks that affect rights different than the protection of personal data and privacy.”

The European Commission’s White Paper on AI observes that AI-related processing of personal data can trigger new fundamental rights risks that affect rights different than the protection of personal data and privacy, such as the right to freedom of expression, and political freedoms - in particular when AI is used by online intermediaries to prioritise information and for content moderation.

The police were active co-developers, sitting in meetings to define the criteria and feeding real, anonymized data from their investigations to train the LLM. So now you have a feedback loop where police define the threat, the LLM learns it, and the police validate the results, with zero external oversight.
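
To make the circularity concrete, here is a deliberately simplified sketch in Python. Everything in it is made up for illustration (the names RiskModel and feedback_loop, the keyword scoring, the threshold); nothing reflects CounterR’s actual code, criteria, or data, none of which are public. It only shows the structural problem: the same actors define the labels, supply the training data, and validate the output, so each round of “validation” feeds the model’s own flags back in as ground truth.

```python
# Illustrative sketch only -- hypothetical names and logic, not CounterR's.
from dataclasses import dataclass, field


@dataclass
class RiskModel:
    """Toy stand-in for the LLM-based scorer."""
    flagged_terms: set = field(default_factory=set)

    def train(self, labelled_posts):
        # "Learn" whatever vocabulary appears in posts the police labelled risky.
        for text, risky in labelled_posts:
            if risky:
                self.flagged_terms.update(text.lower().split())

    def score(self, post):
        words = post.lower().split()
        if not words:
            return 0.0
        return sum(w in self.flagged_terms for w in words) / len(words)


def feedback_loop(model, police_labelled, new_posts, threshold=0.5):
    # 1. Police define the threat by labelling the training data.
    model.train(police_labelled)
    # 2. The model flags new posts above a threshold the partners chose.
    flagged = [p for p in new_posts if model.score(p) >= threshold]
    # 3. The same police "validate" the flags, which then become the next
    #    round of training data -- no external check ever enters the loop.
    police_labelled.extend((p, True) for p in flagged)
    return flagged


if __name__ == "__main__":
    model = RiskModel()
    seed = [("protest against the government", True),
            ("nice weather for a walk", False)]
    posts = ["join the protest against cuts", "walking the dog"]
    print(feedback_loop(model, seed, posts))
    # -> ['join the protest against cuts']
```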

And of course, it’s all shrouded in secrecy. The whole thing is confidential, the source code is proprietary so even partners can’t audit it, and the ethics board is made up of the same people building the thing. There’s no clear requirement to track false positives, so you could be flagged as a potential radical and never know why.
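
For contrast, here is what even a bare-minimum false-positive audit trail could look like. This is purely illustrative and assumes nothing about the project’s internals: the function names (log_flag, false_positive_rate) and the JSONL log format are my own inventions, sketched to show how little it would take to make flags reviewable and countable.

```python
# Hypothetical sketch of a minimal audit trail -- not part of any real system.
import datetime
import json


def log_flag(subject_id, score, reasons, outcome="unreviewed", path="flags.jsonl"):
    """Append one flagging decision to an append-only audit log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject_id": subject_id,
        "score": score,
        "reasons": reasons,    # which criteria actually fired
        "outcome": outcome,    # later updated to "confirmed" or "false_positive"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def false_positive_rate(path="flags.jsonl"):
    """Share of reviewed flags that turned out to be wrong."""
    reviewed, wrong = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec["outcome"] != "unreviewed":
                reviewed += 1
                wrong += rec["outcome"] == "false_positive"
    return wrong / reviewed if reviewed else 0.0
```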

Regarding transparency of funded research, it must be noted that research proposals generally foresee that confidentiality of some results is often necessary, especially in the realm of security.

The cherry on top? The core technology, developed with public funds, was recently acquired by a private company, Logically, which can now sell this dystopian scoring system to whoever it wants.

We, the citizens of the EU, literally paid to build our own panopticon. The whole project is about normalizing the idea that the state gets to algorithmically monitor and judge your political beliefs before you ever commit a crime.