
Remember when Minority Report was just a movie? Floating psychic bald people in a tank predicting murder before it happened? That was peak 2002 dystopia. It was fun because it wasn’t real. But now?
Now it’s real.
Not in theory. Not in fiction. In a live government pilot.
The UK has been quietly running a program, piloted as early as 2021, designed to forecast who might commit homicide before they do anything wrong. It’s not sci-fi. It’s not a thought experiment. It’s operational. And almost nobody’s talking about it.
The system was originally called the Homicide Prediction Project but was quietly rebranded as something far more boring: Sharing Data to Improve Risk Assessment. Sounds like a conference handout. But don’t let the name fool you.
It pulls data from criminal justice sources—police files, probation records, court documents—and combines it with personal details like age, race, addiction history, mental health status, disabilities, and more. It runs the numbers and flags individuals considered high risk for future violent offenses. Think of it like a credit score for your future crimes, except you don’t know it exists, and you can’t check your score.
Let’s say you’re 29. You got into a fight when you were 19. You’ve got depression on your old medical record. You live in a low-income neighborhood. One day, your probation officer starts showing up more. You get denied for a job you were qualified for. You don’t know it yet, but your name’s sitting on a list—flagged as high risk. You didn’t commit a crime. You just fit the pattern.

According to a 2023 investigation by The Guardian, several individuals flagged by the pilot system were unaware until their parole restrictions changed or access to housing and services quietly tightened. One case involved a man with a nonviolent record who was flagged due to a combination of prior arrests, medical disclosures, and geographic profiling. No notification. No appeal.
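For the sake of illustration, here is a minimal sketch of how a record like that can cross a threshold without any crime attached to it. This is not the Ministry of Justice’s model; the feature names, weights, and cut-off below are invented, and a real system would be far more complicated. The shape of the logic is the point.

```python
# Illustrative only. The real model's inputs, weights, and threshold are not
# public; every name and number below is invented to show the shape of the logic.

def risk_score(record: dict) -> float:
    """Fold records pulled from different agencies into a single number."""
    weights = {
        "prior_arrests": 0.30,        # police files
        "probation_breaches": 0.25,   # probation records
        "mental_health_flag": 0.15,   # medical disclosures
        "addiction_history": 0.15,
        "deprived_postcode": 0.15,    # where you live, not what you did
    }
    return sum(w * float(record.get(k, 0)) for k, w in weights.items())

FLAG_THRESHOLD = 0.55  # arbitrary cut-off, chosen for this example

# The person above: one fight a decade ago, depression on an old record,
# a low-income address, and nothing else.
record = {"prior_arrests": 1, "mental_health_flag": 1, "deprived_postcode": 1}

score = risk_score(record)
print(round(score, 2), "flagged" if score >= FLAG_THRESHOLD else "not flagged")
# 0.6 flagged -- no crime required, just the pattern
```

Swap the weights, rename the fields, bolt on a fancier model, and the outcome is the same kind of decision: a number crosses a line, and a person gets a label they never see.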
The model works by scanning old case files and finding traits common among convicted murderers. From there, it creates a threat profile and quietly watches for new matches. The system is totally opaque. There’s no way to appeal if you’re flagged. No way to see how or why you were flagged. And critics warn the consequences could already be underway—flagged individuals may be facing increased monitoring, restricted services, or decisions being made about them behind closed doors.
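The “scanning old case files” step is worth pausing on, because it is where the opacity comes from. Here is another minimal sketch, again with invented data and a deliberately crude method, of what “build a threat profile from past convictions and watch for new matches” can look like:

```python
# Illustrative only: invented records and a deliberately crude method.
from collections import Counter

# Hypothetical historical case files: traits recorded for people previously
# convicted of homicide. A real system would use thousands of records drawn
# from police, probation, and court data.
convicted_cases = [
    {"prior_violence", "unemployed", "deprived_postcode"},
    {"prior_violence", "addiction_history", "deprived_postcode"},
    {"unemployed", "mental_health_record", "deprived_postcode"},
]

# Step 1: the "threat profile" is just how common each trait was among them.
trait_counts = Counter(trait for case in convicted_cases for trait in case)
profile = {t: n / len(convicted_cases) for t, n in trait_counts.items()}

# Step 2: score anyone new by how closely they resemble that profile.
def match_score(traits: set) -> float:
    return sum(profile.get(t, 0.0) for t in traits)

new_person = {"mental_health_record", "deprived_postcode"}  # no offence at all
print(round(match_score(new_person), 2))  # 1.33 -- high, purely by resemblance
```

Notice what the profile actually encodes: not behavior, but whatever the old files happened to record. If certain postcodes or diagnoses are over-represented in those files because of how policing worked, they dominate the profile, and everyone who shares them inherits the score.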
This wasn’t debated in Parliament. It wasn’t voted on by the public. It was initiated by the Ministry of Justice under Justice Secretary Dominic Raab, tested in Greater Manchester, and appears to be advancing under the radar. According to procurement documents released in 2022, the project involved partnerships with private data analytics firms including Palantir and Faculty AI—two companies with prior contracts tied to surveillance, immigration enforcement, and predictive analytics. Much of their involvement is buried under sealed data agreements and redacted security justifications. In other words, you don’t get to know what you’re accused of—or who built the thing that accused you.

Civil liberties groups like Amnesty International and Statewatch have already warned that systems like this are built on biased foundations—over-policing, structural inequality, flawed data—and will inevitably reinforce the same injustices they claim to prevent. Amnesty called it “a dangerous blend of secrecy and data-driven discrimination” in their 2023 report Trained to Discriminate, which warned that predictive policing tools risk automating existing patterns of racial and economic injustice. It’s not justice. It’s judgment without trial.
This isn’t just a UK problem. In the U.S., Chicago’s infamous “heat list” tried something similar—flagging individuals who hadn’t committed crimes but were statistically “likely to.” It collapsed under legal and public pressure. The LAPD ran predictive policing pilots that were scrapped for amplifying racial bias. Facial recognition software, social credit scores, AI-powered risk assessments—these aren’t edge cases anymore. They’re beta tests. And we’re the test subjects. The European Centre for Digital Rights warns that without strict oversight, these systems could violate Article 8 of the European Convention on Human Rights, the right to respect for private and family life, while sidestepping basic due-process guarantees. In the U.S., critics have compared them to silent violations of Fourth Amendment protections against unreasonable searches and seizures.
You don’t have to break the law. You just have to look like someone who might.
And here’s the worst part: you won’t know when you’ve been flagged. You won’t get a notice in the mail. There’s no warning light, no form to sign. You’ll just feel it. A job application that goes nowhere. A sudden police visit. A system that starts closing doors behind you before you even reach them.
We already live in a world where your browser history can cost you a loan. Where insurance rates change based on how often you check your phone while driving. Where the algorithm curates what you see, who you meet, what you buy, and who gets hired. This isn’t the beginning of predictive control. It’s the middle.
We used to be afraid of machines taking over. But this isn’t the robots rising up. This is the people who trained them, hiding behind algorithms, pulling levers without accountability. They’ve just handed the job of profiling to code. And unlike the people, that code doesn’t get tired, doesn’t forget, and never forgives.
I joke a lot in these posts. This isn’t one of those times.
This is how society shifts from freedom to control—quietly, through boring names and bland interfaces. You won’t see it on a billboard. But one day, you’ll try to move, and something invisible will say no.
You won’t even know what crime you almost committed. You’ll just wake up in a different world. And you won’t remember when it changed. And the next time there’s a mass attack or political pressure for a crackdown, they’ll ask why no one flagged the suspect. That’s when the system won’t just exist quietly in the background. It’ll go loud. And it won’t stop with suspects.
Talk about this. Ask questions. Find out who built the system and who gave it permission to exist. Because if you don’t? It will keep learning. And one day, it might learn your name. Because one day, it won’t just be about criminals. It’ll be about control. And by then, it’ll be too late to ask questions.
About the Creator
MJ Carson
Midwest-based writer rebuilding after a platform wipe. I cover internet trends, creator culture, and the digital noise that actually matters. This is Plugged In—where the signal cuts through the static.


