UK Going Full ‘Minority Report’: Using AI to Intervene Before Future Crimes Take Place
A recent report from New Scientist revealed a chilling decision by UK police to use AI to identify crimes before they happen, spinning it as a way to offer “counseling” to those whom the gathered data supposedly shows have a high probability of committing a crime.
One of the ways the idea is being presented in order to gain acceptance is the claim that it will focus on predicting “serious violent crime using artificial intelligence.”
This is obviously manipulative, because it creates a sense of a benign Big Brother watching over those who could be victims of violent crime. After all, who wants violent people roaming the streets looking for potential victims?
The huge problem here, as history confirms, is that government intervention of any type never ends up generating the positive outcome asserted or expected.
Called the National Data Analytics Solution (NDAS), the system will use gathered statistics and AI to determine the risk that individuals will engage in violent crime. The other thing being proffered is that it will also be an aid to potential victims of trafficking and/or modern slavery.
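To make concrete what “gathered statistics and AI” typically means in systems like this, here is a minimal, purely illustrative sketch of a generic risk-scoring model in Python. The feature names, training data, and flagging threshold are hypothetical assumptions of mine and are not drawn from any NDAS documentation; the point is only to show how a statistical model turns a person’s record into a score that triggers an “intervention.”

```python
# Purely illustrative sketch of a generic risk-scoring model, the kind of
# thing "predictive policing" systems are described as using. All feature
# names, data, and the 0.7 threshold are hypothetical; nothing here reflects
# actual NDAS code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per person: [prior_offences, police_contacts, age]
X_train = np.array([
    [0, 1, 34],
    [5, 12, 22],
    [1, 3, 45],
    [7, 20, 19],
])
# Hypothetical labels: 1 = later involved in violent crime, 0 = not
y_train = np.array([0, 1, 0, 1])

# Fit a simple classifier to the historical records
model = LogisticRegression().fit(X_train, y_train)

# Score a new individual; if the estimated probability crosses an arbitrary
# threshold, the person is flagged -- that flag is what would trigger the
# "counseling" intervention the article describes.
new_person = np.array([[2, 4, 30]])
risk = model.predict_proba(new_person)[0, 1]
if risk > 0.7:
    print(f"Flagged for intervention (risk score {risk:.2f})")
else:
    print(f"Not flagged (risk score {risk:.2f})")
```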
What’s obvious is that the police are using the worst-case scenarios in regard to victimhood in order to chip away at resistance and gain public acceptance. I think it’s obvious where it goes from there: it’ll expand to all types of crime categories as determined by the police. That would probably include being Christian, conservative, on the hard right, or libertarian.
The point is that it would eventually be used as a tool against those who oppose things like globalism, fake climate change, and fake news organizations that parrot the politically correct line approved by globalist-supporting governments.
Intervention
Beyond privacy concerns, where this gets truly troubling is in regard to intervention. Imagine the mischief authorities could engage in to disrupt the lives of people who hold ideologies they don’t approve of but who haven’t engaged in violent acts. I’m not talking about using it against Islam or terrorism here; I’m talking about using this type of technology as a means of shutting down free speech and resistance to various forms of totalitarianism, or to actions taken by Big Brother that exceed its authority or purpose.
In other words, an ordinary, healthy person expressing disagreement with the cultural worldview of others could be targeted as a potential risk for violence against other people, for the purpose of intimidating them into silence.
As for “counseling,” that is just another word for interrogation under the guise of trying to help people. It would certainly bypass legal rights, because the process is transferred from the legal system to counseling, which gives the appearance of offering or providing help.
Beyond the potential to use this against those who don’t agree with the political views of the left, or against their self-defined hate speech, at the individual level it could also push people who had no thought of committing a crime into doing so as a result of the harassment this initiative would bring.
Think of someone trying to live their life and deal with everyday pressures and responsibilities, now being told they should participate in some type of counseling because an AI has determined they’re a potentially violent threat to society.
The other reason I’m extremely skeptical about this is that police already have data covering the histories of criminals. I see nothing here that adds to what is already available to law enforcement. That suggests to me there is something else behind this. That something is probably to intimidate and restrict the freedoms of people who don’t align with the official narrative.
Conclusion
There is not, and never will be, a system that can predict these types of behaviors beyond the already existing ability to look at crimes that have already been committed and draw the conclusion that many of the people responsible have a strong probability of doing so again.
I’ve spent many years writing about investing in publicly traded companies, and gathering data has always been a key part of making determinations about the future performance of those firms. After years of streamlining the programs used to extrapolate from that data, nothing exists that comes close to guaranteeing how those companies will perform.
So the idea that AI and data gathered from various sources can do that with individual human beings is totally arrogant.
Finally, there is no way of measuring the impact or effect a program like this would actually have on crime. How do you know whether the so-called intervention is what kept someone from engaging in violent activities, or whether they went through a life change outside of the program that caused them to change their life – such as turning from their old ways and putting their faith in Jesus Christ?
My conclusion at this time is that this is a political tool that will be used to prime public acceptance under the idea of protecting people from violence or from being a victim of trafficking. After the public is groomed to accept it, from there it’ll expand to its real purpose of targeting those who oppose the left and its SJW, globalist masters.
As for what to do about it, we must keep bringing it to light and letting people see where it will eventually and inevitably lead: to the loss of freedom at levels unheard of in human history.