
Experts Rank 20 Ways AI Enables Criminals

Onyedikachukwu Czar
5 min read · May 16, 2023
Image Credit: Imaginima on iStock

One core feature all AIs share is that they learn with use. OpenAI, for example, collects data from ChatGPT users, which it uses to train the chatbot further. Google does the same with Bard.

We marvel at AI's capacity to learn, so much so that we tend to forget that releasing an incomplete product to the public, despite dangerous side effects, goes against known safety standards.

In medicine, drugs are rigorously tested and certified as safe before they're released to the public. AIs, by contrast, are released to the public first, safe or not, as a way of making them safer and better.

Medical findings are tested on lab rats, but we’ve become guinea pigs for AIs.

As expected, while these machines get fine-tuned and their edges grow sharper, we, the tools that sharpen them, suffer numerous side effects.

What kind of risks are we being exposed to?

The answer is contained in a 2020 publication in the British journal Crime Science titled “AI-enabled Future Crimes.” In it, top thinkers in computer science, crime science, public policy, and security gathered to outline AI criminal threats.

They used a framework to gauge the severity of these threats to society along four parameters:
