
Deepfake AI Makes Death Threats Frighteningly Real

A Florida judge thought she was watching a clip from a Grand Theft Auto–style video game — until she realized the scene depicted her own murder.

Nov 3, 2025

The era of simple online harassment is over. The gloves are off.

Florida Judge Jennifer Johnson thought she was watching a clip from a Grand Theft Auto–style video game — until she realized the scene depicted her own murder in horrifying detail.

The AI-generated video showed an animated figure stalking a woman before killing her with a hatchet and gun. Over the violence, a voice intoned: “Judge Johnson, let’s bury the hatchet.” According to The New York Times, it was no game at all.

Technology built to make life easier is now being weaponized to make threats feel chillingly real. From AI-generated blackmail videos to deepfake assassination simulations, criminals are exploiting tools like voice cloning, generative imagery, and synthetic news broadcasts to create a new breed of digital terror, one that is difficult to detect and even harder to dismiss.

Personal nightmare

The attack against Judge Johnson revealed just how sophisticated these digital threats have become. Beyond the simulated killing, the video contained intimate personal details: “my divorce, my remarriage, my name change, my children, where I live and where I work,” she said.

At first, law enforcement brushed it off. It took five months before authorities treated the threat seriously. The perpetrator was eventually convicted and sentenced to 15 years in prison.

But Johnson’s case is far from isolated. A recent security assessment found extremist groups increasingly using AI tools such as chatbots, deepfakes, and generative content to automate disinformation and encourage self-radicalization, CBS News reported. The assessment warned that as AI-generated content improves, “the line between real and fake blurs,” making fabricated threats seem alarmingly credible.

AI in the wrong hands

Earlier this year, a Tesla Cybertruck exploded outside Trump International Hotel in Las Vegas. Investigators alleged the attacker used ChatGPT to research explosives and plan the incident. Sheriff Kevin McMahill called it “the first case on U.S. soil where ChatGPT was used to help an individual build a device,” describing it as “a concerning moment.”

And that was just the beginning. In May, the FBI documented a coordinated campaign using AI-enhanced smishing (SMS phishing) and vishing (voice phishing) to target government officials. With AI-generated voices that sound convincingly human, criminals can trick victims into sharing sensitive information.

Meanwhile, research from the UK’s Alan Turing Institute warned of a “substantial acceleration in AI-enabled crime.” Criminal networks, it said, are harnessing AI’s power to automate and scale malicious activity while exploiting human psychological weaknesses. Without swift countermeasures, the report concluded, this threat could grow “at an even faster rate in the next five years.”

The scale of the threat

AI is amplifying every element of digital crime. Voice cloning enables realistic fake audio messages; deepfakes fabricate convincing videos; automated data scraping builds personal profiles for harassment or extortion.

A study from the University of Waterloo found that only 61% of participants could distinguish AI-generated faces from real ones, evidence that even attentive viewers are routinely fooled.

The problem is spreading beyond public figures. Eight months ago, Nisos researchers identified more than 2,200 direct threats against CEOs in just five weeks following a high-profile corporate murder case, showing how deepfake attacks and AI-driven disinformation now endanger business leaders, too.

Outpaced and underprepared

Law enforcement is struggling to catch up. UK authorities recently admitted that police forces are “not adequately equipped to prevent, disrupt, or investigate AI-enabled crime,” prompting calls for a national AI Crime Taskforce.

In the U.S., the FBI and media outlets have warned of rising cases involving AI-generated voices used in fake emergency calls — a phenomenon known as “synthetic swatting.”

The surge in AI-generated death threats isn’t just an evolution of online harassment; it’s a paradigm shift. As synthetic media grows more convincing and accessible, anyone with a public presence — from judges to journalists — can become a target.

Unless response systems evolve as rapidly as the technology itself, the world may become even more dangerous before it becomes safer.
