AI-powered scams and what you can do about them

AI is here to help you, whether you’re composing an email, creating concept art, or scamming vulnerable people into thinking you’re a friend or family member in need. AI is so versatile! But since some people would rather not be scammed, let’s talk about what to look out for.

The last few years have seen a dramatic increase not only in the quality of generated media, from text to audio to images and video, but also in how cheaply and easily that media can be created. The same type of tool that helps a concept artist come up with fantasy monsters or spaceships, or helps a non-native speaker improve their business English, can also be put to malicious use.

Don't expect the Terminator to knock on your door and sell you a Ponzi scheme. These are the same old scams we've been dealing with for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.

This is by no means a complete list, just a few of the most obvious tricks that AI can amplify. We’ll be sure to add more as they appear in the wild, or any additional steps you can take to protect yourself.

Voice clones of family and friends

Synthetic voices have been around for decades, but only in the last one or two years have advances in technology made it possible to generate a new voice from just a few seconds of audio. That means anyone whose voice has ever been made public — for example in a news report, YouTube video or on social media — is vulnerable to having their voice cloned.

Scammers can and have used this technology to produce convincing fakes of loved ones and friends. A clone can be made to say anything, of course, but in the service of a scammer it will most likely be used for a voice clip asking for help.

For example, a parent might get a voicemail from an unknown number that sounds like their son, saying that their stuff was stolen while traveling, that someone let them use their phone, and that Mom or Dad can send money to this address, Venmo recipient, company, etc. It's easy to imagine variations involving car trouble (“they won't release my car until someone pays them”), medical issues (“this treatment isn't covered by insurance”), and so on.

This kind of scam has already been pulled off using President Biden's voice! The perpetrators were caught, but future scammers will be more careful.

How can you combat voice cloning?

First of all, don't bother trying to spot a fake voice. They are getting better every day and there are many ways to cover up quality issues. Even experts get fooled!

Anything coming from an unknown number, email address or account should automatically be considered suspicious. If someone says they are your friend or loved one, contact the person as you normally would. They will probably tell you that they are doing well and that it is (as you already suspected) a scam.

Scammers tend to give up when ignored, while a family member in real trouble will follow up. It's okay to leave a suspicious message unanswered while you think it over.

Personalized phishing and spam via email and messages

We all get spam from time to time, but text-generating AI makes it possible to send mass emails tailored to each individual recipient. And because data breaches occur regularly, much of your personal data is already out there.

It’s one thing to get one of those “Click here to see your invoice!” scam emails with an obviously sketchy attachment and clearly minimal effort behind it. But with even a little context, these messages suddenly become very believable, using recent locations, purchases, and habits to make them seem like they come from a real person or describe a real problem. Armed with a few personal facts, a language model can tailor a generic version of these emails to thousands of recipients in seconds.

So what was once “Dear customer, here's your invoice” becomes something like “Hi Doris! I work on Etsy's promotions team. An item you recently looked at is now 50% off! And shipping to your Bellingham address is free if you use this link to claim the discount.” A simple example, but still. With a real name, shopping habit (easy to figure out), general location (ditto) and so on, the message suddenly becomes a lot less obvious.
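To see how little effort this takes, here is a toy sketch of the mail-merge step in Python. All names and records below are invented; a real attacker would feed breached records into an LLM for more natural prose, but even plain string templating gets most of the way there.

```python
# Toy illustration: how little "code" mass personalization takes.
# The records below are invented stand-ins for breached data.
records = [
    {"name": "Doris", "city": "Bellingham", "store": "Etsy"},
    {"name": "Marco", "city": "Tacoma", "store": "eBay"},
]

TEMPLATE = (
    "Hi {name}! I work on {store}'s promotions team. "
    "An item you recently looked at is now 50% off! "
    "Shipping to your {city} address is free if you claim now."
)

def personalize(record: dict) -> str:
    """Fill the generic pitch with one recipient's leaked details."""
    return TEMPLATE.format(**record)

# One template, arbitrarily many personalized messages.
messages = [personalize(r) for r in records]
```

The point is scale: swapping the template fill for an LLM call changes nothing structurally, it just makes each message read less like a form letter.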

Ultimately, these are still just spam. But this kind of tailor-made spam once had to be done by poorly paid people on content farms abroad. Now it can be done on a large scale by an LLM with better prose skills than many professional writers!

How can you fight back against email spam?

As with traditional spam, vigilance is your best weapon. But don't expect to be able to distinguish generated text from human-written text in the wild. Few people can, and certainly no AI model can (despite the claims of some companies and services).

No matter how improved the copy, this type of scam still faces the fundamental challenge of getting you to open a sketchy attachment or link. As always, unless you’re 100% certain of the authenticity and identity of the sender, don’t click or open anything. If you’re even a little bit unsure (and that instinct is worth cultivating), don’t click, and if you have someone knowledgeable to forward it to for a second set of eyes, do so.

'Fake you' identification and verification fraud

Due to the number of data breaches in recent years (thanks, Equifax!), it's safe to say we all have our fair share of personal data floating around on the dark web. Following good online security practices, changing your passwords, enabling multi-factor authentication, and so on, mitigates much of the danger. But generative AI could pose a new and serious threat in this area.

With so much of a person's data available online, and for many people even a snippet or two of their voice, it is becoming ever easier to create an AI persona that sounds like the target and has access to many of the facts used to verify their identity.

Think about it this way. What would you do if you were having trouble logging in, couldn’t configure your authenticator app properly, or lost your phone? Probably call customer service — and they’d “verify” your identity based on some trivial facts like your date of birth, phone number, or social security number. Even more advanced methods like “taking a selfie” are becoming easier to game.
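A toy sketch shows why this kind of knowledge-based check fails once the "secret" facts appear in a breach dump (all names and data below are invented): the impostor passes exactly the same test the real customer would.

```python
# What a customer-service "verification" often boils down to:
# compare a few facts the caller recites against the account record.
account = {"name": "Doris Smith", "dob": "1961-04-02", "ssn_last4": "6789"}

def kba_verify(claimed_dob: str, claimed_ssn_last4: str) -> bool:
    """Knowledge-based authentication: checks facts, not the person."""
    return (claimed_dob == account["dob"]
            and claimed_ssn_last4 == account["ssn_last4"])

# The same record, pulled from a breach dump, lets an impostor through,
# because the check cannot tell who is reciting the facts.
breached = {"dob": "1961-04-02", "ssn_last4": "6789"}
impostor_passes = kba_verify(breached["dob"], breached["ssn_last4"])
```

Nothing in that check distinguishes you from anyone holding a copy of your records, which is why breached data plus a cloned voice is such a potent combination.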

The customer service representative (as far as we know, possibly also an AI!) may very well oblige this fake you and grant it all the privileges you would have if you had actually called. What a scammer can do from that position varies greatly, but none of it is good!

As with the others on this list, the danger isn’t so much in how realistic this fake you would be, but rather that it’s easy for scammers to pull off these types of attacks on a large scale and repeatedly. Not long ago, these types of impersonation attacks were expensive and time-consuming, and as a result, limited to high-value targets like wealthy individuals and CEOs. Today, you can build a workflow that creates thousands of impersonation agents with minimal oversight, and these agents can autonomously call the customer service numbers of all of a person’s known accounts — or even create new accounts! Only a handful need to be successful to justify the cost of the attack.

How can you combat identity fraud?

Just as it was before AIs came along to amplify the scammers' efforts, “Cybersecurity 101” is your best bet. Your data is already there; you can't put the toothpaste back in the tube. But you can make sure your accounts are adequately protected against the most obvious attacks.

Multi-factor authentication is without a doubt the most important step anyone can take here. Any serious account activity will be routed straight to your phone, and suspicious logins or attempts to change passwords will show up in email. Don't ignore these alerts or mark them as spam, even (especially!) if you receive a lot of them.

AI-generated deepfakes and blackmail

Perhaps the scariest form of emerging AI fraud is blackmail using deepfake images of you or a loved one. You can thank the fast-moving world of open image models for this terrifying prospect! People interested in certain aspects of cutting-edge image generation have created workflows for rendering naked bodies and attaching them to any face they can get a photo of. I don't need to explain how it's already being used.

But one unintended consequence is an expansion of the scam commonly called “revenge porn,” but more accurately described as the non-consensual distribution of intimate images (though, like “deepfake,” it can be hard to supplant the original term). When someone’s private images are released, whether through hacking or a vengeful ex, a third party can use them for blackmail, threatening to publish them widely unless a sum is paid.

AI amplifies this scam by eliminating the need for actual intimate images in the first place! Anyone's face can be added to an AI-generated body, and while the results aren't always convincing, if it's pixelated, low-resolution, or otherwise partially unclear, it's likely enough to fool you or others. And that's all it takes to scare someone into paying to keep them secret – although, like most blackmail scams, the first payment likely won't be the last.

How can you combat AI-generated deepfakes?

Unfortunately, the world we are moving into is one where fake nudes of almost anyone will be available on demand. It’s creepy and weird and gross, but unfortunately the cat is out of the bag.

No one is happy about this situation, except the bad guys. But a few things work in our favor as potential victims. It may be small consolation, but these images aren't really of you, and it doesn't take real nudes to prove that. These image models can produce realistic bodies in some ways, but like other generative AI, they only know what they've been trained on. So the fake images will lack any distinguishing features you have and are likely to be obviously wrong in other ways.

And while the threat will likely never completely disappear, victims have more and more options for recourse. They can legally force image hosts to remove photos or ban scammers from sites where they post photos. As the problem grows, so will the legal and private resources to combat it.

JS is not a lawyer! But if you are a victim of this, please report it to the police. It’s not just a scam, it’s harassment, and while you can’t expect the police to do the kind of deep internet detective work required to track someone down, sometimes these cases get solved, or the scammers are spooked by requests sent to their ISP or forum host.
