Published September 19, 2024 • 5 Min Read
Artificial Intelligence has been used for many things in recent years. Unfortunately, scammers have found ways to harness this technology to advance their scams and target more people. Find out more about AI-powered scams, how to spot them and how to protect yourself against them.
To put it mildly, Artificial Intelligence (AI) has caused quite a stir in recent years. It has made headlines for the ways it can make day-to-day life easier (correcting grammar, self-parking cars, facial recognition) and for the dangers some predict it will bring. It has also helped fraudsters take common scams to new levels of sophistication and reach.
Here are five ways that scammers have used new technology to power existing scams:
Write more convincing phishing emails and text messages
Bad grammar and spelling errors used to be immediate red flags that an email, text or website was fake. After all, legitimate companies have professional writers, marketing departments and proofreaders to ensure that the language is correct and the punctuation is in line with their brand. Too many exclamation points and awkward phrasing were signals that a message wasn't coming from a trustworthy source.
With AI-powered technology, however, messages can easily be error-free. Scammers can also use AI to pull your personal information from online sources and tailor messages to you, making them even more convincing.
Create deepfakes of public figures
With the help of AI, scammers can clone the voices of public figures (celebrities, government figures, prominent executives) and try to trick people with fake calls or advertising. The “figure” may try to convince you to make a certain investment or donate to a worthy cause. Because people often trust endorsements from well-known and respected individuals, these deepfakes can be very successful in scamming people out of their money.
Impersonate a friend or relative
One of the most common and successful scams over the last few years is the grandparent scam. It involves a fraudster who poses as a loved one, typically a grandchild, who calls to say they’re in some kind of trouble and need money immediately.
Some AI tools can take a short clip of someone speaking and recreate their voice. Using this cloned voice, a scammer can call, pretend to be the grandchild (or other loved one) and ask for money to pay a ransom, bail or the fare for a plane ticket home.
Impersonate an employer
Deepfake attacks have surged over the past year as AI voice and video clones have fooled companies across a range of sectors.
Recently, scammers pulled off a complex and highly lucrative scam that deceived an employee into transferring $38.8 million from the company’s coffers. They used cutting-edge technology to impersonate one of the firm’s high-ranking executives, luring the employee into a fake video conference. The “people” he interacted with were all artificial creations generated by deepfake technology.
Create fake websites
Scammers can use AI to create fake websites, then send out links via email or social media. A fake online store might offer a hard-to-find item at a discount, and/or have limited-time sales that rush you into making a purchase. When you enter your payment information, scammers can steal it and use it to make fraudulent purchases.
How to protect yourself against AI-powered scams
With such sophisticated technology, how can you protect yourself? The answer is to go back to the basics:
- If a message or request sounds suspicious, reach out to the individual directly using a different, proven method. For instance, pick up the phone and call the family member who claims to be in trouble.
- Don’t fall for the “keep it secret” trick. Many fraudsters are successful because they convince their victims to keep a transaction or interaction from others. In the grandparent scam, for instance, the alleged “grandchild” will plead with the grandparent not to call their parents about their situation. In scams targeting employees, the fraudster who poses as a senior executive may insist the transaction must be kept confidential. Any message that calls for urgency and secrecy is a bright red flag.
- Do your own research. If a celebrity is endorsing a charity or investment, don’t take it at face value. If you’re interested in the charity or fund in question, research it separately and give through trusted means.
- Trust your instincts. In the case of the finance employee who was duped by a fake video, his first thought was that something felt off, but he put that feeling aside when he saw the “CEO” in the video conference. Before making a transaction, sending money or giving up information, be sure to double-check the request through another source.
- Look for words and images that seem too perfect. While it can be difficult to spot AI-generated images or text, the details can give them away. For instance, if a model’s hair and skin look too smooth or an image of a food item looks impossibly glossy and perfect, it could be a sign of a fake website. Keep in mind, too, that even the best writers don’t use perfect grammar; they make occasional mistakes or break a rule for emphasis. If the writing is completely error-free, that could also signal a fake site.
With the ability to create, or recreate, an image, video or voice, AI opens up new opportunities for scammers to target and trick their victims and to make existing scams even more believable.
To protect yourself, remember to trust your instincts, double-check the source of any message and avoid rushing into any kind of transaction. Taking a step back and assessing an unusual situation can often help you avoid becoming a victim of fraud. Even AI-powered fraud.
Another way to protect yourself is to stay informed about the latest scams. The RBC Scam Alerts page is a helpful resource to check regularly as it updates with current scams affecting RBC clients.
This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.