From phishing to romance scams to investment fraud, the most popular cybercrimes today reflect scams that have been running for decades. With the advancement of artificial intelligence, criminals have new technology on hand to make these scams easier to execute and tougher to detect.
The emergence and evolution of artificial intelligence has taken the world by storm. It has been rapidly adopted by individuals, businesses, governments and educational institutions. And, by fraudsters. While this emerging technology has many potential benefits, such as summarizing data and enhancing customer service, its drawbacks include giving criminals new ways to advance, enhance and expand cyberattacks.
Here are some of the most common scams affecting individuals and businesses today, and how AI is amplifying their effectiveness and impact.
Social engineering attacks
Social engineering is the art of using psychological manipulation to trick people into making security mistakes or giving away confidential information. Social engineering tactics are used in the most common and successful scams, including phishing, romance and investment scams, covered here. Preying on human nature’s inclination to trust, social engineering has always been one of the most common and effective ways a fraudster can gain access to sensitive information. When they layer on the power of AI, scams become even more believable and successful. Here’s how:
- Human-like interaction: Artificial intelligence can imitate the tone, style and content of genuine communication, such as an email, phone call, chatbot or even a video call.
- Deepfakes: AI can impersonate a human with high accuracy and trick people into revealing sensitive information, carrying out financial transactions or spreading misinformation.
- Voice synthesis: Cybercriminals can gather audio data from people sourced from online interviews, customer service calls or videos on social media sites. Using AI-powered voice synthesis tools, they can accurately mimic the tone, cadence and pronunciation of real people.
Phishing scams
Phishing is a common scam designed to trick individuals into disclosing personal or financial information for the purpose of financial fraud or identity theft. A typical phishing email will give its recipient a fake reason, such as a security breach or a contest, to make them click a link, fill out a form or share sensitive details about themselves or their accounts.
Previously, phishing emails were easier to identify, if you knew the signs. Grammatical errors and unnatural language were clear giveaways that an email that appeared to be sent from a legitimate organization was in fact a fake. With advancements in AI technology, including tools like ChatGPT, it’s not as easy to identify these red flags.
Using AI, cybercriminals can create more in-depth, grammatically correct emails, making them harder to detect as fraudulent. Moreover, by pulling information about their target from a range of social media platforms, as well as stolen details they may have acquired through a data breach, fraudsters can also create content that is more targeted and personalized, making it look that much more authentic.
In addition to improving the quality of scams, AI has made it easier for fraudsters to increase their quantity. With AI’s ability to scan the internet in seconds for data, fraudsters can cast a wider net with their phishing scams while still including specific details about each target.
Romance scams
Romance scams have become increasingly prevalent and effective as AI has become more sophisticated and accessible. According to a Norton report, online dating scams increased 72% in the past year, with nearly a third of those targeted being catfished by fake personas (catfishing is when someone uses a fake identity to trick their target into believing they’re in a real online friendship or romance with them).
Using AI, romance scammers can:
- Generate compelling messages that sound like genuine conversation
- Use face-swapping technology and voice simulators in video calls, making their target believe they are speaking with a real person
- Quickly scan the internet for information about their targets, enabling them to tailor messages to their interests and situation
- Create entirely new identities with AI software, complete with a voice, face and accent
Investment scams
A recent experiment carried out by the Ontario Securities Commission (OSC) revealed that AI-enhanced scams pose significantly more risk to investors than conventional scams, again using techniques that deceive investors into believing something that simply isn’t true. In the investing world, AI can be used to:
- Rapidly spread fake information about an investment opportunity
- Create a fake investment opportunity
- Produce deceptive telemarketing through voice cloning technology
- Impersonate celebrities to create fake endorsements
- Create frenzied situations that lead to misinformed, emotionally driven or impulsive investing decisions
What’s more, as AI’s capabilities have gained wider recognition among fraudsters, there has been a rise in investment scams promoting AI-driven trading systems that claim to “never lose” or offer “guaranteed returns.” These schemes have deceived investors and led to significant financial losses.
Grandparent scam
The grandparent scam was one of the top 10 frauds based on dollar loss in 2023, scamming Canadians out of more than $11 million. The scam involves a fraudster who poses as a loved one – typically a grandchild – who claims to be hurt or in trouble, and in need of money immediately.
Because scammers can now use AI to analyze a small clip of audio and clone a person’s voice, accent and intonation, they can call a loved one and genuinely sound like a grandchild in trouble, making this scam even more effective.
How to protect yourself
The availability, adaptability and evolution of AI has made it easier for fraudsters to trick targets out of money and information. Some of the tried-and-true defense strategies no longer apply, which means people and businesses must find new ways to protect themselves. Here are some tips:
- Look for inconsistencies in tone: Whether in an email, phone call or video call, stay alert for phrases that don’t match the tone of the conversation.
- Watch movements on a video call: If the “person” on the other end of the call is holding themselves particularly still, it could be a sign that they’re using face-swapping technology that doesn’t hold up when they move or turn.
- Keep family and friends up to date: If using a dating site, keep others in the loop about the people you’re connecting with – especially if a conversation or request seems strange. They may have a better perspective to spot a red flag.
- Stay skeptical: Always remain skeptical when faced with an out-of-the-ordinary request, particularly when it comes to divulging information or sending money.
The good news is that while AI is being used by fraudsters to enhance their tactics, it is also being used in defense against scams. Cyber security companies are using AI to identify and thwart attacks more quickly and more widely, and some dating sites are using AI moderators that can effectively spot fake personas. Remember, reporting cyber threats, scams or attempts plays an important role in helping to defend against them – if you come across a cyber scam, saying something can help protect yourself and others in the future.
It is critical that we all become more Cyber Aware and safeguard our online activities. Visit Be Cyber Aware for more tips.
Stay informed about any new or ongoing scams by checking RBC Current Scam Alerts.