Sift’s latest Digital Trust and Safety Index describes how artificial intelligence (AI) is fueling a fraud surge that will challenge retailers and financial institutions. It’s behind a more-than-fourfold increase in the account hacking attempt rate in the first quarter of 2023 compared to all of 2022.
You’re not alone if you feel subject to more spam and scam attempts lately. Nearly 70% of American consumers have reported the same since November 2022, and nearly half (49%) say scams are also getting tougher to identify. Almost 20% believe they have been successfully phished in the last six months, with a similar share believing they have been victimized by account takeover or payment fraud.
Credit the advent of generative AI tools, quickly leveraged by fraudsters, said Sift trust and safety architect Brittany Allen. That combination of factors led to a 427% increase in the rate of account hacking attempts in the first quarter. In 1Q23, Sift blocked 40% more fraudulent content from its network than in all of 2022. In 2022, consumers reported losing $2.6 billion to impostor scams, with social media and phone calls producing the greatest damage.
E-commerce merchants would kill to only lose $2.6 billion to fraud. Global e-commerce fraud losses are projected to grow 16% by the end of 2023, reaching $48 billion. Between 2023 and 2027, cumulative online payment fraud losses are expected to top $343 billion.
How AI helps scammers
AI fosters a deadly combination of automation and higher accuracy, Allen said. Fraudsters can create intelligent-sounding communication in multiple languages, removing one of the tell-tale signs of a scam. Bots can vary text, so one actor can spin up multiple unique-looking accounts on a single platform.
Combine AI and automation enhancements with the deep web, and you have a scamming university available with a few keystrokes. That has led to the rise of organized fraud-as-a-service businesses, where experienced fraudsters sell bot systems and other tools to new entrants.
“There’s still a lot of interest on the generative AI side because it allows the fraudsters to conduct scams in languages they are not fluent in, and to do so in an effective way to scale and to write more business-appropriate messages,” Allen said. “A fraudster might not have the skill to write what seems like an email by a CEO, but generative AI can do that for them.”
With AI serving as a solid quality assurance tool, scammers can focus on pumping out messages at an alarming frequency. Automation lets them flood systems with vulnerability tests. The volume is high, and even if only a scant few attempts land, it’s well worth the (automated) effort.
It has also boosted their creativity. Allen herself has been added to groups on the productivity platform Atlassian, where scammers opened tickets forwarded from the genuine Atlassian site but containing fraudulent links. Others deploy chatbots to harvest one-time passwords that can access someone’s account.
Two ways it affects merchants and FIs
There are two profound implications for businesses. First, much of the fraud occurs off their platforms, on social media and via text. Their focus has to be on raising customer awareness once potential fraud indicators are detected. The best they can do is prevent damage by introducing small sources of friction that give a potential victim pause. Sift helps by analyzing user journeys for aberrant behavior patterns or signs that someone may be about to be victimized.
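To make the idea concrete, here is a minimal sketch (not Sift's actual logic; the baseline value and thresholds are illustrative assumptions) of how a merchant might flag a session whose pace is aberrant and insert a small friction step, such as a confirmation prompt:

```python
# Hypothetical friction check: sessions that move through pages far faster
# than a typical human baseline get paused for a confirmation step.

BASELINE_SECONDS_PER_PAGE = 8.0  # assumed average for legitimate users


def needs_friction(pages_visited: int, session_seconds: float) -> bool:
    """Return True when the session is at least 4x faster than the baseline."""
    if pages_visited == 0:
        return False
    seconds_per_page = session_seconds / pages_visited
    return seconds_per_page < BASELINE_SECONDS_PER_PAGE / 4


# A bot racing through 30 pages in 20 seconds is paused;
# a person browsing 5 pages in a minute is not.
print(needs_friction(30, 20.0))
print(needs_friction(5, 60.0))
```

The point of the friction is not to block the user outright but to slow the journey down enough that a potential victim, or a bot, is interrupted.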
“Maybe a future state could be where the information is shared more readily between organizations, and you would have a heads up on some of these scams,” Allen said. “But there’s still a lot of a siloed approach, unfortunately, for a lot of our customers dealing with this kind of fraud.”
As if they didn’t have enough to worry about, e-commerce merchants and financial institutions should also prepare to be on the hook for payment fraud that isn’t their fault. More than half of consumers (54%) believe they shouldn’t be held responsible if they were scammed out of payment information that was then used fraudulently. Of that 54%, 30% want the bank to cover the loss, while the remaining 24% expect the business to take the hit.
How Sift fights AI-abetted fraud
Sift utilizes its global network, strengthening each client through collective experience. Perhaps an IP address or email domain has been tied to a risky transaction elsewhere in the network; that influences the risk scoring for another customer if those addresses or accounts appear again.
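A toy sketch of that network effect (a hypothetical illustration, not Sift's implementation; the signal set, weights, and example data are invented for the example) might look like this: signals seen in risky transactions elsewhere in the network raise the score of any new event that reuses them.

```python
# Hypothetical network-based risk scoring: signals flagged by one customer's
# bad transaction increase the risk score for events seen by other customers.

RISKY_SIGNALS = {
    ("ip", "203.0.113.7"),                 # example: tied to a prior chargeback
    ("email_domain", "mailinator.example"),  # example: disposable-email domain
}


def risk_score(event: dict) -> float:
    """Return a 0-1 score; each matched network signal adds weight."""
    signals = {
        ("ip", event.get("ip")),
        ("email_domain", event.get("email", "@").split("@")[-1]),
    }
    matches = signals & RISKY_SIGNALS
    return min(1.0, 0.2 + 0.4 * len(matches))


# An event reusing both flagged signals scores the maximum; a clean one stays low.
print(risk_score({"ip": "203.0.113.7", "email": "new.user@mailinator.example"}))
print(risk_score({"ip": "198.51.100.1", "email": "jane@example.com"}))
```

The design choice worth noting is that no single signal is decisive; reuse of known-bad infrastructure only shifts the score, which is then combined with other behavioral evidence.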
“Being able to use AI for fraud prevention will allow you to look at a ton more signals than you can manually look at just as a human,” Allen said. “At the same time, while we’re doing our own testing and learning.”
Sift works best with customers when their analysis reveals patterns that allow them to change their strategy. But it’s a constant process.
“It takes work on the merchant side to anticipate that once you stop fraud in one pattern, the fraudsters will find a way to adjust,” Allen concluded.
Tony is a long-time contributor in the fintech and alt-fi spaces. A two-time LendIt Journalist of the Year nominee and winner in 2018, Tony has written more than 2,000 original articles on the blockchain, peer-to-peer lending, crowdfunding, and emerging technologies over the past seven years. He has hosted panels at LendIt, the CfPA Summit, and DECENT's Unchained, a blockchain exposition in Hong Kong. Email Tony here.