Why Criminals Like AI for Synthetic Identity Fraud

As generative AI technology becomes more widely available, cybercriminals are likely to take advantage of it to enhance their synthetic identity fraud capabilities. Unfortunately, experts say current fraud detection tools are unlikely to keep pace with the rising threat of generative AI-driven synthetic identity fraud, which could mean substantial financial losses in the coming years.

Synthetic identity fraud involves compiling stolen or fabricated personal information to create an individual who exists only digitally. That information can include attributes belonging to real people, such as birth dates and Social Security numbers, alongside fabricated details like email addresses and phone numbers.

This type of fraud has risen so rapidly that many cybersecurity professionals question how soon technology will be available to address the threat. A Wakefield Research survey of 500 fraud and risk professionals last fall found that 88% of respondents believe AI-generated fraud will worsen before new technology is created to prevent it.

Easy Tech, Lower Barrier to Entry

Cybercriminals have been turning to generative AI to create deepfake videos and voice prints of real people to defraud companies, says Matt Miller, principal of cybersecurity services at KPMG US. The rise of large language models (LLMs) and similar artificial intelligence technologies has made generating false images easier and cheaper for cybercriminals.

Cybercriminals’ use of generative AI varies with their level of sophistication, says Ari Jacoby, founder and CEO of Deduce. In the past, bad actors either had to write their own attack scripts or commission a software developer to do so. With the rise of generative artificial intelligence, they can now have these tools write a malicious script quickly and cheaply.

A malicious actor can instruct the generative AI application, “Please create an accurate New York driver’s license,” and it will be able to fabricate documents using photos of real people readily available online, Jacoby says, noting that existing defenses intended to prevent counterfeit IDs will “get crushed” by generative AI.

“If you want to use that data that already exists for almost everybody to create a selfie, that’s not hard,” he says. “There’s an enormous group of bad guys, bad folks out there, that are now weaponizing this type of artificial intelligence to accelerate the pace at which they can commit crimes. That’s the low end of the spectrum. Imagine what’s happening on the high end of the spectrum with organized crime and enormous financial resources.”

There are also copycat versions of AI tools such as ChatGPT available on the Dark Web, says Nathan Richter, senior partner at Wakefield Research.

Getting Worse Before Better

The Wakefield Research survey data shows organizations are already being affected by the rise in synthetic identity fraud. According to the report, 76% of respondents believe their organization has approved accounts for customers using synthetic identities. The fraud and risk professionals surveyed also estimate that synthetic identity fraud has risen by an average of 17% over the past 24 months.

Nearly a quarter (23%) of respondents estimate that the average synthetic identity fraud incident costs between $10,000 and $25,000, while another fifth put the figure between $50,000 and $100,000. For financial firms, those per-incident costs can add up quickly.

Many cybersecurity professionals see the problem of synthetic identity fraud becoming worse before it gets better. The Deloitte Center for Financial Services predicts that synthetic identity fraud could lead to $23 billion in losses by 2030.

The openness among survey respondents to discuss the issue suggests that synthetic identity fraud is becoming more pervasive, Richter says.

“Typically, when you do research amongst highly trained professional audiences, there’s a certain amount of professional pride that makes it difficult to admit any kind of fault or problem,” Richter says. “We don’t have that problem here for the most part. We have respondents that are readily admitting this is an enormous issue. It’s resulting in significant losses per incident, and it’s expected to get worse before it gets better. I can tell you, as a researcher, that is extremely rare.”

Fighting Fraud With Cyber

Tackling this problem requires companies to adopt a multilayered approach, says Mark Nicholson, principal of cyber and strategic risk at Deloitte. Part of the solution entails using artificial intelligence and behavioral analytics to distinguish between real customers and fraudsters.

Beyond verifying a customer’s identity at a particular point in time, companies, particularly in financial services, need to understand customers’ behaviors over a longer period and continue to authenticate them during those interactions, Nicholson says. In addition to behavioral analytics, companies are weighing other options, such as harnessing biometric data, third-party data, fraud data sources, risk assessors, and session monitoring tools.
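Nicholson’s multilayered approach can be made concrete with a small sketch. The Python below is a hypothetical illustration of continuous, risk-based authentication that blends behavioral, device, and third-party signals into a single session risk score; the signal names, weights, and the 0.7 step-up threshold are assumptions made for demonstration, not a description of any vendor’s actual model.

```python
# Hypothetical sketch of continuous, risk-based authentication.
# Signal names, weights, and the step-up threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    """Signals a fraud platform might collect during a customer session."""
    typing_cadence_anomaly: float   # 0.0 (typical for this user) to 1.0 (highly unusual)
    device_reputation_risk: float   # 0.0 (known, trusted device) to 1.0 (flagged device)
    third_party_fraud_score: float  # 0.0 to 1.0 from an external fraud data source
    identity_seen_elsewhere: bool   # same SSN/birth-date combination seen in other applications


def session_risk(signals: SessionSignals) -> float:
    """Blend behavioral, device, and third-party signals into one risk score."""
    score = (
        0.35 * signals.typing_cadence_anomaly
        + 0.25 * signals.device_reputation_risk
        + 0.25 * signals.third_party_fraud_score
    )
    if signals.identity_seen_elsewhere:
        score += 0.15  # reused identity attributes are a classic synthetic-ID marker
    return min(score, 1.0)


def requires_step_up(signals: SessionSignals, threshold: float = 0.7) -> bool:
    """Trigger re-authentication (e.g., a biometric check) when risk is high."""
    return session_risk(signals) >= threshold


if __name__ == "__main__":
    suspicious = SessionSignals(
        typing_cadence_anomaly=0.9,
        device_reputation_risk=0.6,
        third_party_fraud_score=0.8,
        identity_seen_elsewhere=True,
    )
    print(round(session_risk(suspicious), 2), requires_step_up(suspicious))  # roughly 0.81, True
```

In practice, a production system would learn such weights from labeled fraud outcomes and route high-risk sessions into step-up checks such as a biometric challenge or manual review.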

“Just as we contend with zero-days and we patch applications, we’re going to have to understand how generative AI is being used on a continuous basis and adapting as quickly as we can in response,” Nicholson says. “There’s no silver bullet, I don’t think. And it’s going to take a concerted effort by everyone involved.”

Beyond cybersecurity tools, companies must also evaluate the human risk factors that have emerged alongside generative AI and synthetic identity fraud, and train employees to spot those risks, Miller says. Companies must understand where their processes are susceptible to human error.

“Can your leadership call up your treasury department and move money just with a voice phone call? If your CEO was deepfaked or your CFO was deepfaked, could that result in financial loss?” Miller says. “Look at some of those process controls and put counterbalances in place where necessary.”
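As a purely hypothetical illustration of the counterbalances Miller describes, the sketch below encodes two simple process controls: voice-initiated transfers must be verified out of band (a callback to a number already on file), and large transfers require two distinct approvers. The channel labels and dollar threshold are assumptions made up for this example.

```python
# Hypothetical sketch of a process "counterbalance" for money movement.
# Channel labels, the dollar threshold, and approval rules are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class TransferRequest:
    amount_usd: float
    initiated_via: str                  # e.g., "phone_call" or "signed_portal_session"
    out_of_band_verified: bool = False  # callback to a number on file, not the caller's
    approvers: set[str] = field(default_factory=set)


def transfer_allowed(req: TransferRequest, dual_approval_above: float = 50_000) -> bool:
    """Apply simple process controls before funds can be released."""
    # Voice instructions can be deepfaked, so they always require out-of-band verification.
    if req.initiated_via == "phone_call" and not req.out_of_band_verified:
        return False
    # Large transfers require two distinct human approvers regardless of channel.
    if req.amount_usd > dual_approval_above and len(req.approvers) < 2:
        return False
    return True


if __name__ == "__main__":
    ceo_call = TransferRequest(amount_usd=250_000, initiated_via="phone_call")
    print(transfer_allowed(ceo_call))  # False: no callback verification, no second approver
```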

The Biden administration’s executive order introducing new standards for AI safety and security is a good first step, but more regulation is needed to safeguard the public, Jacoby says. Tech companies are lobbying for self-regulation, but that may not be enough to address the rising threat of artificial intelligence, he adds, noting that self-governance has not benefited consumers in the past.

“I don’t think that the talking heads on Capitol Hill understand all of the ramifications, nor should we expect them to in the early innings of this game,” Jacoby says. “It’s very difficult to regulate these things.”

In addition to regulatory and policy controls, Miller says he foresees technological controls being implemented so that artificial intelligence can be used in ways stakeholders agree are appropriate. While those guardrails are worked out, however, companies must remain vigilant, because digital adversaries can build their own models and infrastructure to execute fraud.

Ultimately, artificial intelligence companies will have to play a role in mitigating the risks associated with the technology that they’ve created.

“It’s incumbent upon the institutions that are providing this technology to not only understand them, but to really understand the risks associated with them and be able to educate on the proper use and also be able to control their own platforms,” Miller says. “We always talked about it in cyber historically as spy versus spy, but in many cases we’re now seeing AI versus AI.”
