Recognizing Synthetic Identity Theft Through Identity Verification
By JT Taylor
Two weeks before Christmas 2022, I read the devastating news that I had once again lost my identity. On Dec. 10, 2022, a database containing more than 80,000 FBI InfraGard members’ names, email addresses, Social Security numbers and dates of birth was found for sale on a dark web hacking forum.
InfraGard is a partnership between the FBI and the private sector to protect U.S. critical infrastructure. Before I retired as a special agent with the U.S. Secret Service, I often led InfraGard workshops and educational events on emerging technologies and the cyber threats facing the public and private sectors.
Regrettably, this is not the first — nor will it likely be the last — time an organization loses my identity. The last time (that I know of) was in 2014, when the U.S. Office of Personnel Management was hacked. That single attack resulted in the theft of 21.5 million Americans' background investigations. According to Verizon's 2022 Data Breach Investigations Report, there were more than 5,200 confirmed data breaches. At roughly 100 data breaches per week, there's a high probability that you or someone you know may be impacted. There's an even greater likelihood that you will experience a fraud scheme known as synthetic identity theft.
According to the FBI and the Federal Reserve, synthetic identity theft is the fastest-growing financial crime in the United States. A fraudster commits synthetic identity theft by taking legitimate personally identifiable information (PII), such as a Social Security number (SSN), and combining it with stolen or fabricated PII to create a new, yet fake, identity.
How Synthetic Identity Theft Works
Imagine a fraudster buying my SSN on the dark web and combining it with the fake name "John Doe." The scammer then adds a prepaid cell phone and a secured credit card to the fake identity. The threat actor now has a fictitious name and a real SSN tied to a victim, a phone number, and a financial history. The longer the fraudster "ages" the synthetic identity, the more legitimate it appears. The effects of synthetic identity theft often go undetected and unfelt for years, making investigations and victim recovery lengthy, complex, and expensive.
Does that scenario sound far-fetched? Consider the case of two South Florida men who created more than 750 synthetic identities. In 2017, they stole the PII of children and incarcerated individuals, created synthetic identities, and opened bank accounts. The fraudsters then used the fake identities to set up shell companies and open lines of credit in their names. During the COVID-19 pandemic, these men joined forces with three others and took their scheme to the next level. The result: in 2022, they pleaded guilty to defrauding the Paycheck Protection Program of more than $100 million.
What am I doing about my most recent identity theft? Fortunately, I don't have to do much — I use ID.me for most of my sensitive transactions. For example, if a scammer were to try to submit a fraudulent tax return to the Internal Revenue Service using a synthetic identity built from my PII, ID.me security controls would prevent that person from even creating an account. To steal my identity, the threat actor would not only need to defeat me but would also have to take over my ID.me account.
How ID.me Combats Synthetic Identity Theft
How does ID.me – which was recently added to the Federal Reserve's list of recognized vendors for synthetic fraud mitigation services – prevent this type of illicit activity? At a high level, ID.me is purpose-built with leading verification technologies maintained and monitored by industry experts. From a human capital perspective, ID.me employs some of the best engineers, data scientists, and fraud investigators to ensure a safe and fortified identity verification process. Together, these elements combine to deliver privacy and security controls that protect people and institutions from being victimized.
ID.me also works to stay at the forefront of emerging identity verification practices and technologies. ID.me employs several approaches, including supervised machine learning (ML) and artificial intelligence (AI), to identify fraud and synthetic identity theft. Some examples:
- Training on tagged and labeled data: Supervised machine learning algorithms are trained to spot synthetic identity theft on a dataset of previously fraudulent transactions from a specific authenticator (e.g., a cell phone authenticator app). The trained model can then flag new fraudulent transactions that resemble ones the ID.me network has already seen (a minimal sketch of this approach appears after this list).
- Anomaly detection: Algorithms can be taught to identify patterns of typical behavior and flag any departures as possibly fraudulent, escalating them to a human investigation. Many sophisticated financial institutions use this type of machine learning to flag a transaction as suspicious when there is a sudden increase in spending, a change in location, or an unusual payment instrument.
- Clustering: Based on shared attributes, such as the type of transaction or the location of the fraud, machine learning algorithms can group together similar cases of fraud. Clustering can reveal fraudulent activity trends.
- Natural language processing: Machine learning algorithms can examine text-based interactions, such as emails, texts, or social media posts, to spot suspicious activity. For instance, an algorithm may be trained to recognize linguistic patterns frequently used in phishing scams.
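To make the first approach concrete, here is a minimal, hypothetical sketch of training a supervised classifier on labeled transaction data and scoring a new transaction. This is not ID.me's actual pipeline; the features, data, and model choice are illustrative assumptions only.

```python
# Illustrative sketch only: a generic supervised fraud classifier, not
# ID.me's actual system. Features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features for past transactions: account age (days), credit
# applications in the last 30 days, and distance (km) between the stated
# address and the device's IP geolocation.
X = rng.normal(loc=[300.0, 2.0, 50.0], scale=[200.0, 2.0, 100.0], size=(5000, 3))

# Hypothetical labels from past investigations: 1 = confirmed fraudulent,
# 0 = legitimate. Here fraud loosely tracks many applications or far-away logins.
risky = (X[:, 1] > 5) | (X[:, 2] > 300)
y = (risky & (rng.random(5000) < 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "Training on tagged and labeled data": fit the model on known outcomes.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score a new transaction; a high score means it resembles fraud the network
# has already seen and should be escalated rather than silently approved.
new_transaction = np.array([[10.0, 8.0, 900.0]])  # young account, many applications, far away
fraud_score = model.predict_proba(new_transaction)[0, 1]
print(f"fraud score: {fraud_score:.2f}")
```

The anomaly-detection bullet follows a similar workflow, except the model learns what "typical" behavior looks like from unlabeled data and flags departures for human review.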
But it's important to deploy machine learning carefully. While ML and AI help speed the disposition of a digital transaction, overreliance on algorithms can lead to innocent people being blocked from completing a legitimate activity. This inadvertent blocking can hurt disenfranchised communities and the unbanked, especially as more government services go digital.
ID.me security processes and controls have privacy and equity built into them, ensuring that every algorithm has a human relief valve. Trained professionals continuously monitor our supervised ML algorithms to make sure they are working accurately and as intended.
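As a purely illustrative sketch of that human relief valve (the thresholds, names, and routing logic below are assumptions, not ID.me's actual controls), a model's fraud score can route a transaction to automatic approval, human review, or denial, so the algorithm never gets the final say on an ambiguous case:

```python
# Illustrative only: route a model's fraud score to a decision, escalating
# uncertain cases to human investigators instead of auto-blocking them.
# Thresholds and names are hypothetical.
from dataclasses import dataclass

APPROVE_BELOW = 0.20   # low risk: let the transaction proceed automatically
REVIEW_BELOW = 0.85    # ambiguous: escalate to a trained fraud investigator

@dataclass
class Decision:
    action: str   # "approve", "human_review", or "deny"
    score: float

def disposition(fraud_score: float) -> Decision:
    if fraud_score < APPROVE_BELOW:
        return Decision("approve", fraud_score)
    if fraud_score < REVIEW_BELOW:
        # The algorithm does not get the final say: a person reviews the case,
        # which protects legitimate users from being blocked by a false positive.
        return Decision("human_review", fraud_score)
    return Decision("deny", fraud_score)

print(disposition(0.10))   # Decision(action='approve', score=0.1)
print(disposition(0.55))   # Decision(action='human_review', score=0.55)
print(disposition(0.95))   # Decision(action='deny', score=0.95)
```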
Identity verification is not a linear process. An identity isn’t necessarily legitimate just because a data broker has a given name coupled with a given SSN. ID.me security controls look for breadth and depth to determine whether an identity is real or synthetic. When was the last time you changed your email address, IP address, phone number, or driver’s license number? Legacy identity verification providers may use one of these individual variables to verify legitimacy. But ID.me verifies the whole individual, not just a single variable in the formula. Add to this ID.me’s policy of not selling, renting, or trading your personal data, and you have the recipe for a secure digital transaction environment.
JT Taylor is ID.me’s Senior Director of Fraud Investigations and Operations