As new security technologies shield us from cybercrime, a slew of adversarial technologies match them step for step. The latest such advance is the rise of digital doppelgängers—virtual entities that mimic real users' behavior authentically enough to fool advanced anti-fraud algorithms.
In February, Kaspersky Lab's fraud-detection teams uncovered a darknet marketplace called Genesis that was selling digital identities for anywhere from US $5 to US $200. The price depended on the value of the purchased profile—for example, a digital mask that included a full user profile with bank login information would cost more than a browser fingerprint alone.
The masks purchased at Genesis could be used through a browser and proxy connection to mimic a real user’s activity. Coupled with stolen (legitimate) user accounts, the attacker was then free to make new, trusted transactions in the user’s name—including with credit cards.
“We see a clear trend of carding fraud increasing around the world,” says Sergey Lozhkin, a senior security researcher with Kaspersky Lab’s Global Research & Analysis Team, “[but] while the industry invests heavily in anti-fraud measures, digital doppelgängers are hard to catch.”
Digital masks are unique combinations of a user’s device fingerprint (a mix of device ID, hardware, operating system, IP address, screen resolution, firmware versions, browser, browser plug-ins, time zone, GPU information, WebRTC IPs, TCP/IP fingerprint, cookies, and much more) and their personal behavioral attributes (time spent at specific online stores, interest-related behavior, mouse/touch screen behavior, etc.). The unique complexity of each user plays a key role in cybersecurity today, with companies relying on machine learning–based algorithms to weed out fraudulent transactions.
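As a rough illustration, a device fingerprint can be thought of as a stable digest over whatever attributes the collector can read from a browser or device. The attribute names and the SHA-256 scheme below are illustrative assumptions, not the internals of any particular anti-fraud product:

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonical (sorted-key) view of the collected attributes.

    The same device, reporting the same attributes, always yields the
    same digest -- which is exactly what makes a copied fingerprint
    valuable to a fraudster.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes of the kind listed above.
profile = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen_resolution": "1920x1080",
    "timezone": "America/New_York",
    "plugins": ["pdf-viewer"],
    "webgl_renderer": "ANGLE (NVIDIA GeForce GTX 1060)",
}

print(device_fingerprint(profile))  # 64-character hex digest
```

Real fingerprinting systems combine far more signals (and weigh behavioral ones over time), but the core idea is the same: a reproducible identifier derived from many small, individually innocuous data points.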
Imran Malek, formerly a software product manager at a machine learning–based advertising technology company, explains these defenses as “another layer of security that can be applied to applications and platforms to help preserve their security.” The earlier rules-based security paradigm, he explains, would permit or restrict access depending on whether particular conditions were met. “Thanks to machine learning, you can now implement paradigms that rely less on rigid rules and more on probabilities [which] are then reinforced or adjusted based on new data that gets into the system. More importantly, you can use lots of different data points and you can make a decision based [on] the specific combinations of data points.”
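The contrast Malek draws can be sketched in a few lines: a rules-based check either passes or fails on fixed conditions, while a probabilistic check folds many weak signals into one score. The features, weights, and threshold below are invented for illustration, not values from any real system:

```python
def rules_based(ip_matches: bool, device_known: bool) -> bool:
    # Rigid paradigm: any failed condition blocks the transaction.
    return ip_matches and device_known

def probabilistic(features: dict, weights: dict,
                  threshold: float = 0.5) -> bool:
    # ML-style paradigm: many data points combine into one fraud
    # score; in a real system the weights would be learned and
    # adjusted as new data arrives.
    score = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return score < threshold  # below threshold: allow

weights = {"new_device": 0.4, "odd_hour": 0.2, "ip_mismatch": 0.5}

# One suspicious signal alone is tolerated...
print(probabilistic({"new_device": 1.0}, weights))               # True
# ...but a combination of signals tips the score over the line.
print(probabilistic({"new_device": 1.0, "ip_mismatch": 1.0}, weights))  # False
```

The key difference is the last case: no single rule fired, but the *combination* of data points drove the decision—which is also why a doppelgänger must fake many signals at once, not just one.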
The doppelgängers on Genesis mimicked authentic digital masks, thereby co-opting cybersecurity techniques to get past fraud-detection protocols. “If there’s machine learning on one side, there’s going to be machine learning on the other,” Malek says. “Now, hackers and bad actors are implementing their own as an exercise of adversarial machine learning.”
It is now common practice for hackers to find and copy digital fingerprints, along with other personally identifying information, from machines they target with malware. “Hackers at this point have been able to collect, curate, and maintain a lot of data of users they’ve compromised,” says Malek. “Meaning, if your laptop gets hacked, the prize isn’t necessarily the hacked machine—it is all of the data on your machine, including all of the identifying information.”
The Genesis developers made it easy to deploy a digital doppelgänger through their .crx plug-in for Chromium-based browsers, which let a buyer install a stolen profile with a single click and appear indistinguishable from the legitimate user. Then it was just a matter of connecting to a proxy server with an IP address from the victim's location to bypass any verification mechanisms. Genesis also provided algorithms, to be used with the plug-in, that generated random, unique fingerprints that would not trigger any alarms.
The team at Kaspersky was surprised at the scale on which Genesis operated, though it was hardly the first or only instance of digital identities being traded on the darknet. An earlier report from Kaspersky Lab, for instance, revealed that stolen login profiles were being sold for as little as US $1 per account, with dealers offering lifetime warranties and bulk discounts.
Privacy browsers like Tenebris's Sphere can also be used to create digital identities indistinguishable from real ones—though the original intent of this feature was to throw off digital surveillance technologies. Sphere is a fully functional browser that comes with an advanced fingerprint-configuration utility and an activity emulator that hackers can program to open particular websites, follow links, linger on pages, and so on—all activities that can trick the behavior-analysis modules of fraud-detection systems. Tenebris operates on a subscription model: US $100 for a one-month subscription to the browser with advanced configuration utilities, or US $500 with access to the Genesis database thrown in.
When it comes to preventing these kinds of attacks, cybersecurity specialists run up against the expectation that anti-fraud measures remain invisible to users and never hinder the user experience. "A false positive is absolutely unacceptable," says Malek, because it would lock a real user out of their own account. "Accordingly, these systems have to be really careful about what they identify as human versus bot traffic, and the bot makers are aware of this."
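That constraint can be shown with a toy example: if the decision threshold must sit above every genuine user's fraud score, some fraudulent scores inevitably fall below it. The score values here are invented for illustration:

```python
# Hypothetical fraud scores assigned by a detection model.
genuine_scores = [0.05, 0.12, 0.20, 0.31]   # real users
fraud_scores   = [0.28, 0.45, 0.70, 0.90]   # bots / doppelgängers

# Zero false positives by construction: the threshold clears
# every genuine user's score.
threshold = max(genuine_scores)

# The cost of that guarantee: any fraud scoring at or below the
# threshold slips through undetected.
missed = [s for s in fraud_scores if s <= threshold]
print(len(missed))  # 1 fraudulent transaction gets through
```

A doppelgänger's whole purpose is to push its score into that protected low range—which is why, as Lozhkin notes, they are so hard to catch.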
Payal Dhar (she/they) is a freelance journalist covering science, technology, and society. They write about AI, cybersecurity, surveillance, space, online communities, games, and any shiny new technology that catches their eye. You can find and DM Payal on Twitter (@payaldhar).