In the first half of 2022, fraudulent financial transfers reached an estimated high of $3 billion.
Unfortunately, the second half of 2022 doesn’t seem to be going any better.
The FBI has warned the public against tech support scammers, and Elizabeth Warren has criticized Wells Fargo for rampant online banking fraud. Even more sinister are reports of money mules in America being used to launder funds gained through sextortion, a crime that has led to more than a few suicides.
Although the fetishized image of the black-hat hacker who declares "I'm in" amid a stream of technical jargon is overblown, the movie trope is true insofar as cybercriminals and cybersecurity specialists have a deeply adversarial relationship. Thanks to digitalization, that relationship now plays out over and through technology more than ever.
As digital security becomes more robust, a cat-and-mouse game ensues: black-hat hackers and other bad actors search for a new exploit or loophole in the updated defenses, and cybersecurity professionals rush to patch it.
One factor that can play to a cybercriminal's advantage is the human element. No matter how well financial institutions (FIs) stack their security tools, those looking to access illegitimate funds can focus on people rather than systems, relying on a mix of coercion, misleading information, tech jargon, fake websites, and consumers' limited digital expertise to achieve their ends.
Enter BioCatch, a company that leverages behavioral biometrics to identify fraud and cybercriminal activity purely by analyzing how consumers behave when using their banking and finance apps. Rather than keeping a snapshot of your face or retina, the company forms a similarly detailed profile of consumers by analyzing interaction patterns and behaviors like swipes, taps, and activity periods.
The pitch is straightforward and powerful: no Personally Identifiable Information (PII) needs to be shared and the firm can perform its analysis purely based on how a particular consumer interacts with their device.
I spoke with the company's Director of Fraud Strategy, Raj Dasgupta, to discover how potent the claims made for behavioral biometrics really are.
How does BioCatch work?
Raj Dasgupta: We look for behaviors that users show online, whether they're applying for a new account on the web or accessing their online banking account: what they do during those interactions, how they tap, how they move between different pages of online banking, how they hold their device.
From any session we can collect about 2,000 different attributes of behavior. Then we feed all of that into our predictive models, which spit out a score and several attributes to give us a sense of the risk associated with that session.
It could be a difference in user behavior. They log in once a week, they very rarely set up a payment, and so on. These kinds of things give us opportunities to observe user behavior and build a profile for them.
For example, I may be left-handed and use swipes. That tells BioCatch that this is a left-handed person. If there is a subsequent right-handed session, that will raise BioCatch's risk assessment. That's how we look for anomalous behaviors compared to the established profile of the user.
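The mechanism Dasgupta describes, building a per-user profile from past sessions and flagging deviations, can be sketched as a toy anomaly detector. The feature names (`swipe_angle`, `tap_ms`), the z-score threshold, and the scoring are illustrative assumptions for this sketch, not BioCatch's actual model or attributes.

```python
from statistics import mean, stdev

def build_profile(sessions):
    """Aggregate past sessions into (mean, std) per behavioral feature."""
    profile = {}
    for feature in sessions[0]:
        values = [s[feature] for s in sessions]
        profile[feature] = (mean(values), stdev(values))
    return profile

def session_risk(profile, session, z_threshold=3.0):
    """Score a new session as the fraction of features that deviate
    strongly (|z| > threshold) from the user's established profile."""
    anomalies = 0
    for feature, (mu, sigma) in profile.items():
        if sigma == 0:
            continue
        if abs(session[feature] - mu) / sigma > z_threshold:
            anomalies += 1
    return anomalies / len(profile)

# A left-handed user's history: negative swipe angles, consistent tap timing.
history = [
    {"swipe_angle": -30.0, "tap_ms": 110.0},
    {"swipe_angle": -28.0, "tap_ms": 120.0},
    {"swipe_angle": -32.0, "tap_ms": 105.0},
    {"swipe_angle": -29.0, "tap_ms": 115.0},
]
profile = build_profile(history)

# A subsequent "right-handed" session deviates sharply on swipe angle,
# so half the features are anomalous; a typical session scores zero.
risky = session_risk(profile, {"swipe_angle": 30.0, "tap_ms": 112.0})
normal = session_risk(profile, {"swipe_angle": -31.0, "tap_ms": 118.0})
```

In production, a system like this would use thousands of attributes and learned models rather than simple z-scores, but the principle of comparing each session against an established behavioral baseline is the same.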
Are user profiles static?
Raj Dasgupta: For the most part, they'll stay static.
However, every now and then, we’ll see two different profiles from the same login, for example when a spouse is sharing their account or somebody breaks their hand so they stop using their dominant one. Then we'll see that the profile has changed a little bit.
But if nothing else suggests it's fraudulent, it's safe. So, every once in a while the profiles change.
How does Biocatch detect scams where victims are being guided on the phone?
Raj Dasgupta: This is something that I hold dear to my heart, because I've had close family members who have been victims of such scams.
What will stand out is the user’s behavior at the time because although it's the genuine user who's logged in, they're not quite acting like themselves.
They'll show signs of duress, they'll show signs of hesitation. We can even pick up an active call on their phone.
We will notice when they're raising the mobile phone to their ear to listen to the fraudster's instructions, then putting it back down in their palm to carry out a transaction. Moreover, users are nervous: they don't tap like they usually do, and they take longer to press the button.
Those are little telltale signs that something unusual is going on. Excessive mouse wiggling, and on top of that the indication of an active call, is very predictive of a scam.
We look at all of these things together and assess that even though it's the genuine user who's logged in, they're not behaving like themselves, and it looks like a scam going on in real time.
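The signals Dasgupta lists are individually weak but predictive in combination. A minimal sketch of that idea is a weighted sum over observed signals; the signal names, weights, and threshold below are illustrative assumptions, not BioCatch's actual scoring.

```python
# Hypothetical weights for the behavioral signals described above.
SIGNAL_WEIGHTS = {
    "active_call": 0.40,    # phone call in progress during the session
    "hesitation": 0.20,     # longer-than-usual pauses before confirming
    "mouse_wiggle": 0.20,   # excessive aimless cursor movement
    "atypical_taps": 0.20,  # tap rhythm unlike the user's profile
}

def scam_risk(observed_signals):
    """Combine observed weak signals into a single scam-risk score."""
    return sum(SIGNAL_WEIGHTS[s] for s in observed_signals
               if s in SIGNAL_WEIGHTS)

# One signal alone is inconclusive; several together point to coaching.
genuine = scam_risk({"mouse_wiggle"})
coached = scam_risk({"active_call", "hesitation", "mouse_wiggle"})
flagged = coached >= 0.6  # hypothetical review threshold
```

A real system would learn these weights from labeled fraud data rather than hand-assign them, but the structure, many weak indicators aggregated into one risk score for the session, matches the approach described.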
How does the company know what data to collect? What is the provenance of this technology?
Raj Dasgupta: The answer to this comes from the company’s genesis story:
Formed in 2011, the company's founders were former members of the Israel Defense Forces, where they used online user behavior to detect terrorist activities. The founders then realized they could leverage the same kind of technology to detect fraud use cases, namely new account opening fraud, account takeover fraud, voice scams, and so on. That's where it all started.
How do you handle disclosure?
Raj Dasgupta: Disclosure is very dependent on the local applicable regulation. For example, in the United States we are not required to disclose, because we're using it for prompt detection, as a layer of defense against attacks.
In certain jurisdictions, IP addresses are considered PII; in Europe, for example, under the purview of GDPR. We are fully compliant with all GDPR requirements. But in general, we never collect any kind of personally identifiable information.