Criminal artificial intelligence may be coming for your money: Don Pittis

We worried about artificial intelligence taking over the world. What about it emptying your bank account?

'Deep fake' rip-off of British energy company sounds a warning bell

A British energy company was recently duped out of 220,000 euros by a 'deep fake' computer-generated phone call posing as one of the firm's top executives. (Shutterstock)

If your boss phoned and asked you to do something, would you do it? After reading this, maybe you'll think twice.

When the first reports came in about a theft committed using a "deep fake" phone call that ordered an employee to send a quarter of a million dollars to a secret account in Hungary, it was easy to scoff.

But now multiple reports have confirmed the story with the British energy company's French insurer, Euler Hermes.

According to Rudiger Kirsch, an expert in fraud at the credit insurance company, the CEO of the company's British subsidiary clearly recognized the slight German accent and inflection of his boss, who asked him to transfer the 220,000 euros.

"The caller said the request was urgent, directing the executive to pay within an hour, according to the company's insurance firm," reported the Wall Street Journal.

The fact that the British exec was willing to act on the call is evidence of the quality of the fake and an example of how ill-prepared we are in the face of unfamiliar new technology. Until now, the voice of a friend or acquaintance was enough to establish their bona fides. No more.

AI accomplice

So far, the corporate victims of the scam have not been named and, while police are investigating, no suspects have been identified. The fact that the perpetrators knew the names of, and the relationship between, the two executives, had a good sample of the senior person's voice and an idea of where he might credibly want money sent remains suspicious.

But whoever the crooks are, it seems artificial intelligence was an accomplice: the kind of AI that mimics famous people and makes them seem to say words they never uttered, sometimes with video to match.

So far, deep fakes have been used as a kind of internet party trick. But there have been warnings they could be used for villainy.

Now that we know how convincing they can be, Canadians may be more wary — rightly or not — of politicians who seem to say outrageous things. Would a real person call teenage climate activist Greta Thunberg mentally unstable? 

According to most experts, this is the first time voice imitation AI has been used to commit a crime. No one seems quite sure if AI of other kinds has been used for criminal activity. It may be hard to tell.

"The fact that these tools are so easy to get hold of and so easy to, sort of, craft to your own purposes means, I suspect, we're going to see a lot more of this stuff," said University of Regina computer science professor David Gerhard.

When the late physicist Stephen Hawking warned about the menace of artificial intelligence, he was talking about the danger that machine intelligence could wipe people from the face of the earth.

Under the science fiction scenario posed by Arnold Schwarzenegger's Terminator movies, an artificial intelligence called Skynet takes over the world's missiles and battles humanity for control of the planet.

While that would be bad, the warning of this deep fake fraud is that the more immediate danger of AI may be far more mundane. 

It is well known that AI is already widely used as a security tool. Could AI be used to thwart it? As Canadian artificial intelligence pioneer Jonathan Schaeffer told me four years ago, banks were already commonly using AI to screen your credit card use, disallowing, say, an expensive watch in Vladivostok on a card you normally use for Winnipeg car rentals.
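The kind of screening Schaeffer describes can be sketched, very roughly, as anomaly detection over a cardholder's history: flag a transaction that happens somewhere the customer never shops, or that is far larger than their typical spend. The function, field names and thresholds below are illustrative assumptions, not any bank's actual system:

```python
# Toy anomaly check in the spirit of card-fraud screening:
# flag a transaction if its city is unseen in the customer's
# history, or its amount is a large statistical outlier.
from statistics import mean, stdev

def is_suspicious(history, txn, z_threshold=3.0):
    """Flag txn if its city is unfamiliar or its amount is a >3-sigma outlier."""
    usual_cities = {t["city"] for t in history}
    amounts = [t["amount"] for t in history]
    mu, sigma = mean(amounts), stdev(amounts)
    unfamiliar_city = txn["city"] not in usual_cities
    outlier_amount = sigma > 0 and (txn["amount"] - mu) / sigma > z_threshold
    return unfamiliar_city or outlier_amount

# A history of routine Winnipeg car-rental-sized charges...
history = [{"city": "Winnipeg", "amount": a} for a in (45, 60, 52, 48, 55)]
watch = {"city": "Vladivostok", "amount": 8500}   # expensive watch abroad
rental = {"city": "Winnipeg", "amount": 58}       # routine charge

print(is_suspicious(history, watch))   # True  (flagged)
print(is_suspicious(history, rental))  # False (allowed)
```

Real systems replace these hand-written rules with machine-learned models trained on billions of transactions, but the underlying idea — score how far a transaction departs from established behaviour — is the same.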

Gerhard says, nowadays, AI remains a constant in our daily lives, with bots buying and selling stocks, and a complex machine-learning program deciding which ads you will see when you use a Google search.

So far, bank accounts and other automated security systems appear relatively resistant to AI penetration.

"Even if bad guys throw artificial intelligence at bank software, the software's not going to break. What's going to break first is the people, which is what this case is about," said Gerhard.

"It's not people trying to use software to break the encryption on the bank vault, it's trying to use artificial intelligence to trick a person who has the credentials to get into the bank vault."

Apparently, experts are thinking of ways to use counter-AI to detect fake voices, but the real solution may require relatively simple human caution, such as calling back to confirm a request.

And when the crooks are caught, it will likely be due to human error.

It seems the fraud was finally exposed when, on a subsequent call by the crooks, the executive the AI was imitating was already speaking on the other line.

Follow Don on Twitter @don_pittis

About the Author

Don Pittis

Business columnist

Don Pittis was a forest firefighter, and a ranger in Canada's High Arctic islands. After moving into journalism, he was principal business reporter for Radio Television Hong Kong before the handover to China. He has produced and reported for the CBC in Saskatchewan and Toronto and the BBC in London. He is currently senior producer at CBC's business unit.
