Amid the cybersecurity talent shortage, AI is filling the void
Experts and attackers both rely on AI
Ann Johnson has her work cut out for her. As Corporate Vice President, Cybersecurity Solutions Group, for Microsoft, she needs to keep on top of the rapidly changing landscape of cybersecurity threats globally.
There's a chronic talent shortage in cybersecurity, with estimates that there will be three million vacant cybersecurity jobs globally by 2021. But even without that shortfall, there are more attacks than humans can keep up with. Johnson told Spark host Nora Young that Microsoft sees about six and a half trillion threat signals globally, every day.
Increasingly, cybersecurity relies on artificial intelligence to help spot and combat that threat.
Johnson was in Toronto recently, where she met with Young. Here is part of their conversation.
What role does AI play in cybersecurity?
We use a machine learning engine to distill those [trillions of] attacks, to separate the signal from the noise and surface what we really need to investigate. Then we use artificial intelligence to actually model them.
So, let's say machine learning comes out with a hundred things that seem relevant. Artificial intelligence will take those hundred things and say: this particular attack is going to actually impact a thousand servers in your data center. Whereas [another] attack will impact two of your own users. So we want you to prioritize the one that'll attack a thousand servers in your data center first. And it can do that work in milliseconds. We're not waiting minutes or hours.
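The two-stage flow Johnson describes, machine learning to filter the noise and then an impact ranking to decide what to handle first, can be sketched roughly as below. The field names, the score threshold, and the "count of affected hosts" heuristic are illustrative assumptions, not Microsoft's actual pipeline.

```python
# Hypothetical sketch of the triage-then-prioritize flow described above.
# Field names and the impact heuristic are illustrative assumptions.

def triage(signals, threshold=0.9):
    """ML stage: keep only signals scored as likely real attacks."""
    return [s for s in signals if s["ml_score"] >= threshold]

def prioritize(attacks):
    """AI stage: rank surviving attacks by estimated blast radius."""
    return sorted(attacks, key=lambda a: a["affected_hosts"], reverse=True)

signals = [
    {"id": "a1", "ml_score": 0.95, "affected_hosts": 1000},  # data-center-wide
    {"id": "a2", "ml_score": 0.92, "affected_hosts": 2},     # two users
    {"id": "a3", "ml_score": 0.10, "affected_hosts": 50},    # filtered as noise
]

queue = prioritize(triage(signals))
print([a["id"] for a in queue])  # the 1000-server attack is handled first
```

The point of the second stage is ordering, not detection: with trillions of daily signals, even a small fraction surviving triage must still be queued by potential damage.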
So when we look at using machine learning for this kind of thing, is it based on training the systems on large amounts of data about how malware works and what to look for, and then essentially doing pattern recognition?
That's a good way to describe it. It's taking data not just from malware but from your computer (it's all anonymous data): behavioural data from the applications you're running. Your Microsoft Word application is going to behave in a way that's known to the network. If it starts to behave in an abnormal way, the machine learning engine is going to suspect there's something wrong with the software itself. The same goes if the user starts to behave in a way we don't expect the user to behave, or the computer starts to behave in a way we don't expect the computer to behave. So we look at all of that, and then we're able to quickly detect and block. We can stop the user, we can stop the application. And once we block, there's no other entry point for that computer if we determine it really is malicious.
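In its simplest form, the behavioural detection Johnson describes means keeping a baseline of what an application normally does and blocking anything outside it. The toy sketch below uses a hand-written baseline and a made-up application name; real systems learn the baseline statistically from far richer telemetry.

```python
# Toy sketch of behavioural anomaly detection: a per-application baseline of
# "normal" actions, with anything outside it flocked for blocking.
# The baseline and action names are illustrative assumptions.

NORMAL_BEHAVIOUR = {
    "winword.exe": {"open_document", "save_document", "print", "spellcheck"},
}

def is_anomalous(app: str, action: str) -> bool:
    """Flag any action outside the application's known baseline."""
    return action not in NORMAL_BEHAVIOUR.get(app, set())

def handle(app: str, action: str) -> str:
    if is_anomalous(app, action):
        return f"BLOCK {app}: unexpected action '{action}'"
    return "allow"

print(handle("winword.exe", "save_document"))     # allow: normal for Word
print(handle("winword.exe", "spawn_powershell"))  # blocked: abnormal for Word
```

A word processor spawning a shell is a classic example of the "software behaving in a way that's not known to the network" that the engine would suspect.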
So if you folks are using machine learning does that mean that the 'black hats' are using machine learning as well?
Of course they are. Cybercrime is a multi-trillion-dollar industry. They're very well-funded, and they can also self-fund. Think about a large wholesale breach that's been in the news, where millions of credentials are stolen: the bad actors take what they want and then sell the rest for monetary gain. So they have the funding to acquire whatever advanced technology they want to deploy.
Is part of the concern that the 'bad guys' are getting better at disguising what they do, so it's actually harder to spot those patterns? Presumably they know that your systems are being trained on piles of data of what malware actually looks like. Are they getting better at disguising it?
They are. We do what's called reverse engineering: we look at the malicious code during an attack and try to understand what it was going to do. We did that for one of the very large attacks. The first thing my team thought was: 'Oh, this malware is adapting in the wild' (meaning after it's already out there), because depending on what antivirus agent it saw, it behaved differently. So we were very worried about it. When we went deeper, we realized it had been coded to do that. The attackers knew what tools people might have as an antivirus agent, and they coded for the three most popular ones. They're getting more sophisticated.
There are a couple of things we can do. One is to keep training our machine learning models, which we do. The other is to get users to stop using passwords. The wholesale theft of passwords is one of the worst problems the industry has right now, and the way to solve it is to get users to stop using passwords altogether. They can use any type of biometric authentication, or the phone-based authentication that we're rolling out. A lot of our enterprise customers are going all-in on being password-less. We need to get that to the consumer.
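Password-less schemes generally replace a shared secret sent over the wire with a challenge-response: the server sends a random challenge, and a device the user unlocks (with a fingerprint, face, or phone prompt) signs it. The sketch below is a minimal illustration of that idea, using HMAC with a per-device secret as a stand-in for the real public-key cryptography used by standards like FIDO2; nothing here is Microsoft's implementation.

```python
# Minimal challenge-response sketch of password-less sign-in.
# HMAC with a shared device secret stands in for real public-key crypto
# (FIDO2-style systems store only a public key on the server).
import hashlib
import hmac
import os

device_secret = os.urandom(32)    # lives on the user's phone or security key
server_copy = device_secret       # stand-in; real servers hold a public key

def sign(challenge: bytes) -> bytes:
    """Device side: signing is gated by biometrics or a phone prompt."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, signature: bytes) -> bool:
    """Server side: no password is ever transmitted or stored."""
    expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = os.urandom(16)        # fresh per login, so replays fail
print(verify(challenge, sign(challenge)))  # True
```

Because each login uses a fresh random challenge, a stolen response is useless for the next login, which is exactly the property that wholesale password theft lacks.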
This interview has been edited for length and clarity.