McAfee CTO says human-machine teams will stop cybercrime better


Cybersecurity firm McAfee was born before the current artificial intelligence craze. The company recently spun out of Intel at a $4.2 billion valuation, and it has become a giant among tech security firms. But lots of rival AI startups in cybersecurity (like Deep Instinct, which raised $32 million yesterday) are applying the recent advances in deep learning to the task of keeping companies secure.

Steve Grobman, chief technology officer at McAfee, believes that AI alone isn’t going to stop cybercrime, however. That’s in part because the attackers are human, and humans are better at finding outside-the-box ways to penetrate security defenses, even when AI is used to bolster them. And those attackers can employ AI in offensive attacks of their own.

Grobman believes that including human curation — someone who can take the results of AI analysis and think more strategically about how to spot cyber criminals — is a necessary part of the equation.

“We strongly believe that the future will not see human beings eclipsed by machines,” he said in a recent blog post. “As long as we have a shortage of human talent in the critical field of cybersecurity, we must rely on technologies such as machine learning to amplify the capabilities of the humans we have.”

The machines are coming, and they could benefit security technologists and cybercriminals alike, escalating the years-old cat-and-mouse game in computer security. I interviewed Grobman recently, and the topics we discussed are sure to arise at the Black Hat and Defcon security conferences coming up in Las Vegas.

Here’s an edited transcript of our interview.

Above: Cybersecurity is getting harder. (Image credit: McAfee)

VentureBeat: Your topic is a general one, but it seems interesting. Was there any particular impetus for bringing up this notion of teaming humans and machines?

Steve Grobman: It’s one of our observations that a lot of people in the industry are positioning some of the newer technologies, like AI, as replacing humans. But one of the things we see that’s unique in cybersecurity is, given that there’s a human on the other side as the attacker, strictly using technology is not going to be as effective as using technology along with human intellect.

One thing we’re putting a lot of focus into is looking at how we take advantage of the best capabilities technology has to offer, along with the things human beings are uniquely qualified to contribute, primarily gaming out the adversary and understanding things they’ve never seen before. We’re putting all of this together into a model that enables the human to scale far beyond what’s possible with purely manual effort.
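To make that teaming model concrete, here is a minimal sketch of what an alert-triage loop along these lines might look like: a model scores events, handles the confident cases automatically, and routes only the ambiguous middle to scarce human analysts. The alert structure, thresholds, and function names are illustrative assumptions, not McAfee's actual pipeline.

```python
# Hedged sketch of human-machine teaming: the model acts on confident cases
# and reserves analyst time for the novel or ambiguous ones.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    event_id: str
    score: float  # model-estimated probability the event is malicious

def triage(alerts, auto_block_at=0.95, auto_dismiss_at=0.05):
    """Split alerts into machine-handled and human-reviewed buckets."""
    blocked, dismissed, for_analyst = [], [], []
    for a in alerts:
        if a.score >= auto_block_at:
            blocked.append(a)        # machine acts on high-confidence detections
        elif a.score <= auto_dismiss_at:
            dismissed.append(a)      # machine discards obvious noise
        else:
            for_analyst.append(a)    # humans judge the ambiguous middle
    return blocked, dismissed, for_analyst

if __name__ == "__main__":
    stream = [Alert("e1", 0.99), Alert("e2", 0.02), Alert("e3", 0.60)]
    blocked, dismissed, for_analyst = triage(stream)
    print(len(blocked), "auto-blocked,", len(dismissed), "dismissed,",
          len(for_analyst), "sent to analysts")
```

The design point is simply that the thresholds set how much of the workload the machine absorbs, which is one way of "scaling" a limited analyst team.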

VB: Glancing through the report you sponsored this May — you just mentioned that cybersecurity is unique in a way. It’s usually a human trying to attack you.

Grobman: If you think about other areas that are taking advantage of machine learning or AI, very often they just improve over time. A great example is weather forecasting. As we build better predictive models for hurricane forecasting, they’re going to continue to get better over time. With cybersecurity, as our models become effective at detecting threats, bad actors will look for ways to confuse the models. It’s a field we call adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and try either to confuse them (what we call poisoning the models, or machine learning poisoning) or to use a wide range of evasion techniques, essentially looking for ways to circumvent the models.
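As a rough illustration of the evasion side of adversarial machine learning, the sketch below trains a toy detector on synthetic data and then nudges a "malicious" sample against the model's weights until the detector scores it as benign. The data, features, and classifier are assumptions made for the example, not any real detection model.

```python
# Hedged sketch of model evasion: small attacker-controlled feature changes
# push a malicious sample across a linear model's decision boundary.
# Synthetic data only; not any vendor's actual detector.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "benign" (label 0) and "malware" (label 1) samples in a 5-feature space.
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
malicious = rng.normal(loc=2.0, scale=1.0, size=(200, 5))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = malicious[0].copy()
print("before:", clf.predict_proba([sample])[0, 1])  # scored as malicious

# For a linear model, moving against the weight vector lowers the score.
step = -0.5 * clf.coef_[0] / np.linalg.norm(clf.coef_[0])
for _ in range(20):
    if clf.predict_proba([sample])[0, 1] < 0.5:
        break
    sample += step  # incremental, attacker-controlled feature changes

print("after: ", clf.predict_proba([sample])[0, 1])  # slips under the threshold
```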

There are many ways of doing this. One way we’ve looked at a bit is a technique where the attacker forces the defender to recalibrate the model by flooding it with false positives. It’s analogous to a motion sensor over your garage hooked up to your alarm system. Say every day I rode by your garage on a bicycle at 11 PM, intentionally setting off the sensor. After about a month of the alarm going off regularly, you’d get frustrated and make it less sensitive, or just turn it off altogether. Then that gives me the opportunity to break in.

It’s the same in cybersecurity. If models are tuned in such a way that a bad actor can create samples or behavior that look malicious but are actually benign, then after the defender deals with enough false positives, they’ll have to recalibrate the model. They can’t continuously absorb the cost of false positives. Those sorts of techniques are what we’re investigating to try to understand what the next wave of attacks will be, as these new forms of defense grow in volume and acceptance.
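A minimal simulation of that false-positive flooding idea might look like the sketch below, where decoy events wear the defender down into raising the alert threshold until a genuine attack slips underneath it. All scores and thresholds are made-up numbers for illustration.

```python
# Hedged sketch of false-positive flooding: decoys force a threshold
# recalibration that desensitizes the detector to a real attack.

import numpy as np

rng = np.random.default_rng(1)

original_threshold = 0.6                  # alert fires when a score exceeds this
decoys = rng.uniform(0.65, 0.75, 500)     # benign activity crafted to score just above it
real_attack_score = 0.78

print("decoy alerts fired:", int((decoys > original_threshold).sum()))
print("attack caught at old threshold?", real_attack_score > original_threshold)  # True

# Worn down by the noise, the defender recalibrates until the decoys go quiet.
new_threshold = decoys.max() + 0.05
print(f"recalibrated threshold: {new_threshold:.2f}")
print("attack caught at new threshold?", real_attack_score > new_threshold)       # False
```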


VB: What are some things that are predictable here, as far as how this cat-and-mouse game proceeds?

Grobman: One thing that’s predictable — we’ve seen this happen many times before. Whenever there’s a radical new cybersecurity defense technology, it works well at first, but then as soon as it gains acceptance, the incentive for adversaries to evade it grows. A classic example is with detonation sandboxes, which were a very popular and well-hyped technology just a few years ago. At first there wasn’t enough volume to have bad actors work to evade them, but as soon as they grew in popularity and were widely deployed, attackers started creating their malware to, as we call it, “fingerprint” the environment they’re running in. Essentially, if they were running in one of these detonation sandbox appliances, they would have different behavior than if they were running on the victim’s machine. That drove this whole class of attacks aimed at reducing the effectiveness of this technology.
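Conceptually, that fingerprinting behavior boils down to something like the following sketch, which checks a couple of crude, publicly known environment heuristics before deciding how to behave. The specific checks are assumptions chosen for illustration, not any particular malware family's logic.

```python
# Illustrative sketch of sandbox fingerprinting: behave benignly when the
# environment looks like an analysis appliance, misbehave only on a real host.
# The heuristics below are generic, assumed examples.

import os
import multiprocessing

def looks_like_sandbox() -> bool:
    """Crude environment fingerprint: detonation appliances are often
    small, short-lived VMs with few cores and sparse user artifacts."""
    few_cores = multiprocessing.cpu_count() <= 1
    empty_home = len(os.listdir(os.path.expanduser("~"))) < 3
    return few_cores or empty_home

def run():
    if looks_like_sandbox():
        print("benign-looking behavior only")   # the sandbox observes nothing malicious
    else:
        print("payload would execute here")     # the real victim gets the attack

if __name__ == "__main__":
    run()
```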

We see the same thing happening with machine learning and AI. As the field gets more and more acceptance in the defensive part of the cybersecurity landscape, it will create incentives for bad actors to figure out how to evade the new technologies.

Above: Malicious hackers are using AI too. (Image credit: McAfee)

VB: The onset of machine learning and AI has created a lot of new cybersecurity startups. They’re saying they can be more effective at security because they’re using this new technology, and the older companies like McAfee aren’t prepared.

Grobman: That’s one of the misconceptions. McAfee thinks AI and machine learning are extremely powerful. We’re using them across our product lines. If you look at our detection engines, at our attack reconstruction technology, these are all using some of the most advanced machine learning and AI capabilities available in the industry.

The difference between what we’re doing and what some of these other startups are doing is that we’re looking at these models for long-term success. We’re not only looking at their effectiveness; we’re also looking at their resilience to attack. We’re working to choose models that are not only effective but also resilient to evasion and the other countermeasures that will come into play in this field. It’s important that our customers understand this is a very powerful technology, but also that knowing how to use it for long-term success is different from simply using whatever is effective when the technology is first introduced.
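One way to picture that "effective versus resilient" distinction is to score a candidate model on clean test data and again on perturbed data, as in the hedged sketch below. The data, models, and perturbation are illustrative assumptions, not McAfee's evaluation methodology.

```python
# Hedged sketch of robustness-aware model selection: compare accuracy on
# clean test data with accuracy after a crude evasion-style perturbation.
# Synthetic data and an assumed perturbation budget for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (300, 5)), rng.normal(2, 1, (300, 5))])
y = np.array([0] * 300 + [1] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    clean_acc = model.score(X_te, y_te)
    # crude evasion proxy: shift every test sample toward the benign cluster
    perturbed_acc = model.score(X_te - 0.5, y_te)
    print(type(model).__name__,
          f"clean={clean_acc:.2f}", f"perturbed={perturbed_acc:.2f}")
```

A model that holds up better under the perturbed evaluation would be preferred even if its clean accuracy is marginally lower, which is the trade-off Grobman is describing.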

VB: What is the structure you foresee with humans in the loop here? If you have the AI as a line of defense, do you think of the human as someone sitting at a control panel and watching for things that get past the machine?

Grobman: I’d put it this way. It’s going to be an iterative process, where machine…
