The cybersecurity industry is on fire. As attackers innovate, defenders are relying on new technologies to give them an advantage. What technologies are most promising in the battle to protect our networks and data? SecTor asked several of its experts.
People were most fired up about artificial intelligence and its subset, machine learning. Machine learning uses algorithms that enable computers to learn from data without being explicitly programmed to follow specific rules. This makes them more adaptable to data inputs over time. It also makes them more consistent and often more accurate than human operators.
Machine learning garners attention
We are seeing AI take over mundane tasks in many areas, from interpreting commercial loan agreements through to sorting cucumbers. Now, it shows promise for cybersecurity practitioners, too.
SecTor’s experts identified several areas where AI will be important in the next few years. Dave Millier, CEO of Uzado, says it excels at assisting with incident response.
AI can help to mine and synthesize large amounts of data to help find anomalies, he says, making it a good complement for existing technology such as security incident and event management (SIEM) systems.
Finding a needle in a haystack is a definite use case for AI, which is good at automating mundane tasks. There comes a point at which simply throwing more low-level analysts into a security operations centre provides diminishing returns.
Automating the task of combing through thousands of events looking for potential red flags can help to free up human skills for more detailed analysis. That human component is something that Millier still sees as important.
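To make that idea concrete, here is a toy sketch of the kind of sifting an AI-assisted tool automates: flagging hosts whose event volume deviates sharply from the rest of the fleet. The host names, counts and threshold are illustrative, and real SIEM tooling uses far richer models than this robust z-score.

```python
import statistics

def flag_anomalies(event_counts, threshold=5.0):
    """Flag hosts whose security-event volume is a statistical outlier.

    event_counts: dict mapping host name -> number of events in a time window.
    Uses the median and median absolute deviation (MAD), which are robust
    to the very outliers we are trying to detect.
    """
    counts = sorted(event_counts.values())
    med = statistics.median(counts)
    mad = statistics.median(abs(n - med) for n in counts) or 1.0  # avoid /0
    return [host for host, n in event_counts.items()
            if (n - med) / mad > threshold]

counts = {"web-01": 120, "web-02": 115, "web-03": 118,
          "db-01": 122, "mail-01": 900}
print(flag_anomalies(counts))  # only mail-01 stands out from the baseline
```

A human analyst then investigates the handful of flagged hosts rather than combing through every event by hand.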
People can still understand context and make informed decisions, but AI can help them to do something that Toni Gidwani, director of research operations at ThreatConnect, says is crucial: separating the noise from the signal.
Some hope that AI will take us beyond simple event sifting. Iain Paterson, managing director of Cycura, envisions an AI-enabled future in which machine learning, automation and orchestration combine to make network defences more automatic.
We are already seeing this in some areas such as identity and access management, where systems use machine learning algorithms to make automatic decisions about who gets access to what, based on parameters such as role, location and login time. In other use cases, organizations are employing machine learning algorithms to detect suspicious network behaviour. CERN is a good example.
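A minimal sketch of the access-management idea: learn each user's baseline from past logins, then score new logins against it. The class, weights and example users below are hypothetical, and production systems learn these signals statistically rather than with hand-set rules.

```python
from collections import defaultdict

class LoginBaseline:
    """Learn each user's typical login hours and countries from history,
    then score a new login by how far it falls outside that baseline."""

    def __init__(self):
        self.hours = defaultdict(set)      # user -> observed login hours
        self.countries = defaultdict(set)  # user -> observed countries

    def observe(self, user, hour, country):
        """Record one historical login for this user."""
        self.hours[user].add(hour)
        self.countries[user].add(country)

    def risk(self, user, hour, country):
        """0 = matches baseline; higher scores can trigger step-up checks."""
        score = 0
        if hour not in self.hours[user]:
            score += 1  # unusual time of day
        if country not in self.countries[user]:
            score += 2  # an unseen location weighs more heavily
        return score

baseline = LoginBaseline()
baseline.observe("alice", 9, "CA")
baseline.observe("alice", 10, "CA")
print(baseline.risk("alice", 9, "CA"))  # familiar pattern, low risk
print(baseline.risk("alice", 3, "RU"))  # odd hour and new country, high risk
```

A system like this might grant access silently at low scores and demand extra verification, or deny access outright, at high ones.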
It’s probably good that some organizations are already using AI for network defence, because the technology doesn’t just present an opportunity for white hats. Black hats also stand to gain, and could already be innovating with it.
Researchers have already shown AI algorithms to be better than humans at writing successful phishing tweets, while DARPA’s 2016 Cyber Grand Challenge pitted AI algorithms against each other, fixing security holes in their own machines while exploiting others’.
Keynote speaker Bruce Schneier pointed out that defenders can’t wait around for this technology. If malicious actors are likely to increase the rate and severity of their attacks with AI, then we must get involved with the technology, or get left behind.
This reminds us of the US Manhattan Project to develop the atom bomb. It began because Einstein warned that if the US didn’t harness atomic power and weaponize it, someone else would do it first. That presented the US with an existential problem. Logically, it had no choice but to pursue it.
The results of the DARPA Grand Challenge were promising but showed that automated AI-based attack and defence both need work. Defence bots made mistakes, in some cases breaking their own systems as they tried to patch them.
We are still in the early stages, though, and initial developments in cybersecurity AI can quickly mature. DARPA also ran a Grand Challenge in 2004 in which it invited researchers to test self-driving cars. Most of them crashed, failed, or caught fire. Look at how far we’ve come, and then think about how AI’s initial forays into automated attack and defence could evolve.
Back to basics
AI dominated discussions of exciting cybersecurity technologies among SecTor’s experts, but not all of them were focused on it. SecTor co-founder Bruce Cowper pointed to another, less sexy but immediately useful technology: multifactor authentication.
Systems using MFA require more than just your password to protect your accounts. They might ask for a code sent via a text message to your phone (or, more securely, a mobile authenticator app). They might use a hardware token, a digital certificate, a biometric signal, or a combination of all these, depending on the sensitivity of the information involved.
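The one-time codes generated by those authenticator apps are typically time-based one-time passwords (TOTP, RFC 6238): a shared secret and the current time are fed through HMAC to produce a short code that expires every 30 seconds. A compact sketch using only the Python standard library (the secret below is the RFC test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)            # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in base32), at time t=59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # "287082"
```

Because the code changes every 30 seconds, a stolen password alone is no longer enough to hijack the account.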
MFA has been around conceptually for years but has only gained traction recently. This is partly because the devices used for MFA have become more functional and available, and partly because account hijacking has become such a problem. Smartphones are now capable enough to be part of an MFA workflow, and present a relatively easy way to shore up an inherently insecure password-only mechanism.
All of this goes to show that cybersecurity technology doesn’t have to be sexy to be successful. Sometimes, the most useful techniques are those that the industry has known about for years but hasn’t been ready for.