Allison Miller knows a lot about risk. The senior vice president of engineering at Bank of America has spent most of her career modelling and mitigating it for companies in the financial and gaming sectors. When we interviewed her at SecTor 2017 last November, she was product manager for security and privacy at Google, where she spent much of her time analyzing the many facets of risk to create safer online experiences.
Cybersecurity experts spend a lot of time getting inside attackers’ heads, she explains, but winning at cybersecurity demands a more holistic view: we must understand not only the threats, but also how users and systems react to them.
Watch Miller’s interview at SecTor 2017:
Miller outlined three kinds of modelling that organizations should use to explore risk more holistically and win the cybersecurity battle.
Threat modelling
This model articulates what might go wrong at each step in a process. By understanding what happens at every stage of a telephone-based customer enrolment, or of an application such as a payment processing system, threat modellers can predict how attackers might subvert each stage. Armed with that information, they can plan an appropriate defence.
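To make the idea concrete, here is a minimal sketch of step-by-step threat modelling for a hypothetical payment flow. The step names and threats are illustrative assumptions, not Miller's examples:

```python
# A minimal sketch of step-by-step threat modelling for a hypothetical
# payment-processing flow. Step names and threats are illustrative only.

PAYMENT_FLOW_THREATS = {
    "collect card details": [
        "form-jacking script skims card numbers",
        "phishing page imitates the checkout form",
    ],
    "authenticate customer": [
        "credential stuffing with breached passwords",
        "social engineering of the support desk to reset credentials",
    ],
    "authorize payment": [
        "tampered request inflates or redirects the amount",
        "replay of a previously captured authorization",
    ],
    "settle funds": [
        "insider alters the settlement account",
    ],
}

def review(flow: dict[str, list[str]]) -> None:
    """Walk the flow stage by stage and list what could go wrong at each one."""
    for step, threats in flow.items():
        print(f"Step: {step}")
        for threat in threats:
            print(f"  - potential threat: {threat}")

if __name__ == "__main__":
    review(PAYMENT_FLOW_THREATS)
```

Even a plain enumeration like this forces the modeller to name a defence for every stage, rather than only the obvious ones.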
Choice modelling
In many cases, human responses determine whether such attacks succeed. Social engineering is a good example: an employee who responds inappropriately to a pretexting phone call can divulge valuable information. Similarly, system users making the wrong choices could infect their network.
Understanding how users make choices when interacting with software can help us to influence user decisions from within our interfaces. How we handle software security warnings is a good example, and Miller’s work on Google’s Safe Browsing initiative illustrates what can be done with what she calls ‘opinionated design’.
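As an illustration of opinionated design, here is a minimal sketch of a warning prompt in which the safe choice is the default and the risky path demands deliberate effort. The wording and flow are assumptions made for illustration, not the actual Safe Browsing interface:

```python
# A minimal sketch of 'opinionated design' for a security warning: the safe
# choice is the default, and the risky path requires deliberate extra effort.
# The wording and flow are illustrative, not Google's Safe Browsing UI.

def warn_and_choose(site: str) -> str:
    print(f"Warning: {site} has been reported as deceptive.")
    print("[Enter] Back to safety (recommended)")
    print("Type 'proceed anyway' to visit the site despite the risk.")
    answer = input("> ").strip().lower()
    # Anything short of the exact opt-in phrase falls back to the safe action.
    return "visit" if answer == "proceed anyway" else "go_back"

if __name__ == "__main__":
    action = warn_and_choose("example-bad-site.test")
    print("Chosen action:", action)
```

The design choice is the point: the path of least resistance leads to the safe outcome, so a distracted user who just hits Enter makes the right decision by default.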
Behaviour modelling
This is perhaps the most dynamic area of modelling open to cybersecurity experts today. It models how to react to system events as they occur. How do we tell, in milliseconds, whether a payment request is fraudulent? How can we correlate seemingly innocuous behaviour at different points on the network to spot an emergent threat before it does damage?
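A toy example of this kind of real-time scoring: the sketch below flags a payment that deviates sharply from an account's recent history. The window size, threshold and feature are illustrative assumptions, not a production fraud model:

```python
# A minimal sketch of millisecond-scale behaviour scoring: flag a payment
# when it deviates sharply from the account's recent history. Window size
# and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class PaymentScorer:
    def __init__(self, window: int = 50, threshold: float = 4.0):
        self.history = deque(maxlen=window)   # recent amounts for this account
        self.threshold = threshold            # z-score above which we flag

    def score(self, amount: float) -> bool:
        """Return True if the payment looks anomalous against recent history."""
        suspicious = False
        if len(self.history) >= 10:           # need a baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(amount - mu) / sigma > self.threshold:
                suspicious = True
        self.history.append(amount)
        return suspicious

if __name__ == "__main__":
    scorer = PaymentScorer()
    for amt in [20.0, 25.0, 19.5, 22.0] * 5 + [900.0]:
        if scorer.score(amt):
            print(f"Flagged payment of {amt} for review")
```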
This is an area where machine learning is gaining traction. Companies are already using machine learning to spot malicious software behaviour in systems, relying on statistical models rather than simple signature or heuristic scanning. Others are modelling user activities to detect suspicious logins, while some are employing it to analyze fraudulent financial activity.
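As a small illustration of the statistical approach, the sketch below trains scikit-learn's IsolationForest on synthetic login features and flags an out-of-pattern login; the features and data are invented for the example:

```python
# A minimal sketch of statistical (rather than signature-based) detection:
# an IsolationForest learns the shape of normal logins and flags outliers.
# Features and data are synthetic assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic login features: [hour of day, failed attempts before success]
normal = np.column_stack([
    rng.normal(13, 2, 500),      # mostly daytime logins
    rng.poisson(0.2, 500),       # almost never any failed attempts
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[3.0, 8.0]])   # 3 a.m. login after 8 failed attempts
print(model.predict(suspicious))      # -1 means the model flags it as anomalous
```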
To use AI properly, companies need a combination of data expertise and domain knowledge. Cybersecurity experts must work with data scientists to identify the telltale data points that inform these machine learning models, and must learn to label examples of good and bad files, transactions and behaviours to support the supervised learning models so common in machine learning training workflows.
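That supervised workflow might look something like the following sketch, in which expert-supplied labels for benign and malicious files train a classifier; the features, distributions and labels are synthetic assumptions:

```python
# A minimal sketch of the supervised-learning workflow described above:
# domain experts label examples as benign (0) or malicious (1), and a
# classifier learns from them. Features and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical file features: [entropy, number of imports, size in KB]
benign = rng.normal([5.0, 120, 400], [0.5, 30, 150], (300, 3))
malicious = rng.normal([7.5, 15, 90], [0.4, 10, 60], (300, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 300 + [1] * 300)   # expert-supplied labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```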
Machine learning isn’t a panacea, and it takes extensive data cleansing and modelling to make the algorithms work properly, along with continuous retraining to accommodate shifting threat profiles. Nevertheless, Miller sees promise beyond the hype.
These models complement each other: the way we guide user choices and our understanding of positive and negative behaviours in systems and processes both tie back to the threats we identify. Used together, these categories of modelling can help us reimagine our view of cybersecurity and what it means to win, she concludes.
Click here to take a deeper dive by watching Allison Miller’s keynote speech at SecTor.