The global cost of cybercrime has continued to rise during the COVID-19 pandemic: millions of people’s personal data is potentially exposed, and government and infrastructure are actively being targeted. Ongoing technical issues such as noisy data play a role in the problem, but so does human motivation, specifically its impact on behavior.

MeriTalk recently sat down with Dr. Margaret Cunningham, Principal Research Scientist for Human Behavior at Forcepoint X-Labs, to discuss how to frame these problems in order to design balanced intervention strategies that will improve overall organizational resilience and security.

MeriTalk: Can you tell us a little bit about what Forcepoint X-Labs is, and how it leverages experts like you to understand cybersecurity and how humans interact with technology?

Dr. Cunningham: The exciting part about Forcepoint X-Labs, and why I am thrilled to be a part of it, is that our team includes many different types of experts who bring interdisciplinary perspectives to our mission of understanding human behavior to better protect our customers from security threats. We have people with physics backgrounds, as well as various engineering strategists and developers, working alongside traditional security intelligence folks and behavioral analysts. This allows us to look beyond technical possibilities and incorporate what we know about people. We’re constantly tinkering with how to combine different types of data to reveal the context around behavior so we can better protect our customers and create avenues for identifying different threats more quickly.

MeriTalk: Can you give an overview of your research portfolio and strategy behind human and organizational resilience?

Dr. Cunningham: We are almost exclusively looking at human behavior to create context around how people interact with data and corporate systems. This helps us determine whether their actions are safe or whether they pose a security risk. We focus on honing our ability to understand human behavior, both positive and negative, through continuous analysis of behavior in the context of each user’s role and past actions. We use a systemic, or sociotechnical, approach to identifying risky behavior, which lets us spot factors in an organization that influence user behavior but aren’t always visible. In a sense, we trace risky behavior back to its source rather than focusing too narrowly on one behavioral data point. Not all risky behavior is malicious, so we look at what types of mistakes people are making, how people are breaking the rules, and how frequently they’re breaking them. Perhaps security rules are too restrictive and people work around them to maintain their productivity, unintentionally exposing the organization to more risk. When we look at human behaviors as a whole, it starts to paint a picture of the health of an organization across multiple levels, whether that’s supervision and management or resource allocation.

MeriTalk: As cyberattacks have increased during the pandemic, resilience and human error are even more of a concern. Can you discuss how these sorts of stressors and world events compound the problem of cyber threats?

Dr. Cunningham: I think we can all agree that right now people are dealing with a lot of uncertainty. Broadly speaking, most people have experienced a huge, sudden shift in their day-to-day lives and may feel like they’re under pressure all the time. Uncertainty isn’t very comforting for most people. In this heightened emotional state, we tend to make different kinds of decisions than we typically would under normal, comfortable circumstances. We may be making more decisions that aren’t for the best, especially when it comes to security. We can think about it in the framework of “hot” decision-making versus “cold” decision-making. Hot decision-making happens when we’re a little bit agitated and more anxious. This can sometimes cause us to do things that we wouldn’t normally do – perhaps skip a boring but necessary safety step in a process, for example.

This is reflected right now in very successful, ongoing social engineering attacks. Attackers are aware of the types of bad decisions people make when they are agitated by stress and uncertainty. Hackers are good at manipulating and heightening emotions to make people feel like they’re running out of time or at risk of losing something. Social engineering attacks are more successful right now because our worries, anxieties, and divided attention have already primed us for hot decision-making.

We’re also dealing with the fact that we are working in a completely different setting than the one to which we’re accustomed. Maybe you’re not used to using a VPN. Maybe you’re struggling to get your home computer set up to access things that you need to from your organization. Maybe it’s tough to focus with all the distractions that are in your home. Attackers know this.

Before the pandemic, organizations were more confident in their security strategies because they knew the risks and how to guard against them. Most of your employees were probably working on secure, managed devices and connecting to the network from within the literal, physical walls of your building. Then, suddenly, we had this huge new landscape of remote work that IT organizations weren’t quite ready to deal with. We’re all learning a lot right now – which isn’t necessarily bad, but no one is immune to these challenges.

MeriTalk: What’s your advice to help combat these challenges?

Dr. Cunningham: We need to look beyond what’s really obvious and visible, like the big mistakes – the downtime, the compromised accounts. Those things are important to understand, but that’s not where the problem started. There’s usually a cascade of problems that somehow made its way all the way down to the person who mis-clicked, misconfigured, or exposed something. One of my pet peeves is the idea that humans are the weakest link – honestly, we’re not. So much more contributes to observable errors. We need to set our employees up for success, so they aren’t mis-clicking and exposing data due to poorly designed technical systems. To better protect these new remote workforces, we have to provide support, whether that’s offering office stipends, helping people configure their work-from-home setups, or enabling supervisors and managers to motivate their teams through different goal-setting structures and other more human techniques.

What we have to consider is what we can do within our organizations to target human performance. When you don’t have high-performing people, you don’t have a high-performing organization. People actually adapt and make changes fairly well. When we look for it, we can see a lot of good in how people have reacted to these challenging times. But without more deliberate efforts to design and adapt our systems to specifically meet the needs of the humans that use them under trying, novel circumstances, we’re missing a huge step.

MeriTalk: How can agencies establish a strategic plan for identifying individual and systemic risk factors that impact human performance, and design balanced intervention strategies to address identified risks?

Dr. Cunningham: To do this in a meaningful way, you have to start by measuring things. When you start measuring behavior and producing quarterly reports, KPIs, and performance metrics, you are able to define and track success. If you can’t measure these things and how they change over time, you can’t design an intervention strategy. It goes beyond engineering performance metrics: we have to look at the behavior of individuals as a whole, and at how different intervention strategies change behaviors and outcomes.

MeriTalk: What behavioral attributes can agencies leverage as part of behavioral analytics that help paint a picture of when mistakes may happen in order to prevent them before they occur?

Dr. Cunningham: It’s important to recognize that you don’t have to look at individuals on an absolutely granular level, like, “What did Margaret do on her computer today?” There are broader strokes we can explore, where group (rather than individual) behaviors can be extremely meaningful. An organization might be interested in understanding trends in rule breaking, for instance. How has it changed over time? Maybe you have a very restrictive email security system. If you start noticing that 90 percent of your organization uses some sort of cloud-sharing or web email to get around your security restrictions, that’s a huge trend in behavior. This should prompt you to look at your security policies, products, and systems, and perhaps re-examine whether they are actually working for your organization or whether they are making it too difficult for your people to do their jobs. Once you start seeing these types of trends, you can make changes and see whether the trends change as well.
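The kind of group-level trend Dr. Cunningham describes can be made concrete with a small sketch. Everything here is invented for illustration: the event format, the function names, and the 90 percent alert threshold are assumptions, not part of any Forcepoint product.

```python
# Hypothetical sketch: track the monthly share of users who bypass a
# restrictive email control (e.g., via cloud sharing or web email), and
# flag a sustained upward trend that crosses an alert threshold.
from collections import defaultdict

def bypass_share_by_month(events, total_users):
    """events: iterable of (month, user_id, used_workaround) tuples.
    Returns {month: fraction of users who used a workaround}."""
    users_by_month = defaultdict(set)
    for month, user, used_workaround in events:
        if used_workaround:
            users_by_month[month].add(user)
    return {m: len(users) / total_users
            for m, users in sorted(users_by_month.items())}

def is_rising(shares, alert_level=0.9):
    """True if the bypass share climbs every month and ends at or
    above alert_level (the illustrative 90 percent figure)."""
    values = [shares[m] for m in sorted(shares)]
    climbing = all(a < b for a, b in zip(values, values[1:]))
    return climbing and values[-1] >= alert_level

# Toy data: by month three, 9 of 10 users route around the control.
events = (
    [("2020-01", f"u{i}", True) for i in range(5)]
    + [("2020-02", f"u{i}", True) for i in range(7)]
    + [("2020-03", f"u{i}", True) for i in range(9)]
)
shares = bypass_share_by_month(events, total_users=10)
print(shares)             # {'2020-01': 0.5, '2020-02': 0.7, '2020-03': 0.9}
print(is_rising(shares))  # True
```

The point of the sketch is that the signal is aggregated per month and per group, never per keystroke, which mirrors the "broader strokes" framing above.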

MeriTalk: How does human-focused security enable a proactive shift towards risk-adaptive solutions versus reactive solutions?

Dr. Cunningham: When you think about traditional security, you can get so much out of it, and I am a huge advocate for making sure you have your bases covered with tried-and-true products like firewalls. However, when we only use technology-focused solutions, we are missing the rest of the picture when it comes to risk. By using human-focused security, we expand our contextual understanding of what’s happening in our organization and gain the ability to paint a more vivid picture of the risk landscape. This level of context enables organizations to be much more proactive in their response to threats. For example, if you only see a single frame from a film, you don’t know the storyline. It might be an interesting picture, but it isn’t very actionable. Human-focused security promotes an understanding of the full storyline instead of the single frame. It also establishes a better baseline for comparison and for identifying anomalies. When we analyze only device activity or behavior, we have a partial picture. If there’s a shift, a change, or a problem with how people are behaving, we can find it faster, and we’re much more confident about what we’re seeing.

MeriTalk: Can you speak to why organizations and agencies struggle to adopt more proactive strategies? What are the first steps to take in order to shift from reactive to proactive cybersecurity tactics?

Dr. Cunningham: Every organization has competing priorities and different risk thresholds. One of the reasons we see more advanced behavioral analytics technology adopted in areas such as finance and government is that their risk thresholds are much lower than those of other domains. Public sector organizations have already started to shift towards holding more digital assets. A lot of the knowledge we care about now lives online and in the cloud, so people are going to have to transform the way they protect their organizations.

As risk tolerance changes and high-value assets are shifted to different places, we’re going to see a move towards behavioral analytics. The pandemic was a jolt to many organizations, but digital transformation that includes supporting and securing people and data outside of the traditional enterprise perimeter was already the long-term trend. Understanding people and how they behave is a fundamental part of this shift. As we come to value this security approach even more, leveraging specific behaviors that put your organization at risk will start advancing a more human approach to security, driven by exposure and risk tolerance. People are amazing at adapting; technology is far more rigid than people are. We have a lot to gain from better understanding human behavior, especially in moments of great change.

Learn more about how to prioritize behavior analytics in your organization’s cybersecurity strategy.
