After years of theorizing about “proactive cyber defense,” talk is finally turning into action. Let’s not delay. As cyberattacks become frighteningly commonplace (the Equifax breach of 143 million consumers’ records is just the latest example), we have to stop taking it on the chin from bad actors who find cyber intrusion and electronic warfare relatively simple and largely free of consequence.
The public conversation about cybersecurity has to change. We can no longer afford to reactively patch vulnerabilities or plug leaks. We have to figure out what adversaries are actually doing, and use deterrents to put up the kind of complex obstacles that make cybercrime less attractive.
We can learn more about bad actors by turning outward the same visibility techniques many organizations are starting to use on their own IT networks. Why not use these same tools to get a better handle on how cybercriminals work?
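As a concrete illustration of the kind of visibility technique meant here, consider simple statistical baselining over connection logs: the same method an IT shop uses to spot a misbehaving internal host can surface an attacker’s beaconing. This is a minimal sketch with invented data and thresholds, not a reference to any specific tool the article discusses.

```python
# Minimal sketch of network-visibility baselining (illustrative only).
# Hosts whose outbound connection counts sit far above the fleet average
# are flagged for a closer look -- the same math works whether you are
# watching your own network or studying attacker infrastructure.
from collections import Counter
from statistics import mean, stdev

def flag_anomalous_hosts(connections, threshold=3.0):
    """Return hosts more than `threshold` standard deviations above
    the average outbound connection count."""
    counts = Counter(src for src, _dst in connections)
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [host for host, n in counts.items() if (n - mu) / sigma > threshold]

# Illustrative log: one host beaconing far more often than its 30 peers.
log = [("10.0.0.5", "c2.example") for _ in range(200)] + \
      [(f"10.0.0.{i}", "web.example") for i in range(10, 40) for _ in range(3)]
print(flag_anomalous_hosts(log))  # -> ['10.0.0.5']
```

A real deployment would baseline per-host history rather than a fleet snapshot, but the principle is the same: visibility first, then action.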
AI, the human factor and why trust is important
At a recent forum for government IT professionals, artificial intelligence (AI) was touted as the go-to technology for understanding potential threats in nearly every theater of conflict, from cybercrime to electronic warfare.
Ardisson Lyons of the Defense Intelligence Agency talked about the Intelligence Community Information Technology Enterprise (ICITE) – a single, standards-based IT architecture across intelligence agencies that emphasizes a move to store user data from all agencies in the cloud. Using standardized cloud-based platforms can improve big data analysis and consumption, Lyons said, while an “Intelligent Simulation Center” can help immerse decision-makers in the information in a dynamic way.
Jason Matheny of the Intelligence Advanced Research Projects Activity emphasized that AI allows us to get inside the adversary’s decision-making. With that capability, we can better understand real and potential threats, see how the adversary reacts and then provide courses of action based on data.
At the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence, a program called CAUSE (Cyber-attack Automated Unconventional Sensor Environment) aims to anticipate cyberattacks before conventional indicators even appear, according to IARPA Deputy Director Stacey Dixon.
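CAUSE’s actual models are not public, so the following is purely illustrative of the general idea: fusing weak “unconventional” external signals (forum chatter, lookalike-domain registrations, scanning activity) into a single warning score before anything shows up on the defended network. The signal names and weights are invented for the sketch.

```python
# Illustrative only -- not CAUSE's real method. Shows the general shape of
# pre-indicator warning: several weak external signals, each normalized to
# [0, 1], combined into one score a defender can set an alert threshold on.
def warning_score(signals, weights):
    """Weighted sum of clipped signal values, capped at 1.0."""
    score = sum(weights[name] * min(max(value, 0.0), 1.0)
                for name, value in signals.items())
    return min(score, 1.0)

# Hypothetical signal weights (assumptions, not published figures).
WEIGHTS = {"forum_chatter": 0.4, "lookalike_domains": 0.35, "port_scans": 0.25}

quiet = {"forum_chatter": 0.1, "lookalike_domains": 0.0, "port_scans": 0.2}
noisy = {"forum_chatter": 0.9, "lookalike_domains": 0.8, "port_scans": 0.7}

print(warning_score(quiet, WEIGHTS))   # low score: no action
print(warning_score(noisy, WEIGHTS))   # high score: investigate early
```

The point of such scoring is exactly the article’s: acting on a high score before an intrusion is the proactive posture, rather than responding after the breach.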
The goal with all of this AI technology is deterrence, but deterrence is tricky when it comes to cyber warfare. Bad actors feel as though they can get away with things and hide in ways that are not possible in the physical world.
One key to effective deterrence is to become more vocal about attribution. Political policy decisions that are too conservative about public attribution can have a negative effect on deterrence.
Underlying all of this technology, therefore, is the need for better trust and information sharing among experts across government agencies. As DIA’s Lyons noted, there is clear value in keeping human analysts involved in the AI process – understanding why things are happening requires a depth and craft that technology alone lacks.
For a proactive defense to work, organizations cannot sit back and operate in silos. We must build trust among agencies, across government and with non-government partners, so they can share information and put their best minds to work analyzing it from all sources. We cannot hope to win against enemies with global reach if our solutions are isolated from one another.
Stop fighting after the fact
To succeed in the fight against cybercrime, our solutions must be proactive, not reactive. After-the-fact approaches to cybersecurity are hard to implement and expensive. Unlike the fabled Little Dutch Boy, who saved his town with a single finger in the dike, we will soon run out of fingers, and further reaction will become practically impossible. Proactive defense is key to managing operational risk and making cyberattacks more costly for attackers.
In the end, it’s about deterrents, and I firmly believe in their value. A relentless enemy must be met with a proactive defense that introduces deterrents to make that enemy think twice about what it is doing.
Without deterrents, bad actors will do whatever they want. If you introduce complexity into their fight with more deterrents, you won’t stop attacks altogether, but you will slow them down. And in the process, you’ll learn more about the threats out there, waiting for an opportunity to strike unsuspecting communities.