The constant barrage of frightening cyber news has been inescapable: the SolarWinds supply chain attack, the Hafnium Group targeting Exchange vulnerabilities and now a new wave of ransomware exploiting thousands of newly minted hacker back doors. These attacks have already impacted tens of thousands of enterprises and government agencies. They're also forcing security experts to rethink some basic premises and question whether they should continue to pour money into conventional security technology to fight yesterday’s cyber battles.
While the future of cyber threats is always uncertain, what is predictable is unpredictability. Attackers, from nation-states to script kiddies, have become endlessly innovative, constantly creating new malware variants to evade security systems that require prior knowledge of a threat in order to stop it. Whether they rely on old-school signatures and heuristics, global cloud analytics or futuristic-sounding AI and ML, most of today’s security tools are helpless against unknown attacks. These same tools failed to detect SolarWinds for over 15 months – an eternity of hacker dwell time.
(Re)enter zero trust
The post-SolarWinds angst has caused a revival of a powerful security concept: zero trust. Imagine if we could stop chasing threats, and instead ensure that people, devices, networks and applications only did the right thing, regardless of vulnerabilities, malware or zero-day threats. Experts from the NSA, NIST and even Google have all published new recommendations promoting zero trust security.
While it’s a great idea, in practice zero trust has been difficult to achieve and maintain by manual means. But before we throw the baby out with the (untrusted) bathwater, let’s review some of the basic principles of zero trust:
1. Never trust – always verify
Zero trust seems like a bit of an oxymoron. Business requires establishing trust, and if you can’t trust anything, you should probably close the doors. In fact, an underlying purpose of security is to enable trust by reducing risks.
Perhaps a more apt phrase is the one parents typically apply to teenagers – “trust but verify.” In fact, NIST (SP 800-207) uses a very similar definition: “Zero trust security is based on the premise that trust is never granted implicitly but must be continually evaluated.”
2. The attackers are already inside
While a bit alarming, zero trust tells us we can’t assume that our networks are clean, or that we can reliably keep the bad guys out. New guidelines from the NSA state that zero trust “assumes that a breach is inevitable or has likely already occurred.”
We also must accept that perimeters are disappearing or porous, and that perimeter-based security will inevitably be bypassed. In fact, Google states this bluntly: “You should reject the perimeter model and embrace a philosophy of zero trust.”
3. Don’t just chase ‘bad’ – ensure ‘good’
Most security tools focus on identifying and stopping the ‘bad stuff.’ This has led to a never-ending saga of threat chasing: creating signatures of known malware and trying to react when the next variant strikes. As SolarWinds demonstrated, we’re not catching up, and security that requires prior knowledge will always be too little, too late.
Shifting to a positive security model inherently makes sense. Rather than trying to stop everything ‘bad,’ zero trust focuses on making sure that code and applications only do the right thing, and anything out of the ordinary should be detected and stopped.
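The difference between the two models can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's implementation; the file names and sets are hypothetical:

```python
# Negative model: block only what we have seen before (prior knowledge).
KNOWN_BAD = {"evil.exe", "trojan_v1.bin"}

# Positive model: allow only what the application is supposed to run.
KNOWN_GOOD = {"app.exe", "updater.exe"}

def denylist_verdict(binary: str) -> str:
    # Misses anything not previously cataloged, e.g. a renamed variant.
    return "block" if binary in KNOWN_BAD else "allow"

def allowlist_verdict(binary: str) -> str:
    # Anything outside the known-good set is stopped, even a zero-day.
    return "block" if binary not in KNOWN_GOOD else "allow"
```

A brand-new variant such as `trojan_v2.bin` slips past the denylist (no signature exists yet) but is still blocked by the allowlist, because it was never part of the expected behavior.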
4. Zero trust must go deep and be automated
Many people have a limited view of where zero trust applies, focusing solely on who can access what resources on which devices. But recent attacks have demonstrated that the security battleground has moved into applications and is being fought in runtime – when code is executing.
Gartner, in its Market Guide for Cloud Workload Protection Platforms, recognizes this gap and recommends that organizations “at runtime, replace antivirus-centric strategies with ‘zero-trust execution.’”
Any application workload has many moving parts – hundreds of files, thousands of processes and millions of memory calls that define the correct execution of application code. To make zero trust practical it must be automated.
Newer solutions tackle this problem by automatically mapping acceptable application execution across files, scripts, directories, libraries, inputs, processes and even memory usage. Armed with a map of what is supposed to happen, these systems can monitor applications at runtime, instantly spot any deviation, which is a clear sign of attack, and respond with automatic blocking actions.
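The map-and-monitor loop described above can be sketched as follows. The event categories, names and verdicts here are illustrative assumptions, not a real product's API or telemetry format:

```python
# Hypothetical map of acceptable runtime behavior for one workload,
# built ahead of time by observing the application's normal execution.
EXPECTED = {
    "processes": {"nginx", "gunicorn", "python3"},
    "files": {"/etc/app/config.yml", "/var/log/app.log"},
    "libraries": {"libssl.so.3", "libc.so.6"},
}

def check_event(kind: str, name: str) -> str:
    """Compare one observed runtime event against the map.

    Anything not on the map is a deviation from expected execution
    and is treated as a sign of attack, so the verdict is 'block'.
    """
    if name in EXPECTED.get(kind, set()):
        return "allow"
    return "block"
```

In practice such a system would watch thousands of these events per second; the key design choice is that the verdict depends only on the pre-built map of ‘good,’ never on recognizing a known-bad signature.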
‘We’ve never seen this before’ shouldn’t be an excuse
At recent Senate hearings on the SolarWinds attack, a common refrain was “we’ve never seen this before,” implying that security experts should be excused for being slow to react. However, this mentality is an indictment of most of our current security technology, and a key reason why these attacks are succeeding. To move beyond endless, reactive threat chasing, we need to move to a proactive, positive security model. If we can apply zero trust successfully, we have a chance of changing our current losing security equation.