Saturday, July 13, 2024

Cybersecurity Best Practice: Guilty Until Proven Innocent


Perhaps guilty until proven innocent isn’t so bad an idea after all.

It’s often been said that the “lawlessness” of the Internet resembles that of the American “Wild West.” I have always cringed when hearing that, because it’s just too much of a stretch for me, but there is at least one aspect of it worth considering when it comes to securing our data systems. In this case, guilty until proven innocent may actually have some merit after all.

No, I’m not talking about applying this notion to humans. Most of you reading this live in places where the opposite is applied in our court systems: innocent until proven guilty. And, it’s likely we all value that principle very highly for all the right reasons. But that’s not what I’m talking about.

I’m talking about applying the opposite principle to shoring up our information security practices. We already see it applied in some places, such as in firewall rule sets, but we need to take it further.

In government circles, the term information assurance is used often. We’ve also heard terms like trustworthy computing, reliable systems, and others. For these to be successful, they must all have at their core a common principle of guilty until proven innocent. I’ll explain with a few examples as well as counter-examples.

• Firewall rules.
As I said above, this is a great example of applying the guilty until proven innocent principle. Most firewalls today are configured to only allow authorized services, and to disallow all others. I’m sure this is mostly a result of the security community rallying during the 1990s to enforce this sort of positive validation model, sometimes to the dismay of the application developers who wanted to run their software over network services that were being blocked. Unfortunately, we haven’t been consistent at applying the same principle in enough other areas.
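The default-deny model described above can be sketched as a simple rule evaluator. This is a hypothetical illustration of the idea, not any particular firewall's configuration syntax:

```python
# Minimal sketch of a default-deny ("guilty until proven innocent") filter.
# The allowed services here are hypothetical examples.
ALLOWED_SERVICES = {
    ("tcp", 443),  # HTTPS
    ("tcp", 22),   # SSH
}

def is_permitted(protocol: str, port: int) -> bool:
    """Permit only explicitly authorized services; everything else is denied."""
    return (protocol, port) in ALLOWED_SERVICES
```

Note the design choice: no rule is needed to block unwanted traffic, because anything not explicitly authorized is denied by default. A negative model would instead enumerate known-bad traffic and miss everything it hadn't anticipated.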

• Input validation.
Input validation is a cornerstone of secure software; done poorly, it is the root cause of many of today’s most prevalent security defects, including buffer overflows, cross-site scripting (“XSS”), and injection flaws such as SQL injection. Input validation is most often the purview of the application itself, and not (just) the firewall protecting it. For input validation to be as robust as our firewall rules, we have to code the guilty-until-proven-innocent principle into our applications, and that’s where we all too often fail. That is, our software must use positive validation on all of its inputs, throwing away anything that can’t be validated, on the presumption of guilt. That, I’ve found, is a concept few software developers take to easily, but it is vital. We’re seeing Web application firewalls (“WAFs”) try to ease this burden by doing application-layer input validation in a separate security component (installed and configured by the security team), but that’s not a long-term solution, for all sorts of good reasons.
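Positive input validation can be as simple as defining exactly what good input looks like and rejecting everything else. Here is a minimal sketch; the field names and patterns are hypothetical examples, not a complete validation library:

```python
import re

# Allowlist patterns: each one describes the ONLY acceptable form of a field.
# Anything that fails to match is rejected -- guilty until proven innocent.
PATTERNS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "zip_code": re.compile(r"\d{5}"),
}

def validate(field: str, value: str) -> str:
    """Return the value only if it fully matches the allowlist pattern."""
    pattern = PATTERNS.get(field)
    if pattern is None or not pattern.fullmatch(value):
        raise ValueError(f"rejected input for field {field!r}")
    return value
```

Contrast this with the negative model many applications actually use: stripping out `<script>` tags or quoting characters they happen to know are dangerous, which fails the moment an attacker finds an encoding the blocklist didn't anticipate.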

• Anti-virus scanners.
Here’s a classic failure of the principle. From the earliest days of the anti-virus scanner, a negative validation model has been used. That is, scanned software is assumed to be safe unless it matches a signature of a known bad thing. We’re living with this failure every day in combating email-borne malware, viruses, etc. A fairly recent trend has been the so-called “whitelisting” of presumably safe applications, but that too is not a long-term solution.
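The "whitelisting" approach mentioned above inverts the anti-virus model: rather than matching signatures of known-bad software, it permits only software known to be good. A minimal sketch of the idea, using hash digests as the identity check (the approved digest below is the well-known SHA-256 of an empty file, used purely for demonstration):

```python
import hashlib

# Allowlist execution model: only binaries whose SHA-256 digest appears on
# an approved list may run. The digest below is a placeholder for demo use.
APPROVED_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(binary: bytes) -> bool:
    """Allow execution only if the binary's hash is on the approved list."""
    return hashlib.sha256(binary).hexdigest() in APPROVED_DIGESTS
```

Under this model, novel malware is blocked by default because it was never approved, whereas a signature scanner can only catch what it has already seen.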

• Application vulnerability scanners.
Much like their anti-virus counterparts, vulnerability scanners, including application scanners, use a negative model to detect bad things on our systems and in our applications. Although this approach no doubt finds plenty of “low hanging fruit,” it is superficial and insufficient in the long term.

• Intrusion detection/prevention systems.
Same old same old here. The vast majority of deployed IDS and IPS installations operate on a negative model that is doomed to failure and at the worst possible moments—when novel attacks are discovered and launched on our unprepared systems.

So where do we go from here? Well, let’s revisit our principle: guilty until proven innocent. It is a design principle that must be considered against every single aspect of our data processing systems in order to be effective. That means we can’t stop at just the firewall.

In my Web application security classes, I always tell my students they must never trust anything coming in an HTTP request (that is, from the user). On day one, they giggle or just kind of scratch their heads in confusion when I say this, but by the time they’ve seen the likes of cross-site scripting in action, they begin to believe. By the time they leave the class, the number one comment I get from them is how “eye opening” this notion was.

We’ve got to inculcate a culture of guilty until proven innocent in our digital world throughout our organizations, from our information security staff through our software developers. Only then will our software—whether in the form of operating system kernel code or web applications—be resilient and worthy of trusting our businesses to.
