Software Bugs: To Disclose or Not to Disclose

Columnist Ken van Wyk takes on the question of whether to disclose software defects. What's best for the industry? Who gets hurt?
It's the age-old battle of security: to disclose or not to disclose software defects.

The proverbial pendulum of opinion has been swinging back and forth on this issue for decades, and it's not likely to stop any time soon. The issue resurfaced just recently when an ISS employee was prohibited from speaking at a conference about a security vulnerability in Cisco's IOS operating system.

Here's my take on it...

First off, it's not a simple yes or no issue. There are different shades of gray here, folks. At the two extremes we have no disclosure and 'spontaneous disclosure'. Neither of these is worthy of serious consideration in any practical sense, since neither produces any sort of positive result.

The litmus test of positive results that I've used over the years is this: Does publicizing the details of this vulnerability make the problem smaller or bigger? We're talking big picture now.

It's been my experience that not releasing information on a vulnerability invariably results in a larger problem than the one we started with. This is primarily because the people who most need to know about the vulnerability -- the end users and system administrators -- aren't armed with the appropriate information to make informed decisions about when and how to update their systems.

In my book, that is unconscionable.

At the other extreme, it's also been my experience that spontaneous disclosure -- releasing everything about a vulnerability the moment it's discovered -- also results in a larger problem than the one we started with. The end users and system administrators often don't have practical solutions or workarounds (turning off email, for instance, is not an acceptable business solution in almost any case). Similarly, the product vendors are forced to slap together a quick patch that may or may not address the root cause (no pun intended) of the problem. We'll delve into this further in a moment...

So, both of these options are non-starters. If we accept these arguments, then it becomes a question of how we release information and what information we release. That's where my opinion differs from that of a lot of the practitioners out there.

There are a few published and ad hoc processes for responsible disclosure of vulnerability information. My biggest gripe with them is that they don't take into account the software engineering that needs to take place at the vendor level to appropriately address the problem. In particular, most call for a static, predetermined time period between notifying the product vendor and the public release of information about the vulnerability. That model is horribly flawed.

Setting a Deadline

My rationale is as follows. At the top-most level, software security defects fall into two general categories: design flaws and implementation bugs. It's the implementation bugs that we hear the most about in the popular literature. They include buffer overflows, SQL injection, cross-site scripting, and the like. The most common cause of these problems is inadequate filtering of user data inputs.
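To make the "inadequate filtering" point concrete, here's a minimal, hypothetical C sketch of the pattern behind countless buffer overflow advisories (the handle_request() function and its buffer size are invented for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request handler showing inadequate input filtering:
     * strcpy() copies however many bytes the caller supplies, so any
     * input longer than 63 bytes (plus the NUL terminator) overwrites
     * adjacent stack memory. */
    void handle_request(const char *user_input) {
        char buf[64];
        strcpy(buf, user_input);        /* no bounds check: classic buffer overflow */
        printf("handling: %s\n", buf);
    }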

Many, but not all, implementation bugs can be fixed quite simply and easily. A poorly constructed string manipulation function in C, for example, can be made secure in just a line or two of remedial coding.
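Continuing the hypothetical sketch above, that line or two of remedial coding could look like the following; this version assumes the snprintf() approach, though strlcpy() or an explicit length check would work just as well:

    #include <stdio.h>

    /* The same hypothetical handler with a one-line fix: snprintf()
     * never writes more than sizeof(buf) bytes and always
     * NUL-terminates, so oversized input is truncated rather than
     * overflowing the buffer. */
    void handle_request(const char *user_input) {
        char buf[64];
        snprintf(buf, sizeof(buf), "%s", user_input);  /* bounded copy */
        printf("handling: %s\n", buf);
    }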

On the other hand, design flaws can be much more pernicious. The fix to a design flaw by its very definition requires the developer to change the application's design. A design change can have far-reaching ramifications. Think basic software engineering principles here.

To responsibly make a security change to an application's design requires the same degree of engineering scrutiny, testing, etc., that goes into designing the application in the first place, lest even nastier flaws (and perhaps even implementation bugs) appear as a result.

It all comes down to this... Some software defects can be fixed quickly and easily, while others require a great deal of engineering effort to be properly fixed. There is a broad spectrum of effort levels required to fix any particular vulnerability.

So, you see, setting an arbitrary time period for disclosing a vulnerability is not responsible at all.

Instead, the period of time should vary depending on the nature of the vulnerability itself. Forcing an arbitrary time period into the process is like holding the proverbial gun to the product vendor's head, and that can't possibly result in the sorts of patches we all want for our systems.

Even if you work with a product vendor to negotiate an appropriate amount of time for disclosing a vulnerability, the next issue in disclosing responsibly is what information to disclose. Most CERT-like organizations have formats for vulnerability advisories that do a good job here.

The most controversial question here is how far to go in disclosing. For example, is it reasonable to publish an example of how to exploit the vulnerability? Again, I apply my litmus test, and I err on the conservative side: disclosing example exploit code makes the overall problem bigger. Now, I realize that a lot of you are hissing and spitting at me right now, and I can accept that criticism. My opinion is unchanged by it, however.

There are good ways and bad ways of disclosing vulnerability information. If we all share the goal of having secure applications that have had their known defects fixed, then it's in our collective best interest to allow the product vendor the necessary time to properly engineer and test security patches. Otherwise, we're condemned to an existence of getting security patches that have been developed under duress, cause other problems, or just plain old don't work right.

Come to think of it, that is a pretty accurate description of many of the patches that we see from all too many of our software product vendors, and that's no coincidence.
