
Does Heartbleed Disprove ‘Open Source is Safer’?


The discovery of the Heartbleed bug sent service providers scrambling to patch their versions of OpenSSL and customers to change their compromised passwords. The effect was so widespread that Heartbleed is widely considered the worst security bug ever to hit the Internet.

As security expert Bruce Schneier wrote, “‘Catastrophic’ is the right word. On the scale of 1 to 10, this is an 11.”

Almost as devastating, however, is the blow Heartbleed has dealt to the image of free and open source software (FOSS). In the self-mythology of FOSS, bugs like Heartbleed aren’t supposed to happen when the source code is freely available and being worked with daily.

Or, as Eric Raymond famously said, “given enough eyeballs, all bugs are shallow.”

Yet, somehow, Heartbleed appears to have existed for over two years before being discovered. It may even have been used by American security agencies in their surveillance of the public.

Tired of FOSS's continual claims of superior security, some Windows and OS X users welcome the idea that Heartbleed has punctured FOSS pretensions. But is that what has happened? To what extent does Heartbleed challenge or reaffirm FOSS's belief that it represents a superior method of software development?

The Original Statement

Raymond made his famous statement in his 1999 book The Cathedral and the Bazaar. A comparison of proprietary and FOSS methods of software development, the book summarizes the beliefs of many FOSS developers – then and now – about why their work habits are supposed to produce higher quality software with fewer bugs.

Implicit in Raymond's claim is not only the idea that peer review can substitute for software testing, but also that no special effort is needed to detect bugs. Simply by going about their business as developers, FOSS project members are likely to notice bugs so that they can be repaired.

This claim has not gone unchallenged. It is a statement of belief rather than the conclusion of a scientific study, and arguably a rationalization of the fact that peer review in FOSS has always been easier to organize than formal software testing. Moreover, in Facts and Fallacies about Software Engineering, Robert L. Glass claims that no correlation exists between the number of bugs reported and the number of reviewers.

Yet despite the claim’s weaknesses, it remains one of FOSS’s major assertions of superiority. Heartbleed seems an exception that at least challenges the widely believed rule, or maybe even overturns it completely.

The Problems with Eyeballs

At first glance, Raymond’s statement seems to survive any challenge from Heartbleed. Unproved or not, the statement is conditional; it is only true if enough eyes are constantly on the code. However, as the idea is examined, the flaws and unstated assumptions start to reveal themselves.

Robin Seggelmann, the OpenSSL developer who has accepted responsibility for Heartbleed, says that both he and a reviewer missed the bug. He concludes that more reviewers are needed to avoid a repetition of the incident: in this case, there were simply not enough eyes.
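For context, the flaw itself was tiny. The TLS heartbeat extension lets a peer send a payload that the other side echoes back, and the vulnerable handler trusted the length field inside the request instead of checking it against the size of the record it had actually received. The following C sketch illustrates that class of error in simplified form; the function name and layout are illustrative, not OpenSSL's actual code:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of a Heartbleed-style flaw (illustrative only,
 * not the real OpenSSL source). The peer's record begins with a
 * two-byte length field describing the payload that follows. */
unsigned char *build_heartbeat_response(const unsigned char *record,
                                        size_t record_len)
{
    unsigned int payload_len = (record[0] << 8) | record[1];
    const unsigned char *payload = record + 2;

    (void)record_len;  /* BUG: the real record length is never consulted. */

    unsigned char *response = malloc(payload_len);
    if (response == NULL)
        return NULL;

    /* If the peer claims a 64 KB payload but sends only a few bytes,
     * this copy reads past the record into adjacent heap memory and
     * echoes it back to the attacker. The fix is a single bounds
     * check: discard the request unless payload_len + 2 <= record_len. */
    memcpy(response, payload, payload_len);
    return response;
}
```

A one-line bounds check was essentially all the fix required, which is precisely why two experienced readers could look straight past its absence.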

Another conclusion that might be drawn from Seggelmann’s account is that depending on developers to review their own work is not a good idea. Unless considerable time passes between the writing of the code and the review, the developers are probably too close to the code to be likely to observe the flaws in it.

However, the weakness of Seggelmann's perspective is that the argument is circular: if Heartbleed went undiscovered for so long, then there must not have been enough eyes on the code. The proof lies entirely in the discovery or the failure to discover, which is not exactly a useful argument.

A more useful analysis has been offered by Theo de Raadt, the founder of OpenBSD and OpenSSH. De Raadt notes that malloc, the standard C memory allocator, was long ago hardened on OpenBSD with countermeasures that would have blunted Heartbleed-type exploitations. However, at the same time, OpenSSL added “a wrapper around malloc & free so that the library will cache memory on its own, and not free it to the protective malloc,” all in the name of improving performance on some systems.

In other words, the potential for a bug was detected and patched, but the patch was bypassed by an engineering decision that favored efficiency over security. Perhaps, too, the wrapper was never examined closely because it was assumed to be trivial and to add nothing new. It had become an established part of the code that nobody was likely to modify. But, whatever the case, de Raadt concludes scathingly, “OpenSSL is not developed by a responsible team.”
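To make de Raadt's criticism concrete, the pattern he describes looks roughly like the sketch below. This is a hedged illustration, not OpenSSL's actual allocator: a wrapper keeps freed buffers on its own free list, so the system malloc never sees the free, and protective measures such as OpenBSD's junk-filling or unmapping of freed pages never get a chance to run:

```c
#include <stdlib.h>

/* Illustrative caching wrapper (not OpenSSL's real code): freed
 * buffers are recycled internally instead of being returned to the
 * protective system allocator. */
struct cached_buf {
    struct cached_buf *next;
    size_t size;
};

static struct cached_buf *freelist;

void *cached_malloc(size_t size)
{
    /* Reuse a cached buffer if one is large enough; its previous
     * contents are handed to the caller untouched. */
    for (struct cached_buf **pp = &freelist; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->size >= size) {
            struct cached_buf *b = *pp;
            *pp = b->next;
            return (unsigned char *)b + sizeof(*b);
        }
    }
    struct cached_buf *b = malloc(sizeof(*b) + size);
    if (b == NULL)
        return NULL;
    b->size = size;
    return (unsigned char *)b + sizeof(*b);
}

void cached_free(void *p)
{
    /* The buffer is never freed to the system and never scrubbed. */
    struct cached_buf *b =
        (struct cached_buf *)((unsigned char *)p - sizeof(struct cached_buf));
    b->next = freelist;
    freelist = b;
}
```

Any secrets left in a recycled buffer stay readable to the next code path that over-reads it, which is exactly the combination that made Heartbleed so damaging.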

Assuming that de Raadt is right, then one take-away for FOSS is that all the eyes in the world cannot be counted on to catch basic design problems.

Taken together, Seggelmann's and de Raadt's comments also suggest that assuming no special effort is needed to discover bugs is a mistake. Perhaps more attention needs to be paid to formal reviews and software testing than FOSS has traditionally managed. The fact that FOSS development often involves remote cooperation does not mean that remote or in-person testing sessions could not be added to many projects' development cycles.

What Heartbleed proves is that FOSS needs to examine the assumptions it has held, unexamined, for years. Greg DeKoenigsberg, a vice president at Eucalyptus Systems, summed up the situation neatly on Facebook: “we don’t put enough eyes in the right places, because we assume [bug-detection] will just happen because of open source pixie dust — and now we’re paying the price for it.”

Redemption by Response

None of these comments are meant to suggest that the entire FOSS development model requires revision. If Heartbleed challenges Raymond’s statement about enough eyes, the response to Heartbleed more than justifies it.

Knowledge of Heartbleed was apparently concealed for several weeks, but once it was announced, FOSS-based projects and sites quickly publicized it. A few hours more, and it was being patched. Individual users, of course, still need to change their passwords after sites apply their patches, but while some of the effects could linger for months, the FOSS response could not have been quicker or more responsible once the discovery was general knowledge.

By contrast, it is almost impossible to imagine a similar response from proprietary vendors. Based on past revelations of bugs and malware, the more likely reaction would have been to keep the problem secret while a patch was written and tested, so that no one could exploit it. Meanwhile, millions of users would have remained exposed for weeks or months without realizing the danger.

Heartbleed is forcing another look at one of FOSS's basic beliefs, but the reaction to it is proving FOSS's ability to respond in a crisis. In the short run, FOSS will face ridicule because of its failure to detect Heartbleed earlier. Yet the challenge to its basic beliefs is already demonstrating the ability of its developers to learn from their mistakes and improve.
