How long does it take to spot a bug in an operating system? The answer, it seems, can be as long as 33 years. At least, that was the case with a bug in the yacc parser generator, originally developed at AT&T in the 1970s and recently uncovered by OpenBSD developer Otto Moerbeek.
"Funny thing is that I traced this back to Sixth Edition Unix, released in 1975," Moerbeek says.
This makes the 25-year-old BSD bug discovered a couple of months ago by Marc Balmer, another OpenBSD developer, seem comparatively young. That bug, which Balmer found while investigating mysterious Samba crashes, can be traced all the way back to 4.2BSD, released in 1983, he says.
This illustrates rather nicely that every operating system, however venerable, still has plenty of bugs waiting to be found: every non-trivial body of code is bound to. No matter how many eyes review the code, many of these bugs will not be spotted until the code is examined in the context of its interaction with another piece of code. All this is a fancy way of saying that Harry's not a problem by himself, and Sally's not a problem by herself. It's only when Harry meets Sally that there's really a problem. And if Sally hasn't been born yet, how is anyone to spot that anything will be amiss?
This has some obvious implications for security. No matter how tried and tested an operating system, no matter how open the source code, no matter how well it is reviewed, we can be sure it will always have a few critical vulnerabilities that haven't yet popped their nasty little heads above the parapet. So Microsoft's Windows Server 2008 code has been tried and tested in Vista (with which it shares a codebase) for 18 months? It's a start, but there will still be bugs in there waiting to be found in 18 years. And in 33.
When it comes to operating systems, the best advice is probably: Trust none. Suspect them all. And patch immediately.
But even if we reviewed all the code running on a machine, OS and applications alike, and found every single bug, would it really help? Whatever the operating system, it still has to run on something. Independent security researcher Kris Kaspersky reckons flaws in Intel's chips, known as errata, can be huge security vulnerabilities in and of themselves. He says the Intel Itanium, for example, has more than 230 known bugs. He plans to demonstrate some attacks at October's Hack In The Box conference in Malaysia in a presentation called "Remote Code Execution Through Intel CPU Bugs."
"Some bugs 'just' crash the system (under quite rare conditions) while the others give the attackers full control over the machine," he says in his presentation abstract. "In other words, Intel CPUs have exploitable bugs which are vulnerable to both local and remote attacks which works against any OS regardless of the patches applied or the applications which are running."
Kaspersky may have developed his proof-of-concept code to work on Intel chips because they are ubiquitous, but you can bet your bottom dollar there are plenty of exploitable errata on any other chip you'd care to mention. It's just that since those chips aren't as widely used as Intel's, Kaspersky (or anyone else) hasn't got around to writing exploits for them. Yet.
So, hats off to OpenBSD developers Otto Moerbeek and Marc Balmer, then, for getting to the bottom of the two bugs many years after the seeds for them were sown. Branding the OpenBSD crowd "a bunch of masturbating monkeys" for concentrating too much on security bugs, as Linus Torvalds reportedly did last week, does seem a trifle harsh.
Paul Rubens is an IT consultant and journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.
This article was first published on ServerWatch.com.