Just a week or two ago, I heard about a security conference where the organizers inadvertently distributed a virus-infected USB stick to the attendees. Of course, everyone was shocked and amazed, and I'm sure the organizers put the fire out quickly and professionally.
(I won't bother naming/embarrassing the conference because, let's face it, it could have been any of us, right? There but for the grace of…)
When I heard about the infection, I immediately (ok, after I stopped giggling) thought back to similar incidents with tainted commercial software media circa 1990 or so.
I asked myself: how could this happen in 2008?
Seriously, how could this happen in 2008? Was it an act of ignorance?
I think not; these guys are truly good, despite the momentary lapse. Was it an act of hubris? I seriously doubt it. Complacency, perhaps? I just don't know, but I think the situation is worth exploring a little deeper, and worth looking beyond this one unfortunate incident.
If you've ever sat through one of the classes I teach, you'd probably recognize a Keynote slide I use often. The slide contains an old photo of the Tacoma Narrows Bridge collapsing in November 1940 (watch the YouTube video here). The caption I use reads, "We're really bad at learning from history."
We are really bad. I can't think of a single other discipline that is so gosh darned pathetic at learning from its mistakes. Perhaps it's my engineering degree and background. Perhaps it's from growing up around the aviation industry, with a retired 747 pilot for a father. But I've watched other disciplines and seen that those guys study their failures and improve from them.
Isnt that a novel concept?
Think about it for a bit in the context of information security. The world saw a major buffer overflow in a C program on November 2nd, 1988, in the form of the so-called Internet Worm. The buffer overflow was in the Berkeley UNIX finger daemon, and it, along with a couple of other problems, enabled Robert T. Morris's worm program to spread rapidly across the Internet.
Just a few months later, the Communications of the ACM, a highly respected academic journal, published an entire edition devoted to analysis of the worm and its aftermath. I remember working at another university at the time and thinking, "this is great stuff; now we all understand buffer overflows and won't see any more of those in our software."
Boy, was I naïve.
In a more modern context, consider cross-site scripting (or XSS) attacks on web applications, SQL injection, or just about any of the OWASP Top-10 list of web application vulnerabilities (see http://www.owasp.org).
We keep making the same mistakes over and over. How can that be?
Are we too stupid even to come up with new mistakes? I sure hope not.