Why Are IT Security Pros so Bad?

Have you heard about the security conference where the organizers inadvertently distributed a virus-infected USB stick to the attendees?

We really are idiots. Not you and me, of course, but we. Seriously, we are idiots. The sooner we accept that, the better off we’ll all be. I’ll explain…

Just a week or two ago, I heard about a security conference where the organizers inadvertently distributed a virus-infected USB stick to the attendees. Of course, everyone was shocked and amazed, and I’m sure the organizers put the fire out quickly and professionally.

(I won’t bother naming/embarrassing the conference because, let’s face it, it could have been any of “us,” right? There but for the grace of…)

When I heard about the infection, I immediately—ok, after I stopped giggling—thought back to similar incidents with tainted commercial software media circa 1990 or so.

I asked myself: how could this happen in 2008?

Seriously, how could this happen in 2008? Was it an act of ignorance?

I think not; these guys are truly good, despite the momentary lapse. Was it an act of hubris? I seriously doubt it. Complacency, perhaps? I just don’t know, but I think the situation is worth exploring a little more deeply, and worth looking beyond this one unfortunate incident.

If you’ve ever sat through one of the classes I teach, you’ve probably seen a Keynote slide I use often. The slide contains an old photo of the Tacoma Narrows Bridge collapsing in November 1940. The caption I use reads, “We’re really bad at learning from history.”

We are really bad. I can’t think of a single other discipline that is so gosh darned pathetic at learning from its mistakes. Perhaps it’s my engineering degree and background; perhaps it’s from growing up around the aviation industry, with a retired 747 pilot for a father. But I’ve watched other disciplines, and they study their failures and improve because of them.

Isn’t that a novel concept?

Think about it for a bit in the context of information security. The world saw a major buffer overflow in a C program on November 2nd, 1988, in the form of the so-called “Internet Worm.” The buffer overflow was in the Berkeley UNIX finger daemon, and that hole, along with a couple of other weaknesses, enabled Robert T. Morris’s worm program to spread rapidly across the Internet.
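For anyone who hasn’t run into the bug class, here is a minimal C sketch of the pattern, not the original fingerd source: the function names and request handling below are invented for illustration, though the real daemon did read its request into a fixed-size stack buffer with the now-banished gets().

/* A sketch of the bug class behind the 1988 worm, not the original fingerd
 * source: untrusted input copied into a fixed-size stack buffer with no
 * length check. fingerd's actual hole was a gets() call, which has since
 * been removed from the C standard for exactly this reason. */
#include <stdio.h>
#include <string.h>

#define REQ_BUF_LEN 512

/* Vulnerable pattern: strcpy() copies until it finds a NUL terminator, so a
 * request longer than 512 bytes walks right over the rest of the stack
 * frame, including the saved return address. */
void handle_request_unsafe(const char *request)
{
    char buf[REQ_BUF_LEN];
    strcpy(buf, request);                        /* no bounds check */
    printf("request: %s\n", buf);
}

/* Safer pattern: the copy is told how big the destination is, and the
 * result is always NUL-terminated (over-long input is truncated). */
void handle_request_safe(const char *request)
{
    char buf[REQ_BUF_LEN];
    snprintf(buf, sizeof buf, "%s", request);
    printf("request: %s\n", buf);
}

int main(int argc, char *argv[])
{
    const char *request = (argc > 1) ? argv[1] : "user@example.com";
    handle_request_safe(request);
    return 0;
}

The fix has been understood since 1988: never copy attacker-supplied data without telling the copy how big the destination is.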

Just a few months later, the Communications of the ACM, a highly respected academic journal, published an entire issue devoted to analysis of the worm and its aftermath. I remember working at another university at the time and thinking, “This is great stuff; now we all understand buffer overflows and won’t see any more of them in our software.”

Boy, was I naïve.

In a more modern context, consider cross-site scripting (“XSS”) attacks on web applications, SQL injection, or just about any entry on the OWASP Top 10 list of web application vulnerabilities (see http://www.owasp.org).
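To make “the same mistake” concrete, here is a minimal SQL injection sketch using SQLite’s C API. SQLite is a real library, but the users table, query, and function names below are invented for illustration, and you would need the sqlite3 development package to build it.

/* A sketch of SQL injection and its fix using SQLite's C API; the table,
 * column, and function names here are invented for illustration. */
#include <stdio.h>
#include <sqlite3.h>

/* Vulnerable pattern: pasting untrusted input directly into the SQL text.
 * A username of  x' OR '1'='1  changes the meaning of the query itself. */
int count_user_unsafe(sqlite3 *db, const char *username)
{
    char sql[256];
    snprintf(sql, sizeof sql,
             "SELECT COUNT(*) FROM users WHERE name = '%s';", username);
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Safer pattern: a prepared statement with a bound parameter, so the input
 * is always treated as data, never as SQL. */
int count_user_safe(sqlite3 *db, const char *username, int *count)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
             "SELECT COUNT(*) FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        *count = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return SQLITE_OK;
}

int main(void)
{
    sqlite3 *db = NULL;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;
    sqlite3_exec(db, "CREATE TABLE users (name TEXT);"
                     "INSERT INTO users VALUES ('alice');",
                 NULL, NULL, NULL);

    int count = 0;
    count_user_safe(db, "x' OR '1'='1", &count);  /* injection attempt */
    printf("matching rows: %d\n", count);         /* prints 0 */

    sqlite3_close(db);
    return 0;
}

The parallel with 1988 is hard to miss: in both cases the bug is trusting external input to stay inside the lines we drew for it.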

We keep making the same mistakes over and over. How can that be?

Are we even too stupid to come up with new mistakes? I sure hope not.
