Has Julian Haight Gone Straight?

One of the most famous, or infamous, groups that try to “block” spam-sending
servers is SpamCop.net, directed by Julian Haight. Because SpamCop was
purchased last year by IronPort Systems, a maker of e-mail server appliances
and antispam solutions, I thought it would be interesting to see whether any of
the controversies that swirled around Haight in the past have been worked out.

Haight agreed to an interview at a coffeeshop in Lake Forest Park, Wash.,
a suburb of Seattle near his home. His comments shed light on the potential of
— as well as the problems with — the “blocklist” approach to
stopping spam.

The SpamCop Argument In a Nutshell

SpamCop relies on a network of end users and automated programs that send in
complaints. These reports indicate that e-mail considered to be spam is coming
from certain Internet Protocol addresses. SpamCop computes a score for each
complaint and uses the scores to post an “IP address blocklist.” This list is
checked in real time by some corporate mail administrators to determine whether
or not to accept e-mail from certain senders.
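
To make the mechanics concrete, here’s a minimal sketch of the kind of real-time lookup a mail server performs against a DNS-based blocklist such as SpamCop’s (its published zone is bl.spamcop.net). The lookup convention shown is generic to DNSBLs; nothing here reflects SpamCop’s internal scoring:

```python
# Minimal sketch of a real-time DNSBL check, the kind of lookup a mail
# server makes for each incoming connection. By DNSBL convention, you
# reverse the sender's IP octets, append the list's zone, and ask for an
# A record: an answer (typically 127.0.0.x) means "listed," while a
# name-not-found error means "not listed."

import socket

def is_listed(sender_ip: str, zone: str = "bl.spamcop.net") -> bool:
    reversed_octets = ".".join(reversed(sender_ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_octets}.{zone}")
        return True
    except socket.gaierror:
        return False

# A mail server using the list as a pure pass/fail gate would simply
# refuse the connection whenever this returns True.
print(is_listed("127.0.0.2"))  # 127.0.0.2 is the conventional test entry
```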

• Legitimate or Not Legitimate?
SpamCop gained notoriety in late 2002 from a well-publicized run-in with
Declan McCullagh, a News.com contributor and editor of Politech, a political
discussion list. McCullagh reported at the time that SpamCop had erroneously
put Politech’s e-mail server on the blocklist three times within the space of
a few months. McCullagh also accused Haight of deliberately putting competing
antispam services on his blocklist.

• A Wave of Accusations.
These articles led many other Internet users to suggest that certain things
about SpamCop smelled bad. In February 2003, an analysis of the alleged
failings of SpamCop was posted by Jeremy Howard, founding director of
FastMail.fm, a company described by the
Sunday Times of London as “one of the slickest, most powerful e-mail systems on
the planet.” Howard charged that not only did SpamCop’s blocking list contain
inaccuracies, but that a single complaint could cause a small e-mail service
to be labeled a spammer.

• Good For Something or Good For Nothing?
The rhetoric around the blocklist grew so heated that Ray Everett-Church,
a respected antispam authority and board member of
CAUCE (Coalition Against
Unsolicited Commercial Email), wrote that SpamCop was “a continuing
embarrassment to those engaged in responsible anti-spam efforts.”

At the time, Haight wrote responses to Politech’s and Howard’s criticisms.
Those responses engendered still more replies, which finally petered out.
No one seemed satisfied.

IronPort Picks Up SpamCop As An Asset

With the one-year anniversary of IronPort’s June 24, 2003, purchase of SpamCop
coming up, I thought the infusion of cash might by now have produced some
detectable improvement in the blocklist. In my recent interview, Haight
confirmed that he is now able to pay three assistants who had tried to manage
the flow of spam reports on a mostly unpaid basis in SpamCop’s earlier days.

SpamCop is currently getting spam reports from a network of between
30,000 and 40,000 end users of its system, Haight says.

Unfortunately, end users are notoriously poor at restricting their spam
complaints to truly unsolicited bulk e-mail (UBE). All too many complaints
involve legitimate, requested mailings that the recipient simply didn’t like
or no longer wishes to receive.

One study of this phenomenon was released in March by AWeber Communications,
an e-mail publishing service. It tracked 22,000 AOL users who’d subscribed to
legitimate e-mail newsletters through AWeber. After 60 days, 2.1% of the
subscribers had clicked AOL’s “Report Spam” button to unsubscribe. Even
subscribers who’d been required to re-confirm their initial signups (using
the so-called double opt-in method) clicked the “Report Spam” button in 1.4%
of the cases. Even though the newsletters studied were entirely
permission-based, the recipients wrongly generated approximately 400 spam
complaints to AOL in just two months.
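
To put those percentages in absolute terms, here’s the back-of-the-envelope arithmetic. AWeber didn’t publish how the 22,000 subscribers were divided between single and double opt-in, so the even split below is purely an assumption:

```python
# Rough arithmetic behind the "approximately 400 complaints" figure.
# ASSUMPTION: the split between single and double opt-in subscribers
# wasn't published, so an even 50/50 split is assumed here.

subscribers = 22_000
single_optin_rate = 0.021   # 2.1% clicked "Report Spam"
double_optin_rate = 0.014   # 1.4% clicked "Report Spam"

single = (subscribers / 2) * single_optin_rate   # ~231 complaints
double = (subscribers / 2) * double_optin_rate   # ~154 complaints
print(f"Estimated complaints over 60 days: {single + double:.0f}")  # ~385
```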

Haight has adjusted to this trigger-happiness by gradually giving much less
weight to his end-users’ complaints. Out of the 1.5 million reports each day
that SpamCop currently receives, Haight says, “80 to 90 percent are [now]
generated by spam traps.”

Spam traps are e-mail addresses that SpamCop has posted on Web pages but has
never used for any ordinary correspondence. When such an address receives
e-mail, therefore, the message is presumed to have come from a spammer.
(Senders of UBE often use “harvesting” software that robotically captures
addresses by scanning Web sites.)

According to SpamCop’s current FAQ page, reports from spam-trap addresses are
given at least five times as many points in the blocklist’s scoring process as
reports from individuals. Reports from spam traps are also much easier to
handle automatically, Haight says.
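
SpamCop hasn’t published its exact formula, so the sketch below only illustrates the weighting idea described in the FAQ; the listing threshold is invented for the example:

```python
# Illustrative weighted scoring for a blocklist. The 5x multiplier comes
# from SpamCop's FAQ; the listing threshold is an invented placeholder,
# since SpamCop's actual algorithm isn't public.

from collections import defaultdict

SPAM_TRAP_WEIGHT = 5.0    # trap reports count at least 5x a user report
USER_WEIGHT = 1.0
LISTING_THRESHOLD = 10.0  # ASSUMPTION: points needed before an IP is listed

scores = defaultdict(float)

def record_report(ip, from_spam_trap):
    """Accumulate points against the reported sending IP."""
    scores[ip] += SPAM_TRAP_WEIGHT if from_spam_trap else USER_WEIGHT

def is_listed(ip):
    return scores[ip] >= LISTING_THRESHOLD

# Two spam-trap hits carry as much weight as ten individual complaints:
record_report("192.0.2.1", from_spam_trap=True)
record_report("192.0.2.1", from_spam_trap=True)
print(is_listed("192.0.2.1"))  # True
```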

How Accurate Should a Blocklist Be?

IronPort uses SpamCop’s historical database of the last 30 days of spam
complaints, Haight says, mostly to evaluate applicants who pay to be listed in
IronPort’s own “whitelist.” That list, known as Bonded Sender, was recently
selected for use by Hotmail.

“As far as the SpamCop blacklist, I’m still pretty much the owner of that,”
Haight explains. “There’s some pressure from IronPort to improve that process,
but what we’re doing now is the best that we can expect.”

News.com’s McCullagh didn’t respond to a request for comment for this article.
But SpamCop critic Howard was happy to speak on the record about the
controversial blocklist.

Surprisingly, Howard isn’t universally negative about SpamCop. He’s actually
quite effusive about the list of Web sites advertised in UBE that SpamCop
compiles. This list, in turn, is organized into an online database by
SURBL.org, a service that’s
otherwise unconnected with SpamCop. E-mails containing links to sites that have
previously been advertised in UBE, Howard says, have a high probability of
being spam.

What Howard objects to is the use of SpamCop’s blocklist, which he considers
inaccurate, as a kind of yes/no Magic 8-Ball. “My criticism of the
SpamCop blocklist is using it as a blocking list,” he says.
“That’s a bad idea, because it has a large number of false positives.” SpamCop
only works well, Howard explains, when it’s just one of many factors used to
compute a probability score for suspicious e-mails. That approach is used by
SpamAssassin, a
popular open-source spam filter, and others.
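
Here’s a rough sketch of what that score-based approach looks like, in the SpamAssassin spirit: a blocklist hit contributes points rather than triggering an outright rejection. All the signal names, weights, and the threshold below are invented for illustration:

```python
# Sketch of score-based filtering in the SpamAssassin style: each signal
# adds points, and only the combined score decides the message's fate.
# Every weight and the threshold here are invented for illustration.

SIGNAL_WEIGHTS = {
    "listed_on_spamcop": 1.5,      # one clue among many, not a death sentence
    "listed_on_other_dnsbl": 1.0,
    "url_in_surbl": 3.0,           # a spamvertised URL is a stronger signal
    "suspicious_subject": 0.5,
}
SPAM_THRESHOLD = 5.0

def spam_score(signals):
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def is_spam(signals):
    return spam_score(signals) >= SPAM_THRESHOLD

# A SpamCop listing alone doesn't condemn a message...
print(is_spam({"listed_on_spamcop"}))  # False (1.5 < 5.0)
# ...but corroborated by a SURBL hit and a second blocklist, it does.
print(is_spam({"listed_on_spamcop", "url_in_surbl",
               "listed_on_other_dnsbl"}))  # True (5.5 >= 5.0)
```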

Should You Use SpamCop And, If So, How Much?

SpamCop’s own FAQ
text seems to agree that its blocklist should be taken in
limited doses. “The SpamCop Blocking List history should be used as a small
item of interest in a larger investigation,” it reads. The text goes on to
name several other antispam services whose databases can be employed together
with SpamCop in various combinations.

Unfortunately, the main how-to page at SpamCop gives mail administrators
instructions only for configuring the blocklist as an absolute yes/no system.
“Probably 99 percent” of mail admins who use SpamCop, Haight says, configure
it in this way — as a pure pass/fail test.

When asked why his site doesn’t recommend using SpamAssassin or some
other tool that can weigh SpamCop’s blocklist as one among many factors in
scoring mail as possible spam, Haight replies, “I’ve never had anyone ask me.”

Conclusion

SpamCop’s site clearly states that its blocklist “should not be used in a
production environment where legitimate email must be delivered.” I’m forced
to agree with this advice. Because of its many problems, I don’t recommend
that any company rely upon the SpamCop blocklist.

Better spam-blocking tools are clearly available. An exhaustive review of 27
enterprise-level antispam solutions was published on May 10 by Ron Anderson of
Network Computing. Of the top 10 products — evaluated on their accuracy,
manageability, price and other factors — the Editor’s Choice went to
Barracuda Networks’ Spam Firewall, a network appliance. The testers found that
the product had very good spam detection and the lowest overall cost of
any contender: only $0.27 per user per year for 10,000 users.

Interestingly, IronPort’s own C60 antispam appliance also made it into the top
10. But it ranked only ninth and had the second-highest
cost in the group: $11.14/user/year for 10,000 users.

For the complete results of the tests, see Anderson’s review.
