In this case, rather than use celebrities or the allure of sex to attract readers, it relies on the most watched newsmakers of the last few years: the weather. And as with other self-propagating worms, it reads contact lists (e.g., Outlook address books) to spread.
But there is an added twist to this one: it also propagates by inserting a link into forum posts and blogs you normally contribute to over HTTP. Note that this worm/trojan targets the protocol itself rather than just attaching to a specific application.
It is this intelligent aspect of the worm that makes it somewhat unusual. The potential is huge for a repeat of the infamous Melissa and I Love You outbreaks that ravaged systems in 1999 and 2000.
One of the challenges of dealing with these sorts of issues, dynamic variables that can catch your anti-virus software off guard, is remaining proactive in one's awareness of threats. The biggest problem with most anti-virus software is that it's reactive: a virus is found, a new definition is created, and you have to make sure you've downloaded the latest definition to remain protected.
This methodology used to be sufficient for most environments, but that is no longer the case. Thanks to the regular use of botnets, attacks are no longer specific to a geographic region. They are also more likely to rely on multi-protocol attack types, heavy use of social engineering, and increasing use of peer-to-peer protocols. While users are savvier about the various phishes and attack types, there are enough variations out there to catch even the most cautious individuals off guard.
Technology has to meet the changing needs of communications today, as well as the attacks against those types of communications. A few years ago, email was just, well, email. Today it's heavily enhanced with HTTP/HTML.
As much as I would love to see a return to plain ol' email messages, the reality is that enhanced email is here to stay. Addressing the issues found within those protocols has to be done more readily online, as the packets travel.
I got to speak with Dmitri Alperovitch, Research Scientist for TrustedSource. The computer security firm has a new way of thinking when it comes to dealing with attacks. Rather than wait until an attack has fully propagated before identifying it and its path, TrustedSource, through a network of thousands of systems, monitors traffic running across the Internet and rates it, much like a credit rating system.
Traffic gets rated on over 1,000 different characteristics, including volume, the number of individuals a message was sent to, how often it was sent, and so on. This first line of defense, determining the legitimacy of a sender, is excellent at eliminating the majority of phishing, scam, and virus attacks.
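To picture how this kind of rating works, here is a minimal sketch of scoring a sender on a few weighted traffic characteristics. TrustedSource's actual model uses over 1,000 features; the feature names, weights, and threshold semantics below are entirely my own invention for illustration.

```python
# Hypothetical reputation scoring: a weighted sum over a handful of
# traffic characteristics. Real systems use far more features; these
# names and weights are invented for illustration only.

WEIGHTS = {
    "volume_per_hour": 0.5,    # sudden high volume is suspicious
    "unique_recipients": 0.3,  # wide fan-out suggests spam or a worm
    "burstiness": 0.2,         # erratic sending patterns
}

def reputation_score(features):
    """Return a 0-100 score; higher means more suspicious."""
    score = sum(WEIGHTS[name] * min(value, 100)
                for name, value in features.items())
    return round(score, 1)

# A steady, low-volume sender versus a newly active zombie host.
steady_newsletter = {"volume_per_hour": 10, "unique_recipients": 30,
                     "burstiness": 5}
zombie_host = {"volume_per_hour": 100, "unique_recipients": 100,
               "burstiness": 90}

print(reputation_score(steady_newsletter))  # low score: looks legitimate
print(reputation_score(zombie_host))        # high score: likely a spam source
```

The point of the sketch is that none of these signals requires reading message content, which is what makes sender-level rating such a cheap first line of defense.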
What I found most interesting was the free dashboard at http://www.trustedsource.org, where one can check individual addresses to determine their rating.
I checked out how AntiOnline (Enterprise IT Planet's security discussion forums) rated and found our rating quite good. Mail traffic is to be expected for certain IP addresses, but when you see unusual spikes or unusual traffic patterns, it may be an indication of questionable traffic. Some traffic spikes are due to political, environmental, or other newsworthy events, but those are easily identifiable.
TrustedSource estimates that about 450,000 new zombies pop up each day. This is a rather disturbing statistic, as it opens the door to a great amount of propagation and indicates how many people may be unaware they are infected. We would hope that, with the wealth of readily available computer knowledge, this lack of awareness would be a thing of the past, but I believe it just highlights the effectiveness of the social engineering methods used by attackers.
TrustedSource adjusts in real time, so it's constantly being updated and can tailor that information for the Secure Computing applications found within enterprises. You can think of it as a global IDS or early-warning system, one that's far more proactive than traditional anti-virus and other malware-busting software. This kind of protection is something ISPs can use to limit the amount of bad traffic on the Internet today. I, for one, would love to see that, so this kind of activity can be throttled and stopped before it even comes close to the end user.
What is most impressive is how low-key this has been in the industry. TrustedSource has been collecting data from traffic in Secure Computing environments for over five years, so its ability to distinguish legitimate from questionable data is fairly good. There is enough of a sample for the majority of traffic out there that fuzzy areas are few and far between. And because the overhead on existing network traffic is minimal (a series of simple queries), it's non-intrusive to legitimate traffic bandwidth.
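That first-line query can be pictured as a cheap lookup that runs before any heavier content analysis. Everything in this sketch, the cached score table, the neutral default, and the threshold, is a hypothetical stand-in, not TrustedSource's actual mechanism.

```python
# Hypothetical first-line filtering: a cheap per-IP reputation lookup
# consulted before any expensive content analysis. The score table,
# default, and threshold are invented for illustration.

REPUTATION_CACHE = {
    "192.0.2.10": 12,   # long-standing mail server with a good history
    "203.0.113.7": 95,  # recently active zombie
}

BLOCK_THRESHOLD = 80  # scores at or above this are rejected outright

def should_accept(sender_ip):
    # Unknown senders get a neutral score here; in practice they would
    # fall through to deeper, second-level analysis.
    score = REPUTATION_CACHE.get(sender_ip, 50)
    return score < BLOCK_THRESHOLD

print(should_accept("192.0.2.10"))   # True
print(should_accept("203.0.113.7"))  # False
```

Because the decision is a single table lookup per connection, legitimate traffic pays almost nothing for the protection.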
In fact, a single query can block 60-80% of the bandwidth being siphoned off by questionable activities. This means better usage of network resources and cost savings. If you want to see how much you can potentially lose, check out the ROI calculator on the TrustedSource.org website. Heck, I even did the ROI for myself. With the amount of spam I get, I could save almost $1000 a year by using this kind of system!
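My back-of-the-envelope ROI works out roughly like this; the volumes and rates below are my own rough assumptions for illustration, not figures from the TrustedSource calculator.

```python
# Rough ROI arithmetic for manually handling spam that slips through.
# All figures are illustrative assumptions, not measured values.

spam_per_day = 50          # messages that get past filtering
seconds_per_message = 5    # time to identify and delete each one
hourly_rate = 40.0         # value of that time in dollars

hours_per_year = spam_per_day * seconds_per_message * 365 / 3600
annual_cost = hours_per_year * hourly_rate
print(round(annual_cost, 2))  # roughly a thousand dollars a year
```

Even with conservative numbers, the wasted time adds up to real money, which is the whole argument for stopping the traffic upstream.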
TrustedSource's ability to tie into existing systems and to perform second-level analysis (e.g., content analysis, anti-spam algorithms, etc.) means we get a more robust response to multi-protocol attacks. It's also nice that it can integrate into firewalls, gateways, and even desktops, so it's not limited to one system type but covers all the system types that make up an environment. That means catching attacks as they start and stopping them before they escape an environment. Overall, it means isolating situations and avoiding a repeat of Melissa/I Love You all over again.
We cannot remain isolated in the networked world any more. We have to take ownership of our involvement in the capital-I Internet.