It's not every day that a public security exploit is published for the Linux kernel, yet that is what happened in early July. Though the flaw itself was patched in the mainline Linux kernel less than two weeks before the public exploit code appeared, not all users may have patched. It could have been a lot worse.
The issue of patching aside, the public exploit could easily have been a zero-day against the Linux kernel itself, were it not for the fact that the bug enabling the exploit was caught by a scan from code-analysis vendor Coverity. The Linux kernel has been actively scanned by Coverity since at least 2004 in an effort to find bugs and improve code quality.
"Our builds were broken in February and March, so we didn't see it immediately when the code was first committed," David Maxwell, open source strategist for Coverity, told InternetNews.com. "But we've had it flagged in the system since March, and it was fixed on the fifth of July."
The public exploit was published on July 17th.
The actual exploit involves a number of components, including a null pointer defect, a type of code flaw that Coverity scans for. A null pointer dereference typically leads to a system crash, but this particular one could be used in concert with a compiler optimization, enabling an attacker to take control of certain memory blocks on the target computer.
In addition to fixing the null pointer defect on July 5th, Maxwell noted that on July 16th there was a code commit to the Linux kernel disabling the specific compiler optimization, to help further ensure that similar exploit vectors are blocked.
Coverity's code scanning system, called Scan, identifies software defects such as null pointer errors, which are relatively common in open source software.
In 2006, Coverity began a multi-year effort, originally sponsored by the U.S. Department of Homeland Security, to scan over two hundred open source software applications. In 2008, Coverity reported that null pointer errors were the most common type of error found in the open source applications it scanned, representing nearly 28 percent of all bugs found.
Not all bugs are security exploits though.
Maxwell commented that it's difficult to come up with a ratio of bugs in code to actual vulnerabilities, since many exploits depend on the larger application environment.
"People with an engineering mindset tend to break things down into little pieces for analysis, where part A plugs into part B and then into part C," Maxwell said. "The nature of security issues is that they are system problems. They have to be looked at as A plus B plus C, as the full interaction. So if you try and ask how many part A's lead to defects, it's a hard ratio to figure out."
One thing Maxwell is certain of is the need to continuously scan code bases as applications continue to develop and grow.
Coverity is set to release a new version of its full Scan report later this year which will detail the overall progress and trends they've seen in open source code.
"Over the period of about two years we saw about 153 percent gain in the number of additional defects from the original scan, as people committed new code," Maxwell said.
Maxwell commented that a few years ago, people might have questioned the value of continuing to scan the same projects over and over, after all the initial defects were found and fixed.
"We've definitely seen that as you continue to scan new code that comes in, we continue to find issues like this recent Linux security issue," he said.
Article courtesy of InternetNews.com.