It’s not every day that a public security exploit is published for the Linux kernel, yet that is what happened in early July. Though the flaw itself was patched in the mainline Linux kernel several weeks before the exploit code was published, not all users may have patched. It could have been a lot worse.
The issue of patching aside, the public exploit could easily have been a zero-day exploit against the Linux kernel itself, had the bug that enables the exploit not been caught by a scan from code-scanning vendor Coverity. The Linux kernel has been actively scanned by Coverity since at least 2004 in an effort to find bugs and improve code quality.
“Our builds were broken in February and March so we didn’t see it immediately when the code was first committed,” David Maxwell, open source strategist for Coverity, told InternetNews.com. “But we’ve had it flagged in the system since March and it was fixed on the fifth of July.”
The public exploit was published on July 17th.
The exploit itself involves a number of components, including a null pointer defect, a type of code flaw that Coverity scans for. A null pointer dereference typically leads to a system crash, but this particular one could be used in concert with a compiler optimization, enabling an attacker to take control of certain memory blocks on the target computer.
In addition to the fix for the null pointer defect on July 5th, Maxwell noted that on July 16th a code commit to the Linux kernel disabled the specific compiler optimization option, to help further ensure that similar exploit vectors are blocked.
Coverity’s code scanning system, called Scan, identifies software defects such as null pointer errors, which are relatively common in open source software.
In 2006, Coverity began a multi-year effort, originally sponsored by the US Department of Homeland Security, to scan over two hundred open source software applications. In 2008, Coverity reported that null pointer errors were the most common type of error found in the open source applications it scanned, representing nearly 28 percent of all bugs found.
Not all bugs are security exploits though.
Maxwell commented that it’s difficult to come up with a ratio of bugs in code to actual vulnerabilities, since many exploits depend on the larger application environment.
“People with an engineering mindset tend to break things down into little pieces for analysis where part A plugs into part B and then into Part C,” Maxwell said. “The nature of security issues is that they are system problems. They have to be looked at as, A plus B plus C as the full interaction. So if you try and ask how many part A’s lead to defects, it’s a hard ratio to figure out.”
One thing that Maxwell is certain of is the need to continuously scan code bases as applications continue to develop and grow.
Coverity is set to release a new version of its full Scan report later this year which will detail the overall progress and trends they’ve seen in open source code.
Additional defects pile up
“Over the period of about two years we saw about a 153 percent gain in the number of additional defects from the original scan, as people committed new code,” Maxwell said.
Maxwell commented that a few years ago, people might have questioned the value of continuing to scan the same projects over and over, after all the initial defects were found and fixed.
“We’ve definitely seen that as you continue to scan new code that comes in, we continue to find issues like this recent Linux security issue,” he said.
Article courtesy of InternetNews.com.