Secure Programming: the Seven Pernicious Kingdoms

1.3 The Quality Fallacy

Anyone who has ever written a program knows that mistakes are inevitable. Anyone who writes software professionally knows that producing good software requires a systematic approach to finding bugs. By far the most widely used approach to bug finding is dynamic testing, which involves running the software and comparing its output against an expected result. Advocates of extreme programming want to see a lot of small tests (unit tests) written by the programmer even before the code is written. Large software organizations have big groups of dedicated QA engineers who are responsible for nothing other than writing tests, running tests, and evaluating test results.
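
For example, a unit test in the JUnit style runs a piece of code and compares its output against an expected result. The class and method below are illustrative, not taken from any particular codebase:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: apply a 10% discount to orders of $100 or more.
class PriceCalculator {
    static double discountedTotal(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }
}

// Dynamic testing in miniature: run the code, compare its output to an expectation.
class PriceCalculatorTest {
    @Test
    void appliesDiscountAtThreshold() {
        assertEquals(90.0, PriceCalculator.discountedTotal(100.0), 0.001);
    }
}

A suite of such tests demonstrates that the code does what it is supposed to do; it says nothing about what else the code might do.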

If you’ve always thought of security as just another facet of software quality, you might be surprised to learn that it is almost impossible to improve software security merely by improving quality assurance. In practice, most software quality efforts are geared toward testing program functionality. The purpose is to find the bugs that will affect the most users in the worst ways. Functionality testing works well for making sure that typical users with typical needs will be happy, but it just won’t work for finding security defects that aren’t related to security features. Most software testing is aimed at comparing the implementation to the requirements, and this approach is inadequate for finding security problems.

The software (the implementation) has a list of things it’s supposed to do (the requirements). Imagine testing a piece of software by running down the list of requirements and making sure the implementation fulfills each one. If the software fails to meet a particular requirement, you’ve found a bug. This works well for testing software functionality, even security functionality, but it will miss many security problems because security problems are often not violations of the requirements. Instead, security problems are frequently “unintended functionality” that causes the program to be insecure.

Ivan Arce, CTO of Core Security Technologies, put it like this: "Reliable software does what it is supposed to do. Secure software does what it is supposed to do, and nothing else."

The following JSP fragment demonstrates this phenomenon. (This bit of code is from Foundations of AJAX [Asleson and Schutta, 2005].) The code accepts an HTTP parameter and echoes it back to the browser.

Hello ${param.name}!

This code might meet the program’s requirements, but it also enables a cross-site scripting attack because it will echo any string back to the browser, including a script written by an attacker. Because of this weakness, unsuspecting victims could click on a malicious link in an email message and subsequently give up their authentication credentials to an attacker. (See Chapter 9, “Web Applications,” for a complete discussion of cross-site scripting.) No amount of testing the intended functionality will reveal this problem.
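
The fix is small once the problem is understood. The sketch below assumes the page uses JSTL; the <c:out> tag escapes HTML metacharacters by default, so a script submitted by an attacker is rendered as inert text instead of being executed:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
Hello <c:out value="${param.name}"/>!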

A growing number of organizations attempt to overcome the lack of focus on security by mandating a penetration test. After a system is built, testers stage a mock attack on the system. A black-box test gives the attackers no information about how the system is constructed. This might sound like a realistic scenario, but in reality, it is both inadequate and inefficient. Testing cannot begin until the system is complete, and testers have exclusive access to the software only until the release date. After the release, attackers and defenders are on equal footing; attackers are now able to test and study the software, too. Because of this narrow window, attackers, taken together, can easily devote more hours to hunting for problems than defenders can devote to testing. The testers eventually move on to other tasks, but attackers get to keep on trying. The end result of their greater investment is that attackers can find a greater number of vulnerabilities.

Black-box testing tools try to automate some of the techniques applied by penetration testers by using precanned attacks. Because these tools use close to the same set of attacks against every program, they are able to find only defects that do not require much meaningful interaction with the software being tested. Failing such a test is a sign of real trouble, but passing doesn’t mean very much; it’s easy to pass a set of precanned tests.
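
The sketch below shows what a precanned test amounts to; the target URL and parameter name are hypothetical, and a real tool would carry thousands of payloads. It sends a short list of canned cross-site scripting payloads and flags any response that reflects them verbatim:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;

// A black-box probe in miniature: fire the same canned payloads at a
// parameter and flag responses that echo them back unencoded.
public class CannedXssProbe {
    static final List<String> PAYLOADS = List.of(
            "<script>alert(1)</script>",
            "\"><img src=x onerror=alert(1)>");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (String payload : PAYLOADS) {
            String url = "http://localhost:8080/hello.jsp?name="
                    + URLEncoder.encode(payload, StandardCharsets.UTF_8);
            HttpResponse<String> response = client.send(
                    HttpRequest.newBuilder(URI.create(url)).build(),
                    HttpResponse.BodyHandlers.ofString());
            if (response.body().contains(payload)) {
                System.out.println("Possible reflected XSS: " + payload);
            }
        }
    }
}

Note how little the probe needs to know about the program under test; that shallowness is exactly why passing such a test means so little.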

Another approach to testing, fuzzing, involves feeding the program randomly generated input [Miller, 2007]. Testing with purely random input tends to trigger the same conditions in the program again and again, which is inefficient. To improve efficiency, a fuzzer should skew the tests it generates based on knowledge about the program under test. If the fuzzer generates tests that resemble the file formats, protocols, or conventions used by the target program, it is more likely to put the program through its paces. Even with customization, fuzzing is a time-consuming process, and without proper iteration and refinement, the fuzzer is likely to spend most of its time exploring a shallow portion of the program’s state space.
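
The sketch below shows mutation-based fuzzing under those assumptions; the harness is minimal, and a production fuzzer would also track crashes, monitor coverage, and refine its seed corpus over time. Starting from a well-formed seed keeps most generated inputs structurally valid, so they reach deeper program states than uniformly random bytes:

import java.util.Random;
import java.util.function.Consumer;

// Mutation fuzzing in miniature: start from a valid (non-empty) seed input
// and flip a few random bits per test. Inputs that still resemble the
// expected format exercise more of the target than purely random data would.
public class MiniFuzzer {
    private static final Random RNG = new Random();

    static byte[] mutate(byte[] seed) {
        byte[] test = seed.clone();
        int flips = 1 + RNG.nextInt(4);
        for (int i = 0; i < flips; i++) {
            test[RNG.nextInt(test.length)] ^= (byte) (1 << RNG.nextInt(8));
        }
        return test;
    }

    static void fuzz(byte[] seed, Consumer<byte[]> target, int iterations) {
        for (int i = 0; i < iterations; i++) {
            byte[] input = mutate(seed);
            try {
                target.accept(input);
            } catch (RuntimeException e) {
                // Any unexpected exception is a lead worth investigating.
                System.out.printf("iteration %d: %s%n", i, e);
            }
        }
    }
}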
