
Secure Programming: the Seven Pernicious Kingdoms


“Success is foreseeing failure.”

—Henry Petroski

In this overview of Secure Programming with Static Analysis:

Introduction: Improving Software Security

Defensive Programming Is Not Enough

The Quality Fallacy

Static Analysis in the Big Picture

Classifying Vulnerabilities

The Seven Pernicious Kingdoms

Secure Programming: Summary

Improving Software Security: Introduction

We believe that the most effective way to improve software security is to study past security errors and prevent them from happening in the future. In fact, that is the primary theme of this book. In the following chapters, we look at a variety of programming tasks and examine the common security pitfalls associated with them. Our philosophy is similar to that of Henry Petroski: To build a strong system, you have to understand how the system is likely to fail [Petroski, 1985]. Mistakes are inevitable, but you have a measure of control over your mistakes. Although you can’t have precise knowledge of your next blunder, you can control the set of possibilities. You can also control where, when, and by whom your mistake will be found. This book focuses on finding mistakes that manifest themselves in source code. In particular, it concentrates on mistakes that lead to security problems, which can be both tricky to uncover and costly to ignore.

Being aware of common pitfalls might sound like a good way to avoid falling prey to them, but awareness by itself often proves to be insufficient. Children learn the spelling rule “i before e except after c,” but widespread knowledge of the rule does not prevent “believe” from being a commonly misspelled word. Understanding security is one thing; applying your understanding in a complete and consistent fashion to meet your security goals is quite another. For this reason, we advocate static analysis as a technique for finding common security errors in source code. Throughout the book, we show how static analysis tools can be part of a strategy for getting security right.

The term static analysis refers to any process for assessing code without executing it. Static analysis is powerful because it allows for the quick consideration of many possibilities. A static analysis tool can explore a large number of “what if” scenarios without having to go through all the computations necessary to execute the code for all the scenarios. Static analysis is particularly well suited to security because many security problems occur in corner cases and hard-to-reach states that can be difficult to exercise by actually running the code. Good static analysis tools provide a fast way to get a consistent and detailed evaluation of a body of code.

Advanced static analysis tools are not yet a part of the toolkit that most programmers use on a regular basis. To explain why they should be, we begin by looking at why some commonly used approaches to security typically fail. We discuss defensive programming, software security versus security features, and mistaking software quality efforts for software security efforts. Of course, no single tool or technique will ever provide a complete solution to the security problem by itself. We explain where static analysis fits into the big picture and then end the chapter by categorizing the kinds of mistakes that most often jeopardize software security.

1.1 Defensive Programming Is Not Enough

The term defensive programming often comes up in introductory programming courses. Although it is increasingly given a security connotation, historically it has referred only to the practice of coding with the mindset that errors are inevitable and that, sooner or later, something will go wrong and lead to unexpected conditions within the program. Kernighan and Plauger call it “writing the program so it can cope with small disasters” [Kernighan and Plauger, 1981]. Good defensive programming requires adding code to check one’s assumptions. The term defensive programming is apt, particularly in introductory programming courses, because often novice programmers are their own worst enemy; by and large, the defenses serve to reveal logic errors made by the programmer. Good defensive programming makes bugs both easier to find and easier to diagnose.

But defensive programming does not guarantee secure software (although the notion of expecting anomalies is very much a step in the right direction). When we talk about security, we assume the existence of an adversary—someone who is intentionally trying to subvert the system. Instead of trying to compensate for typical kinds of accidents (on the part of either the programmer or the user), software security is about creating programs that behave correctly even in the presence of malicious behavior. Consider the following C function that prints a message to a specified file descriptor without performing any error checking:

void printMsg(FILE* file, char* msg) {
  fprintf(file, msg);
}

If either argument to this function is null, the program will crash. Programming defensively, we might check to make sure that both input parameters are non-null before printing the message, as follows:

void printMsg(FILE* file, char* msg) {
  if (file == NULL) {
    logError("attempt to print message to null file");
  } else if (msg == NULL) {
    logError("attempt to print null message");
  } else {
    fprintf(file, msg);
  }
}

From a security perspective, these checks simply do not go far enough. Although we have prevented a caller from crashing the program by providing null values, the code does not account for the fact that the value of the msg parameter itself might be malicious. By providing msg as the format string argument to fprintf(), the code leaves open the possibility that an attacker could specify a malicious format string designed to carry out a format string attack. (Chapter 6, “Buffer Overflow,” discusses format string vulnerabilities in detail.) If an attacker can slip in a message that looks something like this, the attacker could potentially take control of the program:

AAA1_%08x.%08x.%08x.%08x.%08x.%n

This attempt at defensive programming shows how a straightforward approach to solving a programming problem can turn out to be insecure. The people who created the programming languages, libraries, frameworks, protocols, and conventions that most programmers build upon did not anticipate all the ways their creations would be assailed. Because of a design oversight, format strings became an attack vector, and seemingly reasonable attempts at error handling turn out to be inadequate in the face of attack.
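To see why such a string is dangerous, consider a minimal sketch (the input and the output are purely illustrative) that contrasts a fixed format string with user data passed as the format string itself:

#include <stdio.h>

/* Illustrative only: the same attacker-style input printed two ways. */
int main(void) {
  char userData[] = "%08x.%08x.%08x.%08x";

  printf("%s\n", userData); /* fixed format string: the input is printed literally */
  printf(userData);         /* input used as the format string: each %08x pulls    */
  printf("\n");             /* and prints a value that was never passed in         */
  return 0;
}

With a fixed format string, the directives embedded in the input are just text to be printed; without one, they drive the output routine, and a directive such as %n even writes to memory, which is what gives an attacker a foothold.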

A security-conscious programmer will deprive an attacker of the opportunity this vulnerability represents by supplying a fixed format string:

void printMsg(FILE* file, char* msg) {
  if (file == NULL) {
    logError("attempt to print message to null file");
  } else if (msg == NULL) {
    logError("attempt to print null message");
  } else {
    fprintf(file, "%.128s", msg);
  }
}

In considering the range of things that might go wrong with a piece of code, programmers tend to stick with their experience: The program might crash, it might loop forever, or it might simply fail to produce the desired result. All of these failure modes are important, but preventing them does not lead to software that stands up to attack. Historically, programmers have not been trained to consider the interests or capabilities of an adversary. This results in code that might be well defended against the types of problems that a programmer is familiar with but that is still easy for an attacker to subvert.

1.2 Security Features != Secure Features

Sometimes programmers do think about security, but more often than not, they think in terms of security features such as cryptographic ciphers, passwords, and access control mechanisms. As Michael Howard, a program manager on the Microsoft Security Engineering Team, says, “Security features != Secure features” [Howard and LeBlanc, 2002]. For a program to be secure, all portions of the program must be secure, not just the bits that explicitly address security. In many cases, security failings are not related to security features at all. A security feature can fail and jeopardize system security in plenty of ways, but there are usually many more ways in which defective nonsecurity features can go wrong and lead to a security problem. Security features are (usually) implemented with the idea that they must function correctly to maintain system security, but nonsecurity features often fail to receive this same consideration, even though they are often just as critical to the system’s security.

Programmers get this wrong all the time; as a consequence, they stop thinking about security when they need to be focusing on it. Consider this misguided quote from BEA’s documentation for WebLogic [BEA, 2004]:

Since most security for Web applications can be implemented by a system administrator, application developers need not pay attention to the details of securing the application unless there are special considerations that must be addressed in the code. For programming custom security into an application, WebLogic Server application developers can take advantage of BEA-supplied Application Programming Interfaces (APIs) for obtaining information about subjects and principals (identifying information for users) that are used by WebLogic Server. The APIs are found in the weblogic.security package.

Imagine a burglar who wants to break into your house. He might start by walking up to the front door and trying to turn the doorknob. If the door is locked, he has run into a security feature. Now imagine that the door’s hinges are on the outside of the house. The builder probably didn’t think about the hinge in relation to security; the hinges are by no means a security feature—they are present so that the door will meet the “easy to open and close” requirement. But now it’s unlikely that our burglar will spend time trying to pick the lock or pry open the door. He’ll simply lift out the hinge bolts and remove the door. Home builders stopped making this mistake long ago, but in the world of software security, this sort of goof-up still happens on a remarkably regular basis.

Instead of discussing ways to implement security features or make use of prepackaged security modules or frameworks, we concentrate on identifying and avoiding common mistakes in code that are not necessarily related to any security feature. We occasionally discuss security features, but only in the context of common implementation errors.

1.3 The Quality Fallacy

Anyone who has ever written a program knows that mistakes are inevitable. Anyone who writes software professionally knows that producing good software requires a systematic approach to finding bugs. By far the most widely used approach to bug finding is dynamic testing, which involves running the software and comparing its output against an expected result. Advocates of extreme programming want to see a lot of small tests (unit tests) written by the programmer even before the code is written. Large software organizations have big groups of dedicated QA engineers who are responsible for nothing other than writing tests, running tests, and evaluating test results.

If you’ve always thought of security as just another facet of software quality, you might be surprised to learn that it is almost impossible to improve software security merely by improving quality assurance. In practice, most software quality efforts are geared toward testing program functionality. The purpose is to find the bugs that will affect the most users in the worst ways. Functionality testing works well for making sure that typical users with typical needs will be happy, but it just won’t work for finding security defects that aren’t related to security features. Most software testing is aimed at comparing the implementation to the requirements, and this approach is inadequate for finding security problems.

The software (the implementation) has a list of things it’s supposed to do (the requirements). Imagine testing a piece of software by running down the list of requirements and making sure the implementation fulfills each one. If the software fails to meet a particular requirement, you’ve found a bug. This works well for testing software functionality, even security functionality, but it will miss many security problems because security problems are often not violations of the requirements. Instead, security problems are frequently “unintended functionality” that causes the program to be insecure.

Ivan Arce, CTO of Core Security Technologies, put it like this: “Reliable software does what it is supposed to do. Secure software does what it is supposed to do, and nothing else.”

The following JSP fragment demonstrates this phenomenon. (This bit of code is from Foundations of AJAX [Asleson and Schutta, 2005].) The code accepts an HTTP parameter and echoes it back to the browser.

Hello ${param.name}!

This code might meet the program’s requirements, but it also enables a cross-site scripting attack because it will echo any string back to the browser, including a script written by an attacker. Because of this weakness, unsuspecting victims could click on a malicious link in an email message and subsequently give up their authentication credentials to an attacker. (See Chapter 9, “Web Applications,” for a complete discussion of cross-site scripting.) No amount of testing the intended functionality will reveal this problem.
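The usual remedy is to encode user-supplied data before echoing it. The fragment above is JSP, but the underlying idea is language independent; the following C sketch (the function and the hostile input are our own illustration) HTML-encodes a string so that any embedded markup reaches the browser as inert text:

#include <stdio.h>

/* Minimal sketch: HTML-encode untrusted text before echoing it back. */
static void htmlEncode(const char *in, FILE *out) {
  for (; *in != '\0'; in++) {
    switch (*in) {
      case '&':  fputs("&amp;", out);  break;
      case '<':  fputs("&lt;", out);   break;
      case '>':  fputs("&gt;", out);   break;
      case '"':  fputs("&quot;", out); break;
      case '\'': fputs("&#39;", out);  break;
      default:   fputc(*in, out);      break;
    }
  }
}

int main(void) {
  const char *name = "<script>alert('xss')</script>"; /* hostile "name" parameter */
  printf("Hello ");
  htmlEncode(name, stdout); /* rendered as literal text, not executed as script */
  printf("!\n");
  return 0;
}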

A growing number of organizations attempt to overcome the lack of focus on security by mandating a penetration test. After a system is built, testers stage a mock attack on the system. A black-box test gives the attackers no information about how the system is constructed. This might sound like a realistic scenario, but in reality, it is both inadequate and inefficient. Testing cannot begin until the system is complete, and testers have exclusive access to the software only until the release date. After the release, attackers and defenders are on equal footing; attackers are now able to test and study the software, too. The narrow window means that the sum total of all attackers can easily have more hours to spend hunting for problems than the defenders have hours for testing. The testers eventually move on to other tasks, but attackers get to keep on trying. The end result of their greater investment is that attackers can find a greater number of vulnerabilities.

Black-box testing tools try to automate some of the techniques applied by penetration testers by using precanned attacks. Because these tools use close to the same set of attacks against every program, they are able to find only defects that do not require much meaningful interaction with the software being tested. Failing such a test is a sign of real trouble, but passing doesn’t mean very much; it’s easy to pass a set of precanned tests.

Another approach to testing, fuzzing, involves feeding the program randomly generated input [Miller, 2007]. Testing with purely random input tends to trigger the same conditions in the program again and again, which is inefficient. To improve efficiency, a fuzzer should skew the tests it generates based on knowledge about the program under test. If the fuzzer generates tests that resemble the file formats, protocols, or conventions used by the target program, it is more likely to put the program through its paces. Even with customization, fuzzing is a time-consuming process, and without proper iteration and refinement, the fuzzer is likely to spend most of its time exploring a shallow portion of the program’s state space.
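As a rough illustration of the idea, the sketch below (the seed input and the number of mutations are arbitrary choices) copies a valid-looking request and randomly perturbs a couple of bytes in each copy; a real fuzzer would feed every mutated test to the program under test and watch for crashes, hangs, or other misbehavior:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal mutation-based fuzzing sketch: start from a valid seed so the
   generated tests stay close to the format the target expects. */
static void mutate(unsigned char *buf, size_t len, int flips) {
  for (int i = 0; i < flips; i++) {
    buf[rand() % len] = (unsigned char)(rand() % 256);
  }
}

int main(void) {
  const char seed[] = "GET /index.html HTTP/1.0\r\n\r\n";
  unsigned char test[sizeof(seed)];

  srand(1); /* fixed seed so the run is reproducible */
  for (int i = 0; i < 5; i++) {
    memcpy(test, seed, sizeof(seed));
    mutate(test, sizeof(seed) - 1, 2);         /* perturb two bytes */
    fwrite(test, 1, sizeof(seed) - 1, stdout); /* a harness would send this to the target */
    printf("\n");
  }
  return 0;
}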

1.4 Static Analysis in the Big Picture

Most software development methodologies can be cast into some arrangement of the same four steps:

1. Plan—Gather requirements, create a design, and plan testing.

2. Build—Write the code and the tests.

3. Test—Run tests, record results, and determine the quality of the code.

4. Field—Deploy the software, monitor its performance, and maintain it as necessary.

Different methodologies place a different amount of emphasis on each step, sometimes iterating through many cycles of a few steps or shrinking steps as a project matures, but all commonly practiced methodologies, including the waterfall model, the spiral model, extreme programming, and the Rational Unified Process, can be described in this four-step context.

No matter what methodology is used, the only way to get security right is to incorporate security considerations into all the steps.

Historically, the symptoms of bad software security have been treated as a field problem to be solved with firewalls, application firewalls, intrusion detection systems, and penetration testing. Figure 1.2 illustrates this late-in-the-game approach. The problem is, it doesn’t work. Instead, it creates a never-ending series of snafus and finger pointing. The right answer, illustrated in Figure 1.3, is to focus efforts on the cause of most software security problems: the way the software is constructed. Security needs to be an integral part of the way software is planned and built. (It should continue to be part of testing and fielding software, too, but with a diminished emphasis.)

Gary McGraw estimates that roughly half of the mistakes that lead to security problems are implementation oversights, omissions, or misunderstandings [McGraw, 2006]. The format string and cross-site scripting problems we’ve already looked at both fall into this category. These are exactly the kinds of problems that a code review is good at flushing out. The down side is that, to find security problems during a code review, you have to be able to identify a security problem when you see one, and security mistakes can be subtle and easy to overlook even when you’re staring at them in the source code. This is where static analysis tools really shine. A static analysis tool can make the code review process faster and more fruitful by hypothesizing a set of potential problems for consideration during a code review.

If half of security problems stem from the way the program is implemented, the other half are built into the design. The purpose of an architectural risk analysis is to make sure that, from a high level, the system is not designed in a manner that makes it inherently insecure. Design problems can be difficult or impossible to spot by looking at code. Instead, you need to examine the specification and design documents to find inconsistencies, bad assumptions, and other problems that could compromise security. For the most part, architectural risk analysis is a manual inspection process.

Architectural risk analysis is useful not only for identifying design-level defects, but also for identifying and prioritizing the kinds of issues that need to be considered during code review. A program that is secure in one context might not be secure in another, so establishing the correct context for code review is important. For example, a program that is acceptable for a normal user could be a major security problem if run with administrator privileges. If a review of the design indicates that the program requires special privileges to run, the code review can look for ways in which those special privileges might be abused or misappropriated.

In his book Software Security, McGraw lays out a set of seven touchpoints for integrating software security into software development [McGraw, 2006]. Code review with a tool is touchpoint number one. Michael Howard and Steve Lipner describe Microsoft’s security practices in their book The Security Development Lifecycle [Howard and Lipner, 2006]. Like McGraw, they advocate the use of tools for analyzing source code. Similarly, the CLASP Application Security Process calls for performing a source-level security review using automated analysis tools [CLASP, 2005]. No one claims that source code review is capable of identifying all problems, but the consensus is that source code review has a major part to play in any software security process.

1.5 Classifying Vulnerabilities

In the course of our work, we look at a lot of vulnerable code. It is impossible to study vulnerabilities for very long without beginning to pick out patterns and relationships between the different types of mistakes that programmers make. From a high level, we divide defects into two loose groups: generic and context specific.

A generic defect is a problem that can occur in almost any program written in the given language. A buffer overflow is an excellent example of a generic defect for C and C++ programs: A buffer overflow represents a security problem in almost any context, and many of the functions and code constructs that can lead to a buffer overflow are the same, regardless of the purpose of the program. (Chapters 6, “Buffer Overflow” and 7, “Bride of Buffer Overflow,” discuss buffer overflow defects in detail.)

Finding context-specific defects, on the other hand, requires specific knowledge about the semantics of the program at hand. Imagine a program that handles credit card numbers. To comply with the Payment Card Industry (PCI) Data Security Standard, a program should never display a complete credit card number back to the user. Because there are no standard functions or data structures for storing or presenting credit card data, every program has its own way of doing things. Therefore, finding a problem with the credit card handling requires understanding the meaning of the functions and data structures defined by the program.
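A check for a defect like this has to be written in terms of the program's own functions and data. As a sketch (the function name and the card number are invented for illustration), a careful program might centralize the display of card numbers in a single routine, and a review would then confirm that nothing else ever prints the raw value:

#include <stdio.h>
#include <string.h>

/* Sketch: display only the last four digits of a card number. */
static void printMaskedCard(const char *cardNumber) {
  size_t len = strlen(cardNumber);
  for (size_t i = 0; i < len; i++) {
    putchar(i + 4 < len ? '*' : cardNumber[i]);
  }
  putchar('\n');
}

int main(void) {
  printMaskedCard("4111111111111111"); /* prints ************1111 */
  return 0;
}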

In addition to the amount of context required to identify a defect, many defects can be found only in a particular representation of the program. Figure 1.4 examines the matrix formed by defect type and defect visibility. High-level problems such as wholesale granting of trust are often visible only in the program’s design, while implementation errors such as omitting input validation can often be found only by examining the program’s source code. Object-oriented languages such as Java have large class libraries, which make it possible to more easily understand the design by examining the source code. Classes derived from a standard library carry significant semantics with them, but even in the best of cases, it is not easy (or desirable) to reverse-engineer the design from the implementation.

Security defects share enough common themes and patterns that it makes sense to define a nomenclature for describing them. People have been creating classification systems for security defects since at least the 1970s, but older classification efforts often fail to capture the salient relationships we see today.

Over the last few years, we have seen a renewed interest in this area. The Common Weakness Enumeration (CWE) project (http://cwe.mitre.org/) is building a formal list and a classification scheme for software weaknesses. The OWASP Honeycomb project (http://www.owasp.org/index.php/Category:OWASP_Honeycomb_Project) is using a community-based approach to define terms and relationships between security principles, threats, attacks, vulnerabilities, and countermeasures. We prefer a simple organization that gives us just enough vocabulary to talk to programmers about the kinds of coding errors that are likely to lead to security problems.

The Seven Pernicious Kingdoms

Throughout the book, we refer to the Seven Pernicious Kingdoms, a taxonomy created by Tsipenyuk, Chess, and McGraw [Tsipenyuk, Chess, McGraw, 2005]. The term kingdom is used as biologists use it in their taxonomy of living organisms: to indicate a high-level grouping of similar members. The Seven Pernicious Kingdoms are listed here:

1. Input Validation and Representation

2. API Abuse

3. Security Features

4. Time and State

5. Error Handling

6. Code Quality

7. Encapsulation

* Environment

(Note that there are actually eight kingdoms, with the eighth referring to the influence of outside factors, such as the environment, on the code.)

In our experience, this classification works well for describing both generic defects and context-specific defects. The ordering of kingdoms gives an estimate of their relative importance. McGraw discusses the Seven Pernicious Kingdoms in detail in Software Security [McGraw, 2006], and the complete taxonomy is available on the Web at http://vulncat.fortify.com; we include a brief overview here to lay out the terminology we use throughout the book.

1. Input Validation and Representation

Input validation and representation problems are caused by metacharacters, alternate encodings, and numeric representations. Security problems result from trusting input. The issues include buffer overflow, cross-site scripting, SQL injection, and many others. Problems related to input validation and representation are the most prevalent and the most dangerous category of security defects in software today. As a consequence, Chapter 5, “Handling Input,” is dedicated solely to matters of handling input, and input validation and representation play a significant role in the discussion of buffer overflow (Chapters 6 and 7), the Web (Chapter 9), and XML and Web Services (Chapter 10, “XML and Web Services”).

2. API Abuse

An API is a contract between a caller and a callee. The most common forms of API abuse are caused by the caller failing to honor its end of this contract. For example, if a program fails to call chdir() after calling chroot(), it violates the contract that specifies how to change the active root directory in a secure fashion. We discuss this and other APIs related to privilege management in Chapter 12, “Privileged Programs.” Another example of abuse is relying upon a DNS lookup function to return reliable identity information. In this case, the caller abuses the callee API by making an assumption about its behavior (that the return value can be used for authentication purposes). See Chapter 5 for more. The caller-callee contract can also be violated from the other side. For example, if a Java class extends java.util.Random and returns nonrandom values, the contract is violated. (We discuss random numbers in Chapter 11, “Privacy and Secrets.”)
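For the chroot() example, honoring the contract looks roughly like the sketch below (the jail path is illustrative, and a real privileged program would also drop its privileges, as Chapter 12 discusses):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch of honoring the chroot() contract. */
int main(void) {
  if (chroot("/var/jail") != 0) { /* requires privilege; path is illustrative */
    perror("chroot");
    return EXIT_FAILURE;
  }
  if (chdir("/") != 0) {          /* without this call, the working directory */
    perror("chdir");              /* may still point outside the new root     */
    return EXIT_FAILURE;
  }
  /* ... proceed with the new root directory in effect ... */
  return EXIT_SUCCESS;
}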

3. Security Features

Even though software security is much more than just security features, it’s important to get the security features right. Here we’re concerned with topics such as authentication, access control, confidentiality, cryptography, and privilege management. Hard-coding a database password in source code is an example of a security feature (authentication) gone wrong. We look at problems related to managing these kinds of passwords in Chapter 11. Leaking confidential data between system users is another example (also discussed in Chapter 11). The topic of writing privileged programs gets a chapter of its own (Chapter 12).
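As a sketch of the hard-coded password problem, the fix is to keep the secret out of the source code entirely; here we assume, purely for illustration, that the deployment supplies the password through an environment variable or an equivalent protected configuration mechanism:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: the database password lives outside the source and the binary. */
int main(void) {
  const char *dbPassword = getenv("DB_PASSWORD"); /* illustrative variable name */
  if (dbPassword == NULL) {
    fprintf(stderr, "DB_PASSWORD is not set; refusing to start\n");
    return EXIT_FAILURE;
  }
  /* ... hand dbPassword to the database connection routine ... */
  return EXIT_SUCCESS;
}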

4. Time and State

To maintain their sanity, programmers like to think of their code as being executed in an orderly, uninterrupted, and linear fashion. Multitasking operating systems running on multicore, multi-CPU, or distributed machines don’t play by these rules—they juggle multiple users and multiple threads of control. Defects rush to fill the gap between the programmer’s model of how a program executes and what happens in reality. These defects are caused by unexpected interactions between threads, processes, time, and data. These interactions happen through shared state: semaphores, variables, the file system, and anything that can store information. Massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft often contain time and state vulnerabilities because they allow hundreds or thousands of distributed users to interact simultaneously [Hoglund and McGraw, 2007]. The lag time between an event and the bookkeeping for the event sometimes leaves room for cheaters to duplicate gold pieces, cheat death, or otherwise gain an unfair advantage. Time and state is a topic throughout the book. For example, Chapter 5 points out that interrupts are input too, and Chapter 11 looks at race conditions in Java Servlets.
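A classic single-machine instance of this kingdom is a time-of-check/time-of-use race on the file system. The sketch below (the path and flags are illustrative) shows the racy check-then-use pattern alongside a tighter alternative that lets the operating system make the check and the use a single operation:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
  /* Racy: another process can swap /tmp/report for a symlink to a sensitive
     file after access() succeeds but before the file is actually opened. */
  if (access("/tmp/report", W_OK) == 0) {
    /* ... window of vulnerability here ... */
  }

  /* Tighter: skip the separate check, refuse to follow symlinks, and treat
     failure of the open itself as the answer to the "check." */
  int fd = open("/tmp/report", O_WRONLY | O_NOFOLLOW);
  if (fd < 0) {
    perror("open");
    return 1;
  }
  close(fd);
  return 0;
}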

5. Error Handling

Errors and error handling represent a class of API, but problems related to error handling are so common that they deserve a kingdom of their own. As with API abuse, there are two ways to introduce an error-related security vulnerability. The first (and most common) is to handle errors poorly or not at all. The second is to produce errors that either reveal too much or are difficult to handle safely. Chapter 8, “Errors and Exceptions,” focuses on the way error handling mishaps create ideal conditions for security problems.
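The sketch below shows the first and most common flavor, an ignored error return; the numeric user ID is illustrative, and the point is simply that a failed attempt to drop privileges must stop the program rather than let it continue with more authority than intended:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: check the return value of a security-critical call. */
int main(void) {
  if (setuid(1000) != 0) { /* dropping privileges can fail, for example */
    perror("setuid");      /* when a resource limit is hit              */
    exit(EXIT_FAILURE);    /* refuse to continue at high privilege      */
  }
  /* ... from here on, the process runs as the unprivileged user ... */
  return 0;
}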

6. Code Quality

Poor code quality leads to unpredictable behavior. From a user’s perspective, this often manifests itself as poor usability. For an attacker, it provides an opportunity to stress the system in unexpected ways. Dereferencing a null pointer or entering an infinite loop could enable a denial-of-service attack, but it could also create the conditions necessary for an attacker to take advantage of some poorly thought-out error handling code. Good software security and good code quality are inexorably intertwined.
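Here is a sketch of how an ordinary quality defect turns into a security issue (the helper function is our own illustration): omit the easily forgotten NULL check on malloc(), and an attacker who can push the process toward memory exhaustion gets a crash, and therefore a denial of service, on demand:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: the NULL check is the difference between a handled failure
   and an attacker-triggerable crash. */
static char *copyName(const char *name) {
  char *buf = malloc(strlen(name) + 1);
  if (buf == NULL) { /* the easily omitted check */
    return NULL;
  }
  strcpy(buf, name);
  return buf;
}

int main(void) {
  char *copy = copyName("alice");
  if (copy != NULL) {
    puts(copy);
    free(copy);
  }
  return 0;
}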

7. Encapsulation

Encapsulation is about drawing strong boundaries. In a Web browser, that might mean ensuring that your mobile code cannot be abused by other mobile code. On the server, it might mean differentiation between validated data and unvalidated data (see the discussion of trust boundaries in Chapter 5), between one user’s data and another’s (privacy, discussed in Chapter 11), or between data that users are allowed to see and data that they are not (privilege, discussed in Chapter 12).
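In code, a trust boundary might look like the sketch below (the structure and field names are invented for illustration): raw request data can enter the trusted structure only through a routine that validates it, so everything downstream can rely on that invariant:

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Sketch: only validated data crosses into TrustedRequest. */
struct TrustedRequest {
  char accountId[16]; /* invariant: holds digits only */
};

static bool acceptAccountId(const char *raw, struct TrustedRequest *req) {
  size_t len = strlen(raw);
  if (len == 0 || len >= sizeof(req->accountId)) {
    return false;
  }
  for (size_t i = 0; i < len; i++) {
    if (!isdigit((unsigned char)raw[i])) {
      return false; /* reject rather than repair */
    }
  }
  strcpy(req->accountId, raw);
  return true;
}

int main(void) {
  struct TrustedRequest req;
  puts(acceptAccountId("12345", &req) ? "accepted" : "rejected");
  puts(acceptAccountId("12; DROP TABLE", &req) ? "accepted" : "rejected");
  return 0;
}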

* Environment

This kingdom includes everything that is outside the source code but is still critical to the security of the product being created. Because the issues covered by this kingdom are not directly related to source code, we have separated it from the rest of the kingdoms. The configuration files that govern the program’s behavior and the compiler flags used to build the program are two examples of the environment influencing software security. Configuration comes up in our discussion of Web applications (Chapter 9) and Web Services (Chapter 10).

1.6 Secure Programming: Summary

Getting security right requires understanding what can go wrong. By looking at a multitude of past security problems, we know that small coding errors can have a big impact on security. Often these problems are not related to any security feature, and there is no way to solve them by adding or altering security features. Techniques such as defensive programming that are aimed at creating more reliable software don’t solve the security problem, and neither does more extensive software testing or penetration testing.

Achieving good software security requires taking security into account throughout the software development lifecycle. Different security methodologies emphasize different process steps, but all methodologies agree on one point: Developers need to examine source code to identify security-relevant defects. Static analysis can help identify problems that are visible in the code.

Although just about any variety of mistake has the theoretical potential to cause a security problem, the kinds of errors that really do lead to security problems cluster around a small number of subjects. We refer to these subjects as the Seven Pernicious Kingdoms. We use terminology from the Seven Pernicious Kingdoms throughout the book to describe errors that lead to security problems.
