Originally published April 5, 2002 at InformIT.com

I recently received a warning reporting that a popular program has yet another security problem because of a buffer overflow. I had to laugh, thinking back to Microsoft CEO Steve Ballmer’s retort, “You would think we could figure out how to fix buffer overflows by now.”

Criticism of these software problems, along with recent acts of terrorism, has many organizations, including Microsoft, rethinking security and making it part of the software development process rather than an afterthought. Unfortunately, in organizations that do not have a good record of programming discipline, adding such protective measures may not be easy.

The popular image of the programmer is a nerdy type with high-caffeine soda cans piled in the cubicle, churning out code that dazzles everyone. Standards? The programmer knows better than those out-of-date standards committees. Documentation? Programmers do not do documentation; programmers write code. Tell him what you want, and he will magically make it happen, even if he has to stay up all night, every night, for the next three weeks.

These attitudes are difficult to change, but not impossible. As the manager or director of a programming shop, you need to make information assurance part of the programming task. Consider developing policies and implementing them in increments, using my top four rules for secure development (described shortly).

Developing the Policy

Before you implement any procedures, you need to have your policies in place so that you have a goal to work toward. Policies are not guidelines or standards, nor are they procedures or controls. Policies describe security in general terms, not specifics; procedures supply the implementation details.

When you write policies, keep them general, so that the procedure can change should it fail. For example, there are many wonderful programs that help find potential buffer overflow problems or memory leaks. The policy should require that developers use one of these tools. But if you name a specific brand or version, you will have to update the policy every time the tool changes just to remain compliant, and every policy update must first undergo review. Instead, let the brand and version be part of the procedures. Procedures have a different review cycle, usually shorter than for policies, which allows them to be updated as necessary.
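As a hypothetical illustration of the split, the policy and procedure might read something like this (no specific product is implied):

    Policy: All C and C++ source code must be scanned for potential
    buffer overflows and memory leaks with an approved analysis tool
    before each release.

    Procedure: Run [approved tool and version] against the full source
    tree; file a defect report for every warning rated "high" or above.

When the tool or version changes, only the procedure needs revision.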

Software development policies usually have three major components:

  • Software development process
  • Testing and documentation
  • Revision control and configuration management

There are other concerns, such as third-party development, but you do not need to cover those areas when working with your own developers. For them, we concentrate on the software development process.

Secure Development Rule #1: No Buffer Overflows

Recently, a friend (call him Mike) who manages several programming projects heard that his company’s proprietary software had failed for a client because of a buffer overflow condition. The failure caused problems that “should never happen,” according to his programmers. When the problem was finally diagnosed, after three days of downtime, Mike issued a policy stating that developers would no longer be allowed to use functions that copy memory without bounds checking. Since most of the custom programming was written in C and C++, that meant functions like strcpy were banned. In fact, he ordered all development to stop and had the programmers go through every program to find and fix all unbounded memory copies. When they completed the examination of nearly one million lines of code, they had made over 1,000 changes. Over 1,000 possibilities for a buffer overflow condition!
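In C, the kind of change Mike demanded usually looks something like the following sketch. The function and buffer names are mine, for illustration only, not from Mike’s code:

    #include <stdio.h>
    #include <string.h>

    #define NAME_MAX_LEN 64

    /* Unbounded copy: overflows 'name' whenever 'input' is longer
       than the destination buffer. */
    void set_name_unsafe(char *name, const char *input) {
        strcpy(name, input);                    /* no length check */
    }

    /* Bounded copy: writes at most NAME_MAX_LEN - 1 characters and
       always NUL-terminates, so a long input is truncated instead
       of overflowing the buffer. */
    void set_name_safe(char name[NAME_MAX_LEN], const char *input) {
        snprintf(name, NAME_MAX_LEN, "%s", input);
    }

Truncation may still be the wrong answer for a given program, but at least it fails safely; the point of the policy is to force that decision to be made explicitly.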

Impressed by the exercise, I realized that it could not have been accomplished without proper configuration management (CM), so I asked Mike about his CM policies. He turned his chair to the nearby bookcase and handed me a binder. The first three pages described the full CM policies, replete with rules on management of software, system configurations, and the responsibilities of all administrators and developers. The remaining nearly 300 pages described the CM procedures in exhaustive detail.

Your organization may not need that many pages to describe your CM process, but having one has proven beneficial time and again.

Secure Development Rule #2: Check Status Returns

The two-week buffer overflow exercise just described allowed the developers and managers to examine their software and make another discovery: throughout the code, the returns from a number of functions and system calls were never checked. One programmer found that a function not only had potential buffer overflow problems, but that its failure to check the return from system calls might be the cause of a longstanding bug that made database connections fail unless the user was logged in as the administrator.

The workaround for that bug was to log in as the administrator before executing the function. Security managers and database administrators did not like this requirement, but allowed it for the sake of production. The programmer who did the analysis rewrote the function to check status returns and added proper error handling. When he finished, the three-year-old bug was no longer a problem.

In this case, the problem was writing to an open network connection when the database was on the local machine. According to many programmers, that is one of those “this cannot fail” system calls. In fact, checking the return codes revealed failure conditions that did arise, including some caused by operating system bugs. Mike changed his company’s software development policy to require that all system and function calls be checked for success or failure, and then he ordered the programmers to go through the software and check the return from every one.
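To make the rule concrete, here is a minimal sketch of the pattern, assuming a POSIX socket write; the function name and error handling are mine, for illustration:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a buffer to a network connection, checking every return
       code instead of assuming "this cannot fail." Returns 0 on
       success, -1 on failure. */
    int send_all(int sock_fd, const char *buf, size_t len) {
        while (len > 0) {
            ssize_t n = write(sock_fd, buf, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;                   /* interrupted; retry */
                fprintf(stderr, "write failed: %s\n", strerror(errno));
                return -1;                      /* report, don't ignore */
            }
            buf += n;                           /* partial write: advance */
            len -= (size_t)n;
        }
        return 0;
    }

Notice that the check also handles the partial write that a “cannot fail” mindset never considers.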

Before you ask: the company told its customers that it was delaying the next promised major release of the software and promised significant bug fixes to the existing software instead. Many of the customers were happy to wait and see what was delivered. With that, Mike set aside four weeks for the programmers to go through the software and fix these problems.

Not everything went as planned. Six weeks later, the programmers had fixed hundreds of cases where system calls and functions went unchecked and had added many lines of new error-handling code. In the end, they fixed dozens of outstanding bugs and closed many security holes, including several backdoors left by previous employees. They also removed every requirement for their programs to run as privileged users.
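I won’t pretend to know exactly how they removed those privilege requirements, but on a POSIX system the usual pattern is to acquire any privileged resource early, then drop privileges permanently and verify that the drop took effect. A minimal sketch, with illustrative names:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Drop from root to an unprivileged user and group, then verify
       that root privileges cannot be regained. */
    void drop_privileges(uid_t run_uid, gid_t run_gid) {
        if (setgid(run_gid) != 0) {     /* group first, while still root */
            perror("setgid");
            exit(EXIT_FAILURE);
        }
        if (setuid(run_uid) != 0) {
            perror("setuid");
            exit(EXIT_FAILURE);
        }
        if (setuid(0) != -1) {          /* regaining root must fail now */
            fprintf(stderr, "privilege drop failed\n");
            exit(EXIT_FAILURE);
        }
    }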

Secure Development Rule #3: Use the Security Features of the System

After their successes with fixing buffer overflows and solving a lot of problems by checking the results of system calls and functions, Mike’s company started to consider the features they wanted to add to the next release. They were brought back to reality when two very significant customers called and asked why two other longstanding access control problems had not been fixed.

After receiving this report from customer service, Mike followed up with personal visits to the customers, where he discovered that the software implemented its own access controls and ignored the security features offered by the operating system. Even though the documentation explained how to secure the data, the operators expected the software to follow the same rules they had configured in the operating system.

A review of the security features of their software showed that the custom controls duplicated controls offered by the supported operating systems; in some cases, the custom controls did not work as well as those offered by the operating system.

When he heard this, Mike went back to his three pages of policy to see whether developers were required to use the security features of the operating system instead of custom controls. To his surprise, there was no such policy. Since he had control over the policy, he added one and had his managers start another review, this time focused on areas where the software’s security features duplicated functions of the operating system.

With a select group of six programmers, the three-month review removed a number of custom security features from the program and placed the reliance on the underlying operating system. Aside from reducing the amount of code that had to be supported, administration of the software became easier because controls were set once for the entire system rather than for every function. The simplification also eliminated a lot of documentation for security features that were no longer needed.
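The shift is easy to see in code. Instead of consulting a custom, application-level ACL table before every operation, the program simply attempts the operation and lets the operating system’s file permissions make the decision. A minimal sketch, with an illustrative function name:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Open a data file for reading, relying on operating system file
       permissions (not a custom ACL) to decide who may access it. */
    FILE *open_report(const char *path) {
        FILE *fp = fopen(path, "r");
        if (fp == NULL) {
            if (errno == EACCES)
                fprintf(stderr, "access denied by OS permissions: %s\n",
                        path);
            else
                fprintf(stderr, "cannot open %s: %s\n",
                        path, strerror(errno));
            return NULL;
        }
        return fp;
    }

Administrators then control access with the same tools they use for everything else on the system, which is exactly what Mike’s customers expected.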

Of all the changes made during these reviews, these security changes had the biggest impact. Customers reported that administrators found it easier to set operating system controls to govern access. One customer also found that the new approach let them track who had seen certain documents through the auditing features of the operating system, a capability that surprised Mike’s company because no customer had ever requested it as a feature. Now more than half of them do this type of tracking.

Secure Development Rule #4: Code Reviews

Over the six months of reviews, rewrites, and fixes, Mike’s company learned that among other quality-assurance measures, doing code reviews provided a way to prevent some of these problems. As he was glowing over the concept, I reminded him that code reviews were always part of the government’s review of software used in a national security environment.

In the succeeding months, Mike wrestled with creating the procedures to do code reviews. We had several discussions that ranged from ideal academic theories to what we had seen as industry standards, which we both agreed were lacking. In fact, the only experience with code reviews between us was one project I had been involved with a few years earlier for the U.S. Department of Defense.

After a lot of research, we outlined what had made their recent reviews a success. The key factor was that the reviewers had never been involved with developing the part of the product they reviewed. For example, the server source code was reviewed by the programmers who wrote the desktop applications, while the source code for the desktop applications was reviewed by the web programmers. Although all the programmers were involved with the overall product, that involvement did not taint their objectivity, and the reviews still benefited from their experience. The reviewers were able to bring a new perspective to the programs they examined.

One problem with this exercise was that, after devising a plan, Mike discovered that his company did not have the resources to do this type of code review on a regular basis. During the earlier reviews, his resources had been focused entirely on reviewing and fixing problems. When it came to reviewing the security mechanisms used by their software, the effort was limited in both scope and resources while the rest of the development staff worked on new releases.

My first three rules can be enforced during the software development cycle. Programmers can be trained, even cajoled, into following them, and software tools exist to help find bad programming practices and enforce the rules. But code reviews are usually not part of the software development cycle. They require additional resources, additional planning, additional money, and additional time to market, and both money and time affect the amount of resources available for the effort.

Market and resource concerns overshadowed Mike’s effort, but he had the recent successes to support his case. After negotiations among the internal customers affected by the decision, including marketing, a plan was created to move code reviews from the development staff to the quality assurance (QA) environment. Making the reviews part of QA ensures the independence of the process, something the marketing people can use to show the company’s advantage over its competitors.

When software is in the QA cycle, it has been removed from the development environment and can be reviewed without the pressure to hurry up and finish so that the next great feature can be programmed. It also puts control over the resources in the QA department.

One caveat to doing this is that your organization has to fully support a complete QA department. The QA department can borrow resources from other parts of the organization, but it needs to be empowered to hire its own staff, including programmers who can perform these reviews. If your organization is small, you may not have the resources to support this type of QA department. That is not a problem as long as you have QA procedures in place: even a small group can include code review in its testing and QA cycles as part of the overall plan.

Conclusion

What Mike’s company did was a little extreme. Most organizations cannot take three to six months to review all their software and keep customers satisfied. Others lack the configuration management procedures to support this task. Whatever the reason, you should look at your organization’s software development infrastructure and take the steps to improve the programming process.

If the process does not have the management controls and procedures to support these efforts, start there. Having the support of management is important for communicating new procedures and showing commitment. Once that is in place, the development environment can be updated to enforce the four rules discussed in this article.

However you go about implementing these changes, remember that security is a process. This process has to be supported in all areas of the organization. Software development is no different. By starting with simple steps, you can help mitigate a good portion of the potential for security violations that stem from problems in software.
