
Secure Programmer: Minimizing Privileges

Taking the fangs out of bugs

Secure programs must minimize privileges so that any bugs are less likely to become security vulnerabilities. This article discusses how to minimize privileges by minimizing the privileged modules, the privileges granted, and the time the privileges are active. The article discusses not only some of the traditional UNIX-like mechanisms for privileges, but also some of the newer mechanisms like the FreeBSD jail(), the Linux Security Modules (LSM) framework, and Security-Enhanced Linux (SELinux).

On March 3rd, 2003, Internet Security Systems warned of a serious vulnerability in Sendmail. All electronic mail is transferred using a mail transfer agent (MTA), and Sendmail is the most popular MTA, so this warning affected many organizations worldwide. The problem was that an e-mail message with a carefully crafted "from," "to," or "cc" field could give the sender complete (root) control over any machine running Sendmail as it’s commonly configured. Even worse, typical firewalls would not protect interior machines from this attack.

The immediate cause of the vulnerability was that one of Sendmail’s security checks was flawed, permitting a buffer overflow. But a significant contributing factor is that Sendmail is often installed as a monolithic "setuid root" program, with complete control over the system it runs on. Thus, any flaw in Sendmail can give an attacker immediate control over the entire system.

Is this design necessary? No; a popular competing MTA is Wietse Venema’s Postfix. Postfix, like Sendmail, does a number of security checks, but Postfix is also designed as a set of modules that minimize privilege. As a result, Postfix is generally accepted as a more secure program than Sendmail. This article discusses how to minimize privileges, so you can apply the same ideas to your programs.

Basics of minimizing privileges

Real-world programs have bugs in them. It’s not what we want, but it’s certainly what we get. Complicated requirements, schedule pressure, and changing environments all conspire to make useful bugless programs unlikely. Even programs formally proved correct using sophisticated mathematical techniques can have bugs. Why? One reason is that proofs must make many assumptions, and usually some of those assumptions aren’t completely true. Most programs aren’t examined that rigorously anyway, for a variety of reasons. And even if there are no bugs today (unlikely), a maintenance change or a change in the environment may introduce a bug later on. So, to handle the real world, we have to somehow develop secure programs in spite of the bugs in our programs.

One of the most important ways to secure programs, in spite of these bugs, is to minimize privileges. A privilege is simply permission to do something that not everyone is allowed to do. On a UNIX-like system, running as the "root" user, running as another user, or being a member of a group are some of the most common kinds of privileges. Some systems let you grant privileges to read or write a specific file. But no matter what, to minimize privileges:

  • Give a privilege to only the parts of the program needing it
  • Grant only the specific privileges that part absolutely requires
  • Limit the time those privileges are active or can be activated to the absolute minimum

These are really goals, not hard absolutes. Your infrastructure (such as your operating system or virtual machine) may not make this easy to do precisely, or the effort to do it precisely may be so complicated that you’ll introduce more bugs trying to do it precisely. But the closer you get to these goals, the less likely it will be that bugs will cause a security problem. Even if a bug causes a security problem, the problems it causes are likely to be less severe. And if you can ensure that only a tiny part of the program has special privileges, you can spend a lot of extra time making sure that one part resists attacks. This idea isn’t new; the excellent 1975 paper by Saltzer and Schroeder discussing security principles specifically identifies minimizing privileges as a principle (see Resources). Some ideas, such as minimizing privileges, are timeless.

The next three sections discuss these goals in turn, including how to implement them on UNIX-like systems. After that we’ll discuss some of the special mechanisms available in FreeBSD and Linux, including a discussion about NSA’s Security-Enhanced Linux (SELinux).

Minimize privileged modules

As noted earlier, only the parts of the program that need a privilege should have the privilege. This means that when you’re designing your program, try to break the program into separate parts so that only small and independent parts require special privileges.

If different parts must run concurrently, use processes (not threads) on UNIX-like systems. Threads share their security privileges, and a malfunctioning thread can interfere with all the other threads in a process. Write the privileged parts as though the rest of the program were attacking it: it might, someday! Make sure the privileged part does as little as possible; limited functionality means there’s less to exploit.

One common approach is to create a command-line tool with special privileges (such as being setuid or setgid) that has an extremely limited function. The UNIX passwd command is an example; it’s a command-line tool with special privileges to change the password (setuid root), but the only thing it can do is change passwords. Various GUI tools can then ask passwd to do the actual changing. Where possible, try to avoid creating setuid or setgid programs at all, because it’s very difficult to make sure that you’re really protecting all inputs. Nevertheless, sometimes you need to create setuid/setgid programs, so when it’s necessary, make the program as small and as limited as possible.

There are many other approaches. For example, you could have a small "server" process that has special privileges; that server allows only certain requests, and only after verifying that the requester is allowed to make the request. Another common approach is to start a program with privileges, which then forks a second process that gives up all privileges and then does most of the work.

Be careful how these modules communicate with each other. On many UNIX-like systems, command-line values and environment variable values can be viewed by other users, so they aren’t a good way to privately send data between processes. Pipes work well, but be careful to avoid deadlock (a simple request/response protocol, with flushing on both sides, avoids most deadlocks).
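
To make this concrete, here’s a minimal sketch (not production code) of the pattern just described: a privileged parent forks a child, the child permanently drops its privileges, and the two exchange a single request and response over a pair of pipes. The unprivileged user and group IDs here are placeholder "nobody"-style assumptions; a real program would look up a dedicated account, and the program must start as root for the drop to succeed.

    /* Sketch: privileged parent, unprivileged child worker, pipes between. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <grp.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define UNPRIV_UID 65534   /* assumption: a "nobody"-style account */
    #define UNPRIV_GID 65534

    int main(void) {
        int to_child[2], to_parent[2];
        pid_t pid;
        if (pipe(to_child) != 0 || pipe(to_parent) != 0) { perror("pipe"); exit(1); }

        pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }

        if (pid == 0) {                      /* child: the unprivileged worker */
            char buf[256], reply[300];
            ssize_t n;
            close(to_child[1]); close(to_parent[0]);
            /* Permanently drop privileges: supplementary groups, gid, then uid. */
            if (setgroups(0, NULL) != 0 || setgid(UNPRIV_GID) != 0 ||
                setuid(UNPRIV_UID) != 0) {
                perror("drop privileges"); _exit(1);
            }
            n = read(to_child[0], buf, sizeof buf - 1);
            if (n > 0) {                     /* one request, one response */
                buf[n] = '\0';
                int len = snprintf(reply, sizeof reply, "handled: %s", buf);
                write(to_parent[1], reply, len);
            }
            _exit(0);
        }

        /* parent: keeps its privileges, sends a request, reads the response */
        close(to_child[0]); close(to_parent[1]);
        write(to_child[1], "request\n", 8);
        close(to_child[1]);                  /* no more requests: avoids deadlock */
        char reply[300];
        ssize_t n = read(to_parent[0], reply, sizeof reply - 1);
        if (n > 0) { reply[n] = '\0'; printf("parent got: %s\n", reply); }
        waitpid(pid, NULL, 0);
        return 0;
    }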

Minimize privileges granted

Ensure that you only grant the privileges a program actually needs — and no more. The primary way that UNIX processes acquire privileges is the user and groups they run as. Normally, a process runs as the user and groups of the person who ran it, but a "setuid" or "setgid" program instead picks up the privileges of the user or group that owns the program file.
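
As a tiny illustration, the following program prints its real IDs (who ran it) and effective IDs (whose privileges it actually has). If you compile it, make the file root-owned, and mark it setuid (chmod u+s), the effective UID will be 0 while the real UID remains that of the invoking user:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void) {
        printf("real uid: %d  effective uid: %d\n", (int)getuid(), (int)geteuid());
        printf("real gid: %d  effective gid: %d\n", (int)getgid(), (int)getegid());
        return 0;
    }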

Sadly, there are still developers on UNIX-like systems who reflexively give programs "setuid root" privileges. These developers think they’ve made things "easy" for themselves, because now they don’t have to think hard about exactly which privileges their programs need. The problem is that, since these programs can do literally anything on most UNIX-like systems, any bug can quickly become a security disaster.

Don’t give all possible privileges just because you need one simple task done. Instead, give programs only the privileges they need. If you can, run them as setgid not setuid — setgid gives fewer privileges. Create special users and groups (don’t use root), and use those for what you need. Make sure your executables are owned by root and only writeable by root, so others can’t change them. Set very restrictive file permissions — don’t let anyone read or write files unless absolutely necessary, and use those special users and groups. An example of all this might be the standard conventions for game "top ten" scores. Many programs are "setgid games" so that only the game programs can modify the "top ten" scores, and the files storing the scores are owned by the group games (and only writeable by that group). Even if an attacker broke into a game program, all he could do would be to change the score files. Game developers still need to write their programs to protect against malicious score files, however.

One useful tool — though unfortunately a little hard to use — is the chroot() system call. This system call changes what the process sees as the "root" of the filesystem. If you plan to use it — and it can be useful — be prepared to spend time using it well. The "new root" has to be carefully prepared, which is complicated because the correct setup depends on the specifics of the platform and the application. You must be root to make the chroot() call, and you should quickly change to non-root afterward (a root user can escape a chroot environment, so if it’s to be effective, you need to drop that privilege). Also, chroot() doesn’t limit network access at all. It can be a useful system call, so it’s sometimes worth considering, but be prepared for effort.
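
Here’s a minimal sketch of the usual pattern, assuming a jail directory (the path below is a placeholder) has already been carefully prepared, and using placeholder unprivileged IDs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <grp.h>

    #define JAIL_DIR   "/var/jail/myapp"   /* assumption: prepared beforehand */
    #define UNPRIV_UID 65534               /* placeholder unprivileged account */
    #define UNPRIV_GID 65534

    int main(void) {
        if (chroot(JAIL_DIR) != 0) { perror("chroot"); exit(1); }
        if (chdir("/") != 0)       { perror("chdir");  exit(1); }  /* don't keep a cwd outside the jail */

        /* Drop root immediately; a root process can escape a chroot. */
        if (setgroups(0, NULL) != 0 || setgid(UNPRIV_GID) != 0 ||
            setuid(UNPRIV_UID) != 0) {
            perror("drop privileges"); exit(1);
        }
        /* From here on, "/" means JAIL_DIR and we're no longer root. */
        return 0;
    }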

One often-forgotten tool is to limit resources, both for storage and for processes. This can be especially useful for limiting denial-of-service attacks:

  • For storage, you can set quotas (limits) on the amount of storage or the number of files, per user and per group, for every mounted filesystem. On GNU/Linux systems, see quota(1), quotactl(2), and quotaon(8) for more about this; quota systems aren’t available quite everywhere, but they are included in most UNIX-like systems. On GNU/Linux and many other systems, you can set "hard" limits (never to be exceeded) and "soft" limits (which can be temporarily exceeded).
  • For processes, you can set a number of limits, such as the number of open files, the number of processes, and so on. Such capabilities are actually part of standards (such as the Single UNIX Specification), so they’re nearly ubiquitous on UNIX-like systems; for more information, see getrlimit(2), setrlimit(2), getrusage(2), sysconf(3), and ulimit(1), and the sketch after this list. Processes can never exceed the "current limit," but they can raise the current limit all the way up to the "upper limit." Unfortunately, there’s a weird terminology problem here that can trip you up. The "current limit" is also called the "soft" limit, and the upper limit is also called the "hard" limit. Thus, you have the bizarre situation that processes can never exceed the soft (current) limit of the process limits — while for quotas you can exceed the soft limits. I suggest using the terms "current limit" and "upper limit" for process limits (never the terms "soft" and "hard") so there’s no confusion.
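
Here’s the sketch promised above: a small example of the process-limit API, reducing the current limit on open files. Note that rlim_cur is the "current limit" (confusingly also called "soft") and rlim_max the "upper limit" ("hard"); an unprivileged process may move rlim_cur anywhere up to rlim_max, but may only lower rlim_max.

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("getrlimit"); return 1; }
        printf("open files: current=%llu upper=%llu\n",
               (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

        rl.rlim_cur = 64;   /* new current limit: at most 64 open files */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("setrlimit"); return 1; }
        return 0;
    }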

Minimize privileges’ time

Give privileges only when they’re needed — and not a moment longer.

Where possible, use whatever privileges you need immediately and then permanently give them up. Once they’re permanently given up, an attack later on can’t try to exploit those privileges in novel ways. For example, a program that needs a single root privilege may get started as root (say, by being setuid root) and then switch to running as a less-privileged user. This is the approach taken by many Internet servers (including the Apache Web server). UNIX-like systems don’t let just any program open up the TCP/IP ports 0 through 1023; you have to have root privileges. But most servers only need to open the port when they first start up, and after that they don’t need the privilege any more. One approach is to run as root, open the privileged port as soon as possible, and then permanently drop root privileges (including any privileged groups the process belongs to). Try to drop all other derived privileges too; for example, close files requiring special privileges to open as soon as you can.
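
The following sketch shows that bind-then-drop pattern: open a privileged port while still root, then permanently become an unprivileged user. The IDs are placeholders; a real server would look up a dedicated account with getpwnam(). Note the order (supplementary groups, then group, then user) and the final check that the drop actually took effect:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <grp.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define UNPRIV_UID 65534   /* placeholder unprivileged account */
    #define UNPRIV_GID 65534

    int main(void) {
        struct sockaddr_in addr;
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); exit(1); }

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);              /* privileged port: needs root */
        if (bind(s, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("bind"); exit(1);
        }

        /* Permanently drop root: groups first, setuid() last. */
        if (setgroups(0, NULL) != 0 || setgid(UNPRIV_GID) != 0 ||
            setuid(UNPRIV_UID) != 0) {
            perror("drop privileges"); exit(1);
        }
        if (setuid(0) == 0) {                   /* paranoia: the drop must stick */
            fprintf(stderr, "privileges were not dropped!\n"); exit(1);
        }

        listen(s, 16);
        /* ... accept() and handle requests as the unprivileged user ... */
        return 0;
    }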

If you can’t permanently give up the privilege, then you can at least temporarily drop the privilege as often as possible. This isn’t as good as permanently dropping the privilege, since if an attacker can take control of your program, the attacker can re-enable the privilege and exploit it. Still, it’s worth doing. Many attacks only work if they trick the privileged program into doing something unintended while its privileges are enabled (for example, by creating weird symbolic links and hard links). If the program doesn’t normally have its privileges enabled, it’s harder for an attacker to exploit the program.
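
Here’s a short sketch of temporarily dropping privileges in a setuid-root program using seteuid(). The saved set-user-ID is what lets the process regain root later; it’s also what an attacker who takes over the process could abuse, which is why a permanent drop is better when you can manage it.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void) {
        uid_t invoker = getuid();   /* the real (invoking) user */

        if (seteuid(invoker) != 0) { perror("seteuid (drop)"); return 1; }
        /* ... risky work (e.g., opening user-supplied filenames) ... */

        if (seteuid(0) != 0) { perror("seteuid (regain)"); return 1; }
        /* ... brief privileged operation ... */
        return 0;
    }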

Newer mechanisms

The principles we’ve discussed up to this point are actually true for just about any operating system, and the general mechanisms have been very similar between just about all UNIX-like systems since the 1970s. That doesn’t mean they’re useless; simplicity and the test of time have their own advantages. But some newer UNIX-like systems have added mechanisms to support least privilege that are worth knowing about. While it’s easy to find out about the time-tested mechanisms, information about the newer mechanisms isn’t as widely known. So, here I’ll discuss a few selected worthies: the FreeBSD jail(), the Linux Security Modules (LSM) framework, and Security-Enhanced Linux (SELinux).

FreeBSD jail()
The system call chroot() has a number of problems, as noted above. For example, it’s hard to use correctly, root users can still escape from it, and it doesn’t control network access at all. The FreeBSD developers decided to add a new system call to counteract these problems, named jail(). This call is similar to chroot(), but strives to be both easier to use and more effective. Inside a jail, all requests (even root’s) are limited to the jail, processes can only communicate with other processes in that jail, and the system blocks the typical ways root users try to escape from the jail. A jail is assigned a specific IP address, and can’t use any others as its own address.
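
For illustration, here’s a minimal sketch against the original jail(2) interface (the early struct jail is shown; later FreeBSD releases extended the interface, so check jail(2) on your system). The path, hostname, and address are placeholders; the call requires root, and the jail directory must be prepared much like a chroot tree.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/param.h>
    #include <sys/jail.h>
    #include <arpa/inet.h>

    int main(void) {
        struct jail j;
        j.version   = 0;
        j.path      = "/var/jails/www";               /* the jail's "/" */
        j.hostname  = "www.example.com";
        j.ip_number = ntohl(inet_addr("192.0.2.10")); /* its one IP address */

        if (jail(&j) < 0) { perror("jail"); exit(1); }

        /* Even root is now confined to the jail; still, drop root here too. */
        return 0;
    }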

The jail() call is unique to FreeBSD, which currently limits its utility. But there’s a lot of cross-pollination between the various OSS/FS kernels. For example, a version of this jail has been developed for Linux using the Linux Security Modules (LSM) framework. And FreeBSD 5 has added a flexible MAC framework (from the TrustedBSD project), including a module with functionality essentially like SELinux’s. So don’t be surprised to see more of this in the future.

Linux Security Modules (LSM)
At the 2001 Linux Kernel Summit, Linus Torvalds had a problem. Several different security projects, including the Security-Enhanced Linux (SELinux) project, had asked him to add their security approach to the Linux kernel. Problem was, these different approaches were often incompatible. Torvalds didn’t have an easy way to determine which was best, so instead he asked the projects to work together to create some sort of general security framework for Linux. That way, administrators could install whichever security approach they wanted on their particular system. After some discussion with Torvalds, Crispin Cowan formed a group to create a general security framework. This framework was named the Linux Security Modules (LSM) framework, and is now part of the standard Linux kernel (as of kernel version 2.6).

Conceptually, the LSM framework is very simple. The Linux kernel still does its normal security checks; for example, if you want to write to a file, you still need write permission to it. However, any time the Linux kernel needs to decide whether access should be granted, it also asks a security module, via a "hook," whether the action is okay. This way, an administrator can simply pick the security module he wants and insert it like any other Linux kernel module. From then on, that security module decides what’s allowed.

The LSM framework was designed to be so flexible that it can implement many different kinds of security policies. In fact, several different projects worked together to make sure that the LSM framework is sufficient for real work. For example, the LSM framework includes several calls when internal objects are created and deleted — not because those operations might get stopped, but so that the security module can keep track of critical data. Several different analysis tools have been used to make sure that the LSM framework didn’t miss any important hooks for its purposes. This project turned out to be harder than many imagined, and its success was hard-won.

The LSM framework made a fundamental design decision that’s worth understanding: it was intentionally designed so that almost all of its hooks would be restrictive, not authoritative. An authoritative hook makes the absolute final decision: if the hook says a request should be granted, then it’s granted no matter what. In contrast, a restrictive hook can only add additional restrictions; it can’t grant new permissions. In theory, if all LSM hooks were authoritative, the LSM framework would be more flexible. One hook, named capable(), is authoritative — but only because it has to be to support normal POSIX capabilities. Making all the hooks authoritative, however, would have required many radical changes to the Linux kernel, and there was doubt that such changes would be accepted.

There were also many concerns that even the smallest bugs would be disastrous if most hooks were authoritative; while making the hooks restrictive meant that users would be unsurprised (no matter what, the original UNIX permissions would still normally work). So the LSM framework developers intentionally chose the restrictive approach, and most of its developers decided that they could work within the framework.
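
To give a flavor of the interface, here’s a heavily simplified, do-nothing security module sketched against the early 2.6-era LSM API (this kernel interface has changed repeatedly since, so treat it as illustrative only). It shows the restrictive design: a hook can deny an access that would otherwise succeed, but returning success grants nothing the kernel hasn’t already allowed.

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/security.h>

    static int mymod_file_permission(struct file *file, int mask)
    {
        /* The kernel's normal permission checks have already passed by
           the time this hook runs.  Return -EPERM to add a further
           restriction; return 0 to impose no extra restriction. */
        return 0;
    }

    static struct security_operations mymod_ops = {
        .file_permission = mymod_file_permission,
    };

    static int __init mymod_init(void)
    {
        return register_security(&mymod_ops);
    }

    static void __exit mymod_exit(void)
    {
        unregister_security(&mymod_ops);
    }

    module_init(mymod_init);
    module_exit(mymod_exit);
    MODULE_LICENSE("GPL");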

It’s important to understand some of the LSM framework’s other limitations, too. The LSM framework is designed to support only access control, not other security issues such as auditing. By themselves, LSM modules can’t log all requests or their results, because they won’t see them all. Why? One reason is that the kernel might reject a request without ever calling an LSM module, which is a problem if you wanted to audit the rejection. Also, due to concerns about performance, some proposed LSM hooks and data fields for networks were rejected for the mainline kernel. It’s possible to control some network accesses, but not enough to support "labelled" network flows (where different packets have different security labels handled by the operating system). These are unfortunate limitations, but they aren’t fundamental to the general idea; hopefully the LSM framework will be extended someday to eliminate them.

Still, even with these limitations, the LSM framework can be very useful for adding limits to privileges. Torvalds’ goals were essentially met by the LSM framework: "I’m not interested in the fight between different security people. I want the indirection that gets me out of that picture, and then the market can fight out which policy and implementation actually ends up getting used."

So, if you want to limit the privileges you give your programs on Linux, you could create your very own Linux security module. If you want to impose truly exotic limitations, that may be necessary — and the nice thing is that it’s possible. However, this isn’t trivial; no matter what, you’re still writing kernel code. If possible, you’re better off using one of the existing Linux security modules rather than writing your own. Several LSM modules are available, but one of the most mature is the Security-Enhanced Linux (SELinux) module, so let’s look at that.

History of Security-Enhanced Linux (SELinux)
A little history will help you understand Security-Enhanced Linux (SELinux) — and the history is interesting in its own right. The U.S. National Security Agency (NSA) has long been concerned about the limited security capabilities in most operating systems. After all, one of its jobs is to make sure that computers used by the U.S. Department of Defense are secure against determined attackers. The NSA found that most operating systems, including Windows and most UNIX and Linux systems, implement only "discretionary access control" (DAC) mechanisms. DAC mechanisms determine what a program can do based only on the identity of the user running the program and ownership of objects like files. The NSA considered this a serious problem, because by itself DAC is a poor defense against vulnerable or malicious programs. Instead, the NSA has long wanted operating systems to also support "mandatory access control" (MAC) mechanisms.

MAC mechanisms make it possible for a system administrator to define a system-wide security policy, which could limit what programs can do based on other factors like the role of the user, the trustworthiness and expected use of the program, and the kind of data the program will use. A trivial example is that with MAC, users can’t easily turn "Secret" into "Unclassified" data. However, MAC can actually do much more than that.

The NSA has worked with operating system vendors over the years, but many of the vendors with the biggest markets haven’t been interested in incorporating MAC. Even the vendors who have incorporated MAC often do it as "separate products," not their normal product. Part of the problem was that old-style MAC just wasn’t flexible enough.

NSA’s research arm then worked to make MAC more flexible and easier to include in operating systems. It developed prototypes of its ideas using the Mach operating system, and later sponsored work extending the "Fluke" research operating system. However, it was hard to convince people that the ideas would work on "real" operating systems, since all this work was based on tiny "toy" research projects. Few could even try out the prototypes to see how well the ideas worked with real applications. The NSA couldn’t convince proprietary vendors to add these ideas, and it didn’t have the right to modify proprietary operating systems. This isn’t a new problem; years ago DARPA tried to force its operating system researchers to use the proprietary operating system Windows, but encountered many problems (see Resources).

So, NSA hit upon an idea that seems obvious in retrospect: take an open source operating system that’s not a toy, and implement their security ideas to show that (1) it can work and (2) exactly how it can work (by revealing the source code for all). They picked the market-leading open source kernel (Linux) and implemented their ideas in it as "security-enhanced Linux" (SELinux). Not surprisingly, using a real system (Linux) made the NSA researchers deal with problems they hadn’t had to deal with in toys. For example, on most Linux-based systems, almost everything is dynamically linked, so they had to do some subtle analysis about how programs are executed (see their documentation about the "entrypoint" and "execute" permissions for more information). This has been a far more successful approach; far more people are using SELinux than ever used the previous prototypes.

How SELinux works
So how does SELinux work? SELinux’s approach is actually very general. Every critical kernel object, such as every filesystem object and every process, has a "security context" associated with it. The security context could be based on military security levels (like Unclassified, Secret, and Top Secret), on a user role, on an application (so a Web server could have its own security context), or on many other things. A process’ security context can be changed when it executes another program. Indeed, a given program could run in a different security context depending on which program called it, even if the same user started the whole thing.
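
If you’re curious what these contexts look like, the libselinux library exposes them to ordinary programs. This small sketch (link with -lselinux, on a system running SELinux) prints the context of the current process and of a file:

    #include <stdio.h>
    #include <selinux/selinux.h>

    int main(void) {
        char *con = NULL;

        if (getcon(&con) == 0) {                     /* this process's context */
            printf("process context: %s\n", con);
            freecon(con);
        }
        if (getfilecon("/etc/passwd", &con) >= 0) {  /* a file's context */
            printf("file context:    %s\n", con);
            freecon(con);
        }
        return 0;
    }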

System administrators then create a "security policy" that specifies what privileges are granted for which security contexts. When a system call is made, SELinux checks if all of the necessary privileges are granted — and if not, it rejects the request.

For example, to create a file, the current process’ security context has to have the privileges "search" and "add_name" for the parent directory’s security context, and it needs the privilege "create" for the (to be created) file’s security context. Also, that file’s security context must be privileged to be "associated" with that filesystem (so for example, a "Top secret" file can’t be written to an "Unclassified" disk). There are also network access controls for sockets, network interfaces, hosts, and ports. If the security policy grants all of those permissions, then the request is allowed by SELinux. Otherwise, it’s forbidden. All of this checking would be slow if done naively, but numerous optimizations (based on years of research) make it extremely quick.

This checking is completely separate from the usual permission bits in UNIX-like systems; you have to have both the standard UNIX-like permissions and the SELinux permissions to do something on an SELinux system. But the SELinux checks can do many things that are hard to do with traditional UNIX-like permissions. With SELinux, you could easily make a Web server that could only run specific programs and could only write to files with specific security contexts. More interestingly, if an attacker breaks into the Web server and becomes root, the attacker won’t gain control over the whole system — given a good security policy.

And there’s the rub: to use SELinux effectively, you need a good security policy for SELinux to enforce. Most users will need a useful starting policy that they can easily tailor. I began experimenting with SELinux several years ago; at that time, the starting policies were rudimentary and had many problems. For example, the early sample policy didn’t let the system update the hardware clock (I ended up submitting a patch to fix this). Devising good starting security policies is the kind of productizing that the NSA is hoping the commercial world will do, and it looks like that’s coming to pass. Red Hat, some Debian developers, Gentoo, and others are using the basic SELinux framework and creating initial security policies so users can immediately start using it. Indeed, Red Hat plans to have SELinux enabled for all users in Fedora Core, with simple tools that allow non-experts to tailor their security policies by selecting a few common options. Gentoo has a bootable SELinux LiveCD. These efforts should make it much easier to minimize program privileges without requiring a lot of coding.

Here’s where we come full circle. SELinux permits security transitions to occur only upon program execution, and it controls the permissions of whole processes (not portions of a process). So to use SELinux to its full potential, you need to decompose your application into separate processes and programs, with only a few small privileged components — which is exactly how to develop secure programs without SELinux. Tools like SELinux give you finer control over the privileges granted, and thus create a stronger defense, but you still need to break your program into smaller components so those controls can be at their most effective.

Conclusions

Minimizing privileges is an important defense against a variety of security problems. Because bugs are inevitable, you want to make it much less likely that the bugs will cause security problems. But at least some part of a secure program has to have code involving security, so you can’t just minimize privileges and ignore everything else. Even after you’ve minimized the parts that involve security, those parts still have to be correct. And to be correct, you’ll need to avoid common mistakes.

We’ve already covered one common mistake, buffer overflows, in a previous column (see Resources for links to previous installments of Secure programmer). Another common mistake is to allow "race conditions," including problems in the often-misunderstood /tmp directory. My next installment will discuss race conditions, including why the /tmp directory is so often a problem and what researchers are doing to fix it.

Resources


About the Author:

David A. Wheeler is an expert in computer security and has long worked in improving development techniques for large and high-risk software systems. Mr. Wheeler is the author of the book "Secure Programming for Linux and Unix HOWTO" and is a validator for the Common Criteria. Mr. Wheeler also wrote the article "Why Open Source Software/Free Software? Look at the Numbers!" and the Springer-Verlag book Ada95: The Lovelace Tutorial, and is the co-author and lead editor of the IEEE book Software Inspection: An Industry Best Practice. This article presents the opinions of the author and does not necessarily represent the position of the Institute for Defense Analyses. You can contact David at dwheelerNOSPAM@dwheeler.com (after removing "NOSPAM").
