Use of rpm and yum for application installation in a large installation environment

Mike McManus asked:

Our very large organization has developed a standard for hosting of applications which dictates that the application and all components on which it relies must reside in a dedicated application volume distinct from the operating system itself. For instance, if the application is written in Perl, we require a separate instance of Perl to be maintained within the application volume.

The reasoning behind this is that those components which are relied on by both the OS and by an application can and often do have conflicting version requirements, and forcing the application to maintain its own resources makes it much easier to patch the OS. Also, it ensures that application data and logs don’t get stuffed into the locations where the OS-based tools are (this is particularly critical with respect to httpd, for example).

Furthermore, unless there is a valid and documented technical reason, the application processes must run as an unprivileged user identity and not as root. We have workarounds in place in Linux so that processes such as web servers can be run as an unprivileged user and accept connections forwarded from the privileged ports (80 and 443) to the unprivileged ports they are listening to.
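For readers unfamiliar with that pattern, this kind of port forwarding is commonly done with an iptables REDIRECT rule. The following is a sketch only; the unprivileged ports (8080/8443) are illustrative, not the asker's actual configuration.

```shell
# Redirect inbound traffic arriving on the privileged HTTP/HTTPS ports
# to the unprivileged ports the application is actually listening on.
# (8080 and 8443 are hypothetical; substitute the real listener ports.)
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443

# Note: locally generated traffic never traverses PREROUTING, so
# connections from the host itself need a matching rule in the
# OUTPUT chain of the nat table as well.
```

These rules must be made persistent (e.g. via the iptables service on RHEL) or they are lost on reboot.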

For perspective, I’m a security professional in the Unix/Linux SA organization at my company and I work closely with the platform technical support specialists to maintain and enforce the standards I’ve laid out above. A large part of my responsibilities is vetting requests for privileged access via sudo, which are centrally managed. Our standard Linux is Red Hat, but Ubuntu and CentOS are also being considered for cloud environments.
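As an aside on vetting sudo requests: one common partial mitigation (not from the original exchange, just a sketch) is to grant sudo only for narrowly scoped commands rather than for rpm or yum in general. The group and package names below are hypothetical.

```shell
# /etc/sudoers.d/appteam -- hypothetical fragment; edit with visudo -f
# Allow the app team to install/update only their vendor's packages
# rather than run arbitrary yum commands as root.
%appteam ALL=(root) NOPASSWD: /usr/bin/yum install vendor-app*, \
                              /usr/bin/yum update vendor-app*
```

Caveat: sudoers wildcards also match spaces, so `vendor-app*` permits extra trailing arguments; rules like this are looser than they look and should be reviewed with that in mind.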

The problem is that we are currently being bombarded with requests from application teams to permit them (via sudo) to install Linux applications with rpm and yum, as vendors require this and aren’t able to provide any alternative means to install the applications. This conflicts with our standards in multiple ways:

  • The rpm and yum tools must be run with root privileges. This means that everything they do operates as root, so the resulting installation must often be tweaked after the fact to allow it to run as an unprivileged user.

  • Packages often specify that the components must be installed in the root volume rather than under a specified volume. Where the root of the package tree can be specified, often vendors insist that it remain unchanged because they have only tested it in the precise environment specified in the packages.

  • Finally, rpm and yum pull in dependencies from any repository available to the system, and although we require applications to use our Satellite repository for anything available from Red Hat, oftentimes the vendors provide their own repos which must be included for the software to work.
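On the repository point, yum can be told at invocation time to ignore every repository except an approved one. The repo ids below are hypothetical; check `yum repolist` for the real ones on a given system.

```shell
# Install only from the internal Satellite repo, ignoring any vendor
# repos configured on the box ("satellite-rhel7" is a hypothetical id).
yum --disablerepo='*' --enablerepo='satellite-rhel7' install some-package

# Alternatively, ship vendor repos disabled by default
# (enabled=0 in /etc/yum.repos.d/vendor.repo) and opt in
# per-transaction only when actually needed:
yum --enablerepo='vendor' install vendor-app
```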

My question is, how does one specify or restrict the use of rpm and yum in such an environment to ensure that package conflicts do not occur and system security patches can be safely applied, while not banning the use of these tools for application software altogether (which we have been doing until now and have discovered it is an exercise in futility)?

My answer:


Before we get into solutions, a few words about your company’s security standards. Put simply, they’re very difficult to work with, and so outdated as to be nearly irrelevant.

It’s obvious why they’re difficult to work with, so I won’t say any more about that.

As for being outdated, it’s clear that they do not take into account modern technologies such as virtualization, Linux capabilities, containers, SELinux, etc., all of which help to solve the same security problems in much more elegant and usable ways.

By way of example, binding httpd to a high port and then redirecting traffic to it with iptables, rather than simply letting it bind and then drop privileges, as it does by default, borders on paranoia and gains you virtually nothing. It also complicates using SELinux with httpd, since this sort of setup wasn’t envisioned in the design of the httpd SELinux policy.

In the same way, blindly requiring packages to stuff themselves into /opt or /usr/local gains you nothing: RPM already maintains the separation you require regardless of where packages are installed (unless the package is broken, which may be the case with third-party vendor packages, but such packages would refuse to install). Meanwhile, you lose standards compliance, possibly render the relevant SELinux policies unusable, and create a maintenance nightmare. Red Hat Software Collections is designed along these lines, and while it has some usability issues, building your own packages to this design could be a stop-gap measure while you work on the real issues.
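To make the relocation point concrete: RPM supports relocatable packages, and you can query whether a given package permits it before deciding anything needs to move. The package filename and prefix below are hypothetical.

```shell
# Show the relocatable prefixes a package declares, if any.
rpm -qp --queryformat '%{PREFIXES}\n' vendor-app-1.0-1.x86_64.rpm

# Install a relocatable package under an application volume instead of
# its default prefix (this fails if the package is not relocatable).
rpm -ivh --prefix=/opt/vendor-app vendor-app-1.0-1.x86_64.rpm

# Software Collections take a similar approach: everything lives under
# /opt/rh and is activated per-process, e.g.:
scl enable rh-perl524 -- perl -v
```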

The biggest problem I see, though, is maintaining a “big iron” sort of server, or servers, on which everyone’s applications run side-by-side. This alone introduces its own security issues, which is probably the origin of these “security practices.” Virtualization is quite mature at this point and simply separating applications into their own VMs, e.g. with KVM on RHEL 6 or RHEL 7, will eliminate the need for the majority of these “security practices.”

Along those lines, since you almost certainly have a very large number of applications, creating a private cloud with OpenStack is probably going to be your best bet in the short to medium term. These would use RHEL 7 hosts and run RHEL 7, 6 and maybe even 5 guests as you probably have a bunch of those still alive and kicking. It would also give you a platform to experiment safely with new applications and operating systems, as well as allocate resources more easily by business unit, department, etc.

If virtualization is too heavyweight for some things, then move to containers (e.g. LXC/Docker on RHEL 7). These are much lighter weight and can be stripped to virtually nothing but the application package itself, and then isolated with their own filesystem, network and uid/gid namespaces, effectively cutting them off from any other container except via whatever you happen to open in the respective firewalls. Adding SELinux to either KVM virtual machines or Linux containers provides a second layer of protection, and can be turned on with about one click.
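As a sketch of how small a container's exposed surface can be (the image name, uid and ports here are illustrative, not a recommendation for any particular application):

```shell
# Run the app as an unprivileged user in its own filesystem, network
# and PID namespaces; on RHEL 7 hosts Docker also applies SELinux
# (svirt) labels to containers by default.
docker run -d --name vendor-app \
  --user 1000:1000 \
  --read-only \
  -p 443:8443 \
  vendor/app:1.0
```

Only the single published port is reachable from outside the container; everything else is cut off unless explicitly opened.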

Plus, your company is full of developers who will love you forever if you start offering them OpenStack and/or Docker.

In short, it’s time to evaluate modern Linux distributions and the capabilities they provide, and reassess all of the security practices in light of those capabilities.

With respect to licensing, Red Hat now offers unlimited virtualization licenses, allowing you to run unlimited RHEL VMs/containers, and of course there’s also CentOS which will drop-in replace RHEL about 99.9% of the time. So there’s no excuse there.


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.