This week, I conducted a quick, routine vulnerability assessment of a sample of our production environment. I wasn't pleased with the results.
Ongoing vulnerability assessments are critical in determining my company's security posture at any given time.
My methodology is fairly straightforward. I use a combination of three scanning tools: Internet Scanner from Internet Security Systems Inc., Retina Network Scanner from eEye Digital Security Inc., and Nessus, a free network scanner available at www.nessus.org. With these powerful tools, I can get a comprehensive view of our IT infrastructure's vulnerabilities. I'm not suggesting that this method discovers 100 percent of all possible problems, but it gets close. And our other activities, such as application-level assessments, architecture reviews and code reviews, get us even closer.
All three tools are simple to use and can scan from a preconfigured list of IP addresses. For my assessment, I selected a sample from each functional area. For example, I picked Web servers, database servers, application servers, e-mail servers, Domain Name System servers, Lightweight Directory Access Protocol servers, domain controllers and a few firewalls, routers and network switches. I used the Windows Notepad applet to create a host file and entered about 40 IP addresses into the list.
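There's nothing fancy about the target list itself. Here's a sketch with placeholder addresses, not our real hosts; the commented-out line shows the batch-mode invocation of the old Nessus command-line client, from memory, so treat the exact syntax as an assumption:

```shell
# Build a plain-text target list, one IP address per line.
# These addresses are placeholders, not real production hosts.
cat > targets.txt <<'EOF'
10.10.1.15
10.10.1.16
10.10.2.40
EOF

# Sanity-check the list before kicking off a scan.
grep -c '.' targets.txt

# The Nessus command-line client can then read the list in batch mode, e.g.:
#   nessus -q nessusd-host 1241 user password targets.txt results.txt
```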
We have more than 300 servers in our production environment, but I scanned only 40 of them. I don't have time to assess and review every server, and I shouldn't have to do so, since all servers for a given function are identically configured. Or so I thought.
To achieve consistency, server administrators are supposed to use a standard jump-start image and then run postinstall scripts that install additional software and make security modifications, depending on the server's function. Our administrators are supposed to maintain and use these images whenever they build a new system, so an assessment of one type of resource should yield the same result for every server.
My assessment included five Oracle database servers, three of which had serious security holes. For example, one was running a vulnerable version of the Secure Shell (SSH) program. Our baseline includes the latest version of SSH, so unless someone had downgraded it, that Oracle server hadn't been built from the standard baseline image. To my surprise, many other servers weren't built with the jump-start image either. Instead, administrators had built them using a full install of the Solaris 8 operating system, which includes more than 600 packages, of which we use only a small number.
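Spot-checking for that kind of drift is quick. Here's a sketch of the version check; the banner string and the baseline value below are stand-ins, and in practice the banner would come from the host itself via ssh -V, which prints its version to stderr:

```shell
# Stand-in banner; in production it is captured on the host with:
#   banner=$(ssh -V 2>&1)
banner="OpenSSH_3.4p1, SSH protocols 1.5/2.0, OpenSSL 0x0090602f"

# Placeholder for whatever version the baseline image specifies.
BASELINE="OpenSSH_3.4p1"

# The version string is everything before the first comma.
installed=$(printf '%s\n' "$banner" | awk -F', ' '{print $1}')

if [ "$installed" = "$BASELINE" ]; then
    echo "matches baseline: $installed"
else
    echo "DEVIATES from baseline: $installed"
fi
```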
Alarmed, I called a meeting of the Unix administration and security groups. It turned out that the Unix group had created a new jump-start image based on the full complement of the operating system because it was having problems with an application that needed some shared libraries that weren't part of the original, secure image.
Furthermore, the developers of the application in question weren't sure which libraries were needed for proper operation, so the Unix team had decided it would be easier to create a new image that included the full Solaris install.
But the problem went beyond just one program: The Unix group's manager cited examples of other applications that didn't work because the needed libraries weren't available. The jump-start process was clearly broken.
After some debate, we agreed on a procedure. First, we reviewed our baseline image. To come up with a new one, we printed out a list of all the application packages deployed during a full Solaris 8 installation. We then crossed out all the packages we knew weren't needed, such as support for unused hardware types and software used to play music or render graphics. Our seasoned Solaris administrators easily went through this list in less than an hour.
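The mechanics of that cull look roughly like this. The package lines below are stand-ins for real pkginfo output, and the filter patterns are just examples of the categories we crossed off:

```shell
# On a full install, pkginfo lists every package as
# "category pkginst description". Dump it once, then cull.
# These three lines are a stand-in for real pkginfo output;
# in production: pkginfo > pkglist.txt
cat > pkglist.txt <<'EOF'
system      SUNWcsr    Core Solaris, (Root)
application SUNWauda   Audio applications
system      SUNWxwfnt  X Window System platform required fonts
EOF

# Drop anything matching categories we crossed off the printout.
grep -iv -e 'audio' -e 'fonts' pkglist.txt > keep.txt

# Each unwanted package can later be removed with: pkgrm <pkginst>
cat keep.txt
```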
We then installed our Web and database server software to determine which packages and libraries they needed. Most of the servers in our production environment fall into three tiers: Web, application and database. The Web and database environments are fairly static; we know what they need to function properly. The application servers are a different story: we never know in advance what they'll need. That's what was causing our problems.
Fortunately, Solaris includes utilities that can trace each application's system calls and determine which software and libraries it needs. Usually, we start with the baseline image and run these utilities in a lab environment, but the Unix group hadn't done this because of a lack of time and resources. My security team asked a Unix administrator to write a script we could use to run these utilities for each application and to create a report of all the needed Solaris packages and libraries.
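The Solaris utilities in question are presumably truss (system-call tracing) and ldd (shared-library listing). Here's a sketch of the lookup half of such a script; both input files are tiny stand-ins, with the truss log normally captured on the box itself and the contents database abbreviated from the real /var/sadm/install/contents layout:

```shell
# Stand-in for a truss log; in production it is captured with:
#   truss -f -t open -o truss.out /path/to/app
cat > truss.out <<'EOF'
2317:   open("/usr/lib/libc.so.1", O_RDONLY)            = 3
2317:   open("/usr/lib/libnsl.so.1", O_RDONLY)          = 3
2317:   open("/etc/myapp.conf", O_RDONLY)               = 4
EOF

# Stand-in for /var/sadm/install/contents, which maps each
# installed file to its owning package (last field).
cat > contents <<'EOF'
/usr/lib/libc.so.1 f none 0755 root bin 1158600 12345 958604478 SUNWcsl
/usr/lib/libnsl.so.1 f none 0755 root bin 90112 23456 958604478 SUNWcsl
EOF

# Pull every path the application opened, then look up its package.
awk -F'"' '/open\(/ {print $2}' truss.out | sort -u | while read f; do
    awk -v f="$f" '$1 == f {print $NF}' contents
done | sort -u > needed_pkgs.txt

cat needed_pkgs.txt
```

Files with no entry in the database (local configuration files, for instance) simply drop out of the report, which is what we want: the output is the minimal set of Solaris packages the application actually depends on.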
Now we have a static Web and database tier and an application tier that starts with a better baseline jump-start build. We can then add or remove any components based on each application's needs.
We hope we can live with this methodology until the security group can find the time to do some research on commercially available tools that might assist us in this area.