This week I was sidetracked from my projects yet again by the need to investigate two security incidents. Both involved deleted files on servers that apparently had been compromised.
The first incident was more of a server configuration issue than a traditional security incident, but it still warranted my attention. It started when a customer sent a message addressed to our abuse e-mail alias saying that he noticed several suspicious files on our public file transfer protocol (FTP) server. So I logged into the server as an anonymous user, and sure enough, several directories had been created and populated with 4GB of unauthorized MP3 music files.
Even more alarming, I found a file named Commands that contained account names and associated passwords for support Web sites we use and for accessing internal servers in my company.
Those technical support Web sites require user IDs and passwords for access, and we pay tens of thousands of dollars per year for some of those accounts. It turns out that our product support group put the file there as a repository for what it considered shared, nonsensitive information. Over time, it apparently became a dumping ground for all sorts of information.
Unfortunately, we can't just make the anonymous FTP server go away. It's a valuable customer service, and our technical support team uses it extensively to offer patches and other support programs to our users. Our customers also use it to upload event logs and dump files for review by our technical support team. There are other methods for offering this type of service, but the anonymous FTP server has been the most effective and has been accepted by our staff and customers. So it's here to stay. But if it isn't configured properly, an anonymous FTP server can be easily abused and become a catalyst for legal, performance and security issues.
The MP3 files are one such legal issue, since they probably violate copyright laws, and an investigation could embarrass our company. Performance would also suffer if the FTP server were saturated with such files, or if too many users hit the system simultaneously trying to download them. And we don't want third parties logging into our servers or support services.
Our FTP server had been configured to let anonymous users create their own directories, with no limits on the size of uploaded files. Simple upload quotas and directory permissions would have prevented this incident from happening. I'm going to add guidelines for configuring the anonymous FTP server to our already published secure baseline procedures.
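To illustrate the kind of settings involved, here is a minimal sketch using vsftpd's directive names as an example (the column doesn't name our actual FTP daemon, so treat these as illustrative; other servers spell the equivalent options differently):

```
# Locked-down anonymous area -- vsftpd-style directives (illustrative)
anonymous_enable=YES
write_enable=YES              # writes allowed at all...
anon_upload_enable=YES        # ...so customers can still upload logs and dumps
anon_mkdir_write_enable=NO    # but anonymous users can't create directories
anon_other_write_enable=NO    # and can't rename, delete or overwrite files
anon_umask=077                # uploads aren't world-readable, so the server
                              # can't be used as a music swap site
anon_max_rate=65536           # throttle each anonymous session to 64KB/s
```

Note that vsftpd itself has no per-account disk quota; the "simple upload quotas" part would come from operating-system disk quotas applied to the account the anonymous FTP service runs as.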
The second incident involved malicious activity against a server in our certification lab. A systems engineer noticed that one of his source-code repository servers wasn't responding when he tried to access it using the Secure Shell program. To gain administrative access to the server, he had to use the server console. He then noticed that several key directories had been deleted, which accounted for his initial inability to access the server. Next, he started poking around in the log files.
I'm sure that this engineer wanted to determine the extent of damage and identify the individual responsible. But there are problems in doing what he did. First, the incident occurred about a month ago, and the engineer is just now reporting it. Second, by accessing files and writing to various log files on the system, he made it difficult to distinguish between legitimate activity and hacker activity. In this case, it wouldn't have mattered, though, because the hacker deleted almost every log file on the system.
As a result of this incident, we will now use a secure baseline to ensure that all of our systems, even in labs, are configured securely. We will also temporarily configure an intrusion-detection system sensor to watch the traffic on the lab network segment. Perhaps the hacker will try to gain access again. I will also suggest that all logs be redirected to a central, secure server.
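Redirecting logs to a central server can be as simple as a forwarding rule in each lab host's syslog configuration. A sketch for a classic Unix syslogd (the loghost name is a placeholder, and traditional syslogd requires tabs between the selector and the action):

```
# /etc/syslog.conf fragment on each lab machine
# forward everything of priority info and above to the central log server
*.info		@loghost.example.com

# keep a local copy of the most sensitive facilities as well
auth.*		/var/log/authlog
```

With a copy of every log line held on a separate, hardened machine, an intruder who wipes the local logs, as this one did, no longer destroys the only record of his activity.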
The problem with security incidents in a large, worldwide enterprise is that individuals don't know how to respond when they encounter hackerlike activity. They need to understand that certain actions must be taken immediately in the event of a suspected security breach that involves unauthorized server access. In some cases, for instance, the administrator should create a mirror image of the victimized system right away. That way, the evidence is preserved before the administrator makes the changes required to bring the system back online.
For example, the shell history file, which contains a record of the commands entered by the user, typically contains incriminating evidence of hacker activity. If the administrator has changed the system since the incident, it's difficult for the investigator to tell which activity can be attributed to the intruder.
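The mirror-image step described above boils down to a couple of commands. A sketch follows; to keep it self-contained it images a scratch file, but in a real incident the source would be the raw disk device (say, /dev/sda) and the image would go to separate, trusted media:

```shell
# Stand-ins for the victim disk and the evidence destination -- in a real
# incident SRC is a raw device and DEST lives on separate, trusted media.
SRC=$(mktemp)
DEST="$SRC.img"

# Give the demo "disk" some contents (4KB of random data).
dd if=/dev/urandom of="$SRC" bs=512 count=8 2>/dev/null

# bs matches the sector size; conv=noerror,sync reads past bad sectors and
# zero-pads them, so offsets in the image stay aligned with the disk.
dd if="$SRC" of="$DEST" bs=512 conv=noerror,sync 2>/dev/null

# Checksum both so the image can later be shown to match the original.
md5sum "$SRC" "$DEST"
```

The checksums are the important part for evidentiary purposes: recorded at imaging time, they let an investigator demonstrate later that neither the original nor the copy was altered.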
To address this problem, I've decided to put together an incident reporting program. The first part of the project will involve creating documents to assist both Unix and Windows NT administrators in evaluating their systems when they suspect that there has been a compromise. I will also provide awareness-training documents focused on incident response.
Finally, I will ask the database team to create a database and an associated Web-based front end to facilitate the submission of incident reports. The results can then be e-mailed to my team, and we can take appropriate action.