Splunk Inc.'s Splunk Data Center Search Party

Tool uses search technology to speed data center problem resolution

Many data center problems are easy to solve once you know what's going on. The hard part is finding them in the gigabytes of data dutifully logged on a millisecond basis by all the hardware, databases and applications. Manually combing through all the tiers of log data to track down a transaction or problem is slow and expensive. This is where Splunk comes in: a tool that uses search technology to speed problem resolution.

"Companies have had this fire hose of data thrown at them," says Dana Gardner, an analyst at Interarbor Solutions. "Splunk whittles down this stream so they can exploit the data."

San Francisco-based Splunk was founded in 2003 by three friends -- Michael Baum, Erik Swan and Rob Das -- who were running large-scale infrastructures dealing with search technology. CEO Michael Baum, for example, was running Yahoo's e-commerce applications on more than 12,000 servers. As they discussed their jobs, they found that they were spending a lot of time and resources weeding through log file data with primitive tools. That kicked off a process that eventually led to Splunk.

Initially, they planned to add something to the hardware or application layers that would help system components talk to one another. This, however, would add to the system overhead, so they decided a better approach was to use search technology to give administrators easy access to the data that was already available.

"That's when it really got hard," says Baum. Although the developers had built search technology for companies like Yahoo and Infoseek, Web pages were a lot easier to index than the wide variety of data formats used for data logs.

Then there was the matter of establishing links between the different types of unstructured data. In Web search, the hyperlinks already existed, but not in the data center. So Splunk had to be able to not only access and index all the data in real time, but also establish relevant connections.
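To illustrate the idea -- and this is only a hypothetical sketch, not Splunk's actual implementation -- correlating unstructured events often comes down to extracting a shared identifier from lines in otherwise unrelated logs. The log lines and the txn= field below are invented for the example:

    import re
    from collections import defaultdict

    # Hypothetical lines from two separate log sources; the "txn=" field
    # is an assumed common identifier, not a real Splunk convention.
    app_log = [
        "2005-12-01 10:15:02 INFO txn=8841 checkout started",
        "2005-12-01 10:15:04 ERROR txn=8841 payment gateway timeout",
    ]
    db_log = [
        "2005-12-01 10:15:03 WARN txn=8841 slow query on orders table",
    ]

    def link_by_transaction(*sources):
        """Group events from heterogeneous logs by a shared transaction ID,
        supplying the 'hyperlinks' that raw log data lacks."""
        linked = defaultdict(list)
        for source in sources:
            for line in source:
                match = re.search(r"txn=(\d+)", line)
                if match:
                    linked[match.group(1)].append(line)
        return linked

    for txn, events in link_by_transaction(app_log, db_log).items():
        print(f"transaction {txn}:")
        for event in sorted(events):  # timestamps here sort lexicographically
            print(" ", event)

The hard part in practice, as Baum's team found, is that real logs rarely share a tidy common field, so the connections have to be inferred.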

"It took us quite a bit longer to develop the technology than we anticipated," Baum says.

Another challenge was to have the index updated in real time. After two years of development, a beta version was released. Further refinement based on user feedback led to Splunk's 1.0 release in December 2005.

Splunk indexes events by time, terms and relationships, and discovers connections between different kinds of events. Rather than having to go in and look at individual log files, administrators can go into the Web interface and perform a keyword search to find the relevant information in any log file.

They can also search by time or browse event relationships. The index is constantly updated so that an event will show up in a search within seconds of occurring.
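As a rough illustration of the approach -- a toy sketch in Python, not Splunk's engine -- an index over log events can map each term to the events containing it while keeping events ordered by timestamp, so keyword and time-range queries combine naturally and a new event is searchable as soon as it is added:

    import bisect
    import re
    from collections import defaultdict

    class LogIndex:
        """Toy event index: maps terms to events and keeps events ordered
        by timestamp, so searches can mix keywords with a time range."""

        def __init__(self):
            self.by_term = defaultdict(set)   # term -> set of event ids
            self.events = []                  # (timestamp, event id, raw line)

        def add(self, timestamp, line):
            """Index an event the moment it arrives (the real-time part)."""
            event_id = len(self.events)
            bisect.insort(self.events, (timestamp, event_id, line))
            for term in re.findall(r"\w+", line.lower()):
                self.by_term[term].add(event_id)

        def search(self, keyword, start=None, end=None):
            """Return raw lines matching a keyword, optionally in a window."""
            hits = self.by_term.get(keyword.lower(), set())
            return [line for ts, eid, line in self.events
                    if eid in hits
                    and (start is None or ts >= start)
                    and (end is None or ts <= end)]

    # Hypothetical events, timestamped as Unix seconds.
    index = LogIndex()
    index.add(1133431202, "ERROR payment gateway timeout on node web-07")
    index.add(1133431203, "INFO retry scheduled for node web-07")
    print(index.search("timeout"))

A production system layers far more on top -- persistence, distribution, field extraction -- but the shape of the problem is the same: index everything, then let search do the work.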

Jasmine Noel, an analyst at Ptak, Noel & Associates in New York, says companies with large, complex infrastructures will get the most benefit from using Splunk.

"Today, Splunk's sweet spot is knowledgeable IT experts who have a good idea of what they are looking for but are having difficulty finding it in the haystack of error logs and application dumps from a myriad of different servers," she says.

Like Google, "it automatically indexes everything, but its true power is unleashed when an experienced searcher is looking for something specific," says Noel.

Splunk is available either as a free download, called Splunk Server, or on an annual subscription basis for the full-featured Splunk Professional edition. Pricing ranges from US$2,500 for a daily data volume of 500MB to $10,000 for 10GB per day.
