Cloud providers and customers are ignoring the importance of securing online services against distributed denial of service (DDoS) attacks at their own peril.
That’s the view of network security expert and Arbor Networks APAC solutions architect, Roland Dobbins, who argues that to date the availability aspect of information security has played a poor cousin to confidentiality and integrity.
“Almost all of the mindshare and almost all of the money spent on security focuses on confidentiality and integrity. That stuff is important, but it is also relatively easy,” he told Computerworld Australia ahead of his presentation at the 2011 Australian Network Operators Guild (AusNOG) meeting.
“Availability is difficult as it requires cross-functional knowledge of TCP/IP, of server operating behaviour, of services and application behaviour, and bringing all that together.”
Dobbins said the major security threat to availability — DDoS attacks — was particularly challenging, damaging and significant given the increasing reliance of organisations on public Cloud services.
“When we look at the Cloud it is inherently a multi-tenant infrastructure, so an attack against a single customer is actually an attack against all customers in that given Cloud — or at least a significant proportion of those customers — as they are sharing not only a common network infrastructure but a common compute infrastructure, a common memory infrastructure, a common storage infrastructure, etc,” he said.
“The potential for collateral damage is extremely high and that is why DDoS attacks are the number one threat to the Cloud computing security model.”
In light of this, Cloud providers needed to do more to ensure they had proper availability protection in place, while Cloud service customers needed to assure themselves of the level of availability protection offered by a potential provider.
Dobbins said these checks should include assessing whether the correct network, application and services architectures were in place and designed to best current practices, or BCPs, so as to be resilient in the face of DDoS attacks.
Cloud operators also needed to be able to demonstrate their complete visibility into all network traffic ingressing, egressing and traversing their Cloud. This would enable the detection, classification and traceback of any DDoS attack.
Further, automated reaction mechanisms — allowing the provider to respond in an appropriate manner based on the type of attack, and to be able to discriminate between legitimate traffic and attack traffic — also needed to be present.
Such mechanisms, Dobbins said, included Source-based Remote Triggered Black Hole (SRTBH), the infrastructure-based technique ‘Flowspec’, and intelligent DDoS mitigation systems (IDMS) which can determine whether incoming traffic is a legitimate user or a botnet.
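To illustrate the first of these mechanisms: in source-based RTBH, edge routers are pre-configured with a discard route for a "trigger" next-hop address and loose-mode unicast RPF; a trigger router then advertises the attacking source prefix with that next-hop, causing all traffic from the attack sources to be dropped at the network edge. The following Cisco IOS-style fragment is an illustrative sketch only — the addresses, tag value and route-map name are hypothetical and not drawn from the article.

```
! --- On every edge router (pre-provisioned) ---
! Discard route for the trigger next-hop (192.0.2.1 is a documentation
! address used here purely as an example).
ip route 192.0.2.1 255.255.255.255 Null0
!
! Loose-mode uRPF on the ingress interface: packets whose source
! resolves to Null0 are dropped.
interface GigabitEthernet0/0
 ip verify unicast source reachable-via any
!
! --- On the trigger router (during an attack) ---
! Tag the attacking source prefix (203.0.113.0/24, again illustrative)
! and inject it into BGP with the trigger next-hop, so loose uRPF on
! the edges discards all traffic FROM that prefix. This assumes static
! routes are redistributed into BGP via the route-map below.
ip route 203.0.113.0 255.255.255.0 Null0 tag 666
route-map RTBH-TRIGGER permit 10
 match tag 666
 set ip next-hop 192.0.2.1
 set community no-export
```

Because the drop happens on source addresses rather than the victim's address, S/RTBH can discard attack traffic without also blackholing the target — the key distinction from destination-based RTBH.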
Dobbins added that among potential Cloud customers, the security focus of request-for-proposal (RFP) or contract documents tended to be on maintaining the confidentiality of data hosted in tenanted environments — often to the detriment of availability.
“What enterprises need to add to those RFPs is a section on availability and specifically protection against DDoS attacks — what types of mechanisms does the Cloud service provider have to detect, classify and trace back DDoS attacks, and a detailed explanation of the capabilities the Cloud provider has and how it scales,” he said. “That way an enterprise can make an informed decision, along with all the other evaluation criteria, on whether a given Cloud provider has sufficient protection against DDoS attacks in place.”
Follow Tim Lohman on Twitter: @Tlohman