Even the best of us have bad days, but when Cisco has them, for whatever reason, they get reported widely. Here are our picks for the top seven bad-luck happenings in Ciscoland over the past year, ranging from the departure of a high-flying exec to a wireless LAN data flood to some major problems with Cisco VoIP equipment.
No. 7: The departure of Mike Volpi
Mike Volpi, formerly head of Cisco's Routing and Service Provider Technology Group, surprised industry watchers when he resigned early in February. Widely tipped as the heir apparent to CEO John Chambers, Volpi orchestrated many of the acquisitions that helped Cisco grow from a US$2 billion company to the US$30 billion behemoth it is today. Volpi also was responsible for developing switching products for data centers and distribution applications during his 13-year career at Cisco. Several key products were under Volpi's management, including the Catalyst 6500 and 4000 series switches, VPN and security services, and content networking.
Looking at how Cisco is putting more emphasis on its consumer networking and social networking strategies today, we wonder whether Volpi would have been a good fit for Cisco now. With his departure, Cisco's clear heir apparent is Chief Development Officer Charlie Giancarlo, who last week was given the job of heading up the new Cisco Development Organization, which basically oversees all technologies coming out of the company (and it's not clear how Giancarlo's new role will operate alongside that of the newly installed CTO, Padmasree Warrior). Also, the ambitious Volpi would have had to wait another five years for Chambers to vacate his position.
Volpi became CEO of online video company Joost Operations in June.
No. 6: Cisco.com's bad hair days
We've all suffered from the curse of technology, but when you're the leading networking company and customers and partners can't access your Web site, it doesn't look good, PR-wise. Cisco.com suffered two high-profile days of intermittent service this year (there may have been other days or hours when the Web site was inaccessible, but we didn't notice).
The first blackout of the year that we know of happened on Aug. 8, when Cisco.com was inaccessible for almost three hours and service was spotty the rest of the day. Cisco did a good job of updating users on the problem throughout the day through its Platform blog. It issued its final statement at 10 p.m. PT that day, blaming the problem on human error. It said: "The issue occurred during preventative maintenance of one of our data centers when a human error caused an electrical overload on the systems. This caused Cisco.com and other applications to go down. Because of the severity of the overload, the redundancy measures in some of the applications and power systems were impacted as well, though the system did shut down as designed to protect the people and the equipment. As a result, no data were lost and no one was injured."
However, the updates did prompt incredulous comments from Cisco Subnet readers, who couldn't believe Cisco didn't appear to have a proper disaster recovery plan in place.
The reason for the second site outage, which happened on Nov. 26, remains a mystery to this day. Earlier that day, readers began pointing out to us that Cisco.com had been inaccessible or very slow to load. Cisco issued a statement that afternoon, confirming that "some issues" had "impacted access to certain applications on the site." The site remained spotty the rest of the day, with readers commenting on the Cisco Subnet Community that they were still unable to access Cisco technical support documents or ordering tools. Cisco released a statement the next morning saying it had identified the cause, but it declined to reveal what that was.
Why was Cisco less than forthcoming with an explanation for its second outage when it had been fairly transparent during the first?