Out with the old, in with the new

OUR PROBLEM SCENARIO: The IT architecture at a major consumer goods manufacturer was a mess. Rapid growth over the last several years had forced a thinly spread corporate IT organization into reaction mode. What application and infrastructure guidelines the IT group had in place were often ignored by IT managers throughout the decentralized organization as they made decisions on the fly to suit the needs of their particular fiefdoms. In one year, the number of servers alone had doubled, with no consistency whatsoever on operating system selection.

Different Windows versions ran on 60 percent of the servers, Linux on 25 percent and Unix on 15 percent. Some users were beginning to complain that applications were running too slowly or crashing completely, while others expressed irritation about the growing number of passwords they had to remember. Here's a closer look at the IT environment:

Applications: The company's most critical applications are legacy, mainframe-based systems used by employees and customers for number crunching, as well as a handful of homegrown accounting applications that run on very old versions of Windows. These latter applications work well, so updating them or porting them to newer versions of Windows has never been a priority. Beyond these, the company has a typical assortment of business applications, from ERP and CRM to e-mail and corporate instant messaging. Some IT managers had favoured Java, others Microsoft's platform. Some developers have started experimenting with Web services.

Server infrastructure: NoName has a mish-mash of Sun and HP Unix servers and an assortment of Wintel servers at four data centres - one in New York, which is mirrored in Boston, and others in London and Sydney.

The number of servers has doubled to nearly 450, with roughly one-third of the older Wintel servers reaching five years of age and in need of being refreshed. The remainder have not yet hit the corporate five-year depreciation threshold.
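The refresh arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming a simple inventory of purchase dates; the hostnames and dates are hypothetical, and the five-year cutoff mirrors the corporate depreciation threshold described in the scenario.

```python
from datetime import date

DEPRECIATION_YEARS = 5  # corporate depreciation threshold from the scenario

def refresh_candidates(servers, today):
    """Return servers whose age meets or exceeds the depreciation threshold."""
    cutoff_days = DEPRECIATION_YEARS * 365
    return [name for name, purchased in servers
            if (today - purchased).days >= cutoff_days]

# Hypothetical inventory: (hostname, purchase date)
fleet = [
    ("ny-wintel-01", date(1999, 3, 1)),   # older Wintel box, past the threshold
    ("ldn-unix-02",  date(2002, 6, 15)),  # still within the depreciation window
]

print(refresh_candidates(fleet, today=date(2004, 9, 1)))
```

In practice the same check would run over an asset register of hundreds of entries, flagging roughly the one-third of older Wintel servers the scenario describes.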

Network infrastructure: NoName maintains a sluggish 1Gbit/sec Ethernet backbone between its New York headquarters and seven major offices around the country; desktop links operate at 100Mbit/sec. An IPSec VPN provides connectivity from smaller offices and international facilities. Wireless LANs are popping up at the offices, but have not been sanctioned by corporate IT.

Storage infrastructure: Rapid growth has led to a hodgepodge of server-attached storage arrays of varying capacity, with a Fibre Channel storage-area network (SAN) in the New York data centre.

Corporate IT knew it needed to turn IT into a services organization capable of enabling the business. It knew changes - big changes - were in order if it was to make that happen. But where to start?

Solution: Plan hard to play hard

Expert: David Blumanis, data centre infrastructure adviser, APC, Asia Pacific

The upshot: There's an eerie familiarity between the case of NoName Enterprises and some real-life organizations where I have worked. To figure out where to start when transforming an IT organization, it is tempting to talk about technology vision and trends, but this is not the first step.

In IT, as in any change management project, success is all about strategic planning, engaging management, understanding business objectives, motivating staff and ongoing communication.

Underpinning this, there are some fundamental IT principles - such as establishing standards - that will sustain an IT organization that truly works as a business enabler.

Phase 1 - information gathering and evaluation

The process begins by reviewing all areas of the business and how they use IT to improve operations. This includes meeting with relevant employees, listening to their views and analysing the findings.

Armed with this information, it is time to start regular engagement with the company's executive team.

Initially, this involves presenting a high-level assessment and some preliminary ideas on the strategy needed to fix the situation. Importantly, this includes:

  • timelines (allowing roughly six months to develop an agreed strategy)
  • an outline of the funding required, and
  • expectation setting: how long current performance levels will need to be endured - change takes time.

This is the time to ask the company's leaders where they see the business going over the next 10 years, because there's no use putting in place a strategy if it's going to be blown away by something happening in the not-too-distant future. It might seem common sense, but it's worth asking these questions at the outset.

Phase 2 - refining strategy and preparing detailed plans

Having gathered executive team support, it's time to bring together all the IT managers in strategic workshops. To gain their support, it is helpful to demonstrate why there is a need for change and how the team can work together to achieve it.

These sessions are also a forum to communicate the senior executive's business vision and cross-map it to the initial high-level IT strategy ideas.

What are these high-level IT strategy ideas? Based on NoName's situation, the obvious areas to start are:

  • adopting organization-wide standards;
  • a common architecture; and
  • one united IT team working to set guidelines.

This is the time to instil discipline and empower managers. Announce that the IT managers now form part of a core team responsible for facilitating change across the IT environment. In short, they can either come on the bus for the ride and influence positive outcomes or get off the bus, because change is the organization's mandate.

As a result of the strategy workshops, allocate key areas of ownership to individual managers; these include:

  • network architecture and standards review;
  • server and data architecture and standards review; and
  • business application and database futures to support the business.

Each manager is advised to gather vendor submissions from incumbent suppliers and other industry leaders. It is important that all internal technical staff are involved in the reviews. Firm management of vendors at this stage is critical to getting what is best for the business, not what is best for the vendor.

The expected output from these reviews is a list of recommendations, defining any options for new architectures and standards, a high-level transition plan, business risk impact of each option and the potential associated costs.

Each of the managers can be asked to develop recommendations relating to their area of responsibility and present the findings back to the IT change management team. These presentations enable the team to perform a peer cross-check and develop recommendations for business and IT strategy that incorporate a three- to five-year plan for each option.

From this, it is possible to evaluate a series of options - from fast and expensive to slow and least expensive, with options in between - and all with varying business impacts.

These plans are presented to the executive team in a workshop environment to seek its buy-in and approval of the preferred strategy. This is a business decision and not IT's alone - something many IT managers today fail to realize - if the IT organization is to be an enabler of the business.

Phase 3 - implementation and monitoring

With the strategy agreed, it is time to manage implementation, ensuring regular updates with the executive team keep everyone informed at all times.

Often during implementation, the management team needs to make hard decisions. These include organizational restructures, and in some instances staff changes, to support the strategy.

Budgets are set, formal RFQ/P processes are engaged (now that IT is clear on what the company needs), and the IT change program is communicated to the entire organization.

Ongoing communication is crucial; this includes meetings with local IT teams to ensure that every individual understands the important role he or she plays in the plan, as well as the technical detail.

In IT, like most forms of management, it is relevant to draw analogies from how sporting teams undergo change. A new coach and players come together. They plan, they put in place a strategy and they train hard for a few years. Eventually the hard work brings success on the scoreboard and everyone forgets the pain and errors of the past.

David Blumanis is a data centre consultant for APC Asia-Pacific and has some 24 years' experience in the IT industry. Previously, he managed some of the nation's largest IT operations at companies including Telstra, Unisys and Vodafone.

Solution: Outsourcing looks good

The expert: Jeff Kaplan, managing director, Thinkstrategies

The upshot: Corporate IT must take greater control of day-to-day operations, then consider outsourcing options for getting its infrastructure in new data centre shape.

Obviously, this major consumer goods manufacturer's current ad hoc and decentralized IT approach has failed to support the company's corporate objectives adequately and has led to a severe deterioration in the reliability of its IT infrastructure and application services. As a result, IT leadership must take greater control of day-to-day IT operations end to end, and create a common vision for IT's overall role within the company.

IT leadership's first step must be to establish a stricter set of corporate IT priorities, policies and procedures for governing operations. This means that many of the company's IT decisions must be more directly based on corporate objectives. It also means that IT decisions must become more centralized to ensure better coordination and greater cost-savings.

Centralization might not sit well with IT staffers who previously have been allowed plenty of freedom to make their own decisions and to operate independently or with business units that had been making IT decisions autonomously. Given potential political ramifications, the move to a more centralized operating model should be mandated and fully supported by the company's senior management, starting with the CEO and CFO.

Start with an audit

With its new authority, IT leadership must next initiate a thorough audit of IT and application service levels and an assessment of current and future business requirements. To ensure objectivity, the audit should be conducted either by an independent firm or by an internal team of IT and business representatives that reports its findings to senior management.

The audit should target specific performance problems that are hampering business success today and those that could adversely affect the company soon. It should determine which problems are directly related to technology issues vs those that might be a result of poor IT management practices. Given the escalating impact of the IT operating problems, the company needs to make important changes quickly. This audit process should take into account typical business cycles, but should not take more than 90 days.

Move on to outsourcing strategy

Given the company's limited resources, IT leadership then should develop an outsourcing strategy based on the specific priorities resulting from this audit and assessment. An outsourcing strategy should determine what roles outside solution providers will play in resolving the current problems, building an IT infrastructure and deploying applications that best satisfy the company's current business needs and meet its future corporate objectives. IT leadership needs to do this with the understanding that revamping the IT architecture and applications entirely on its own would not make good business sense given the rapidly expanding array of outsourcing or "out-tasking" alternatives.

While I don't recommend a wholesale transfer of the company's IT operations to an outsourcer because most of these deals fail, a growing number of managed services are available for addressing many of the company's problems. For instance, a managed VPN service could end sluggish performance on the 1Gbit/sec Ethernet backbone, and a managed storage service could satisfy the company's storage-area network needs and provide off-site back-up facilities for disaster recovery and business continuity.

Consider specific managed services

As they've matured, managed services have become beneficial for large-scale companies that want to offload specific IT functions. Independent managed service providers as well as a growing variety of hardware and software vendors, telecomms carriers and resellers offer these services.

The rapid evolution of managed services is being matched by a resurgence of hosted software services. The success of Salesforce.com among small-to-midsize businesses has attracted attention from larger companies that are fed up with traditional CRM and salesforce automation software packages. Such on-demand services are available not only from major enterprise players such as Siebel Systems and Oracle, but also from other 'Net-native software service providers such as NetSuite. This consumer goods manufacturer might well be able to take advantage of a managed supply-chain management service.

Standardized platforms

Whether the company updates its hardware and software on its own or leverages third-party resources, standardizing the hardware and software platforms should be a priority.

This not only should permit the company to achieve greater interoperability across geographies, but also should increase system and application reliability, and reduce management and maintenance costs. Standardization also would permit the consolidation of systems and platforms, which could result in greater performance levels. Standardization could enable the company to establish strategic sourcing agreements with key vendors, reducing procurement and support costs.

Finally, the company must remember that we are still in the midst of a buyer's market. In this environment, it has the luxury of selecting from a wide range of product and service alternatives. It also has the opportunity to negotiate favorable prices for these alternatives. The company should not make its choices based on price alone. But, it should be able to find good, economical solutions that address its short-term needs and long-term strategic objectives.

Solution: The mindset matters

The expert: David Nolan, senior VP of professional services and network solutions, Forsythe Technology

The upshot: This company needs a new mindset before it can venture into new data centre planning.

This organization decentralized its IT support for valid reasons - to respond quickly to acquisitions and other forms of growth, and presumably for better cost-justification and containment of IT spending as well as to align IT resources with business unit needs.

However, its decentralized approach has had costly long-term results. Lack of standards, availability problems, and security and compliance challenges have led to decreased productivity, increased risk of business interruption, and - in all likelihood - unnecessary spending.

The first challenge in cases such as this is not the choice of the right technology, but finding a more effective IT management mindset. As Einstein is reported to have said, "A problem cannot be solved by the same thinking that created it." Corporate IT groups can suffer as easily as siloed groups from an "incrementalist" mindset, especially when confronted with the enticements of emerging technologies.

Grouping the problems

The first step toward adopting a more effective mindset, and thus reducing cost and risk, is to examine how problems relate to one another, and what integrated set of solutions best addresses each group of issues. Based on many conversations with organizations like this fictitious manufacturer, we have found a range of IT issues that fall within six basic solution set areas.

  • IT portfolio management: the ability to tackle standardization issues of the sort described in the example requires knowing what you have. Inventory auditing and asset tracking provide that information. For any organization, but especially one with older, less reliable equipment, streamlined maintenance contract management can help minimize business interruption by assuring that equipment failures will be repaired within required timeframes. Together, these services - along with software licence management - constitute overall IT portfolio management. The information they provide facilitates easier short- and long-term management, as well as better ROI measurement. This enables more strategic investments. Given the challenges facing this organization, one would have to ask: do the IT executives really know what they have? Do they really know how individual technology assets support specific business functions?

  • Server optimization: technology and platform fashions aside, server, storage and network concerns come down to performance, availability, interoperability, manageability and budget. Recent proliferation, concerns about ageing equipment and a "mish-mash" of technologies can be red flags that the time has come to assess the IT environment. However, effective server optimization methodology requires stepping back to ask questions such as: how do we ensure that we meet our service-level agreements to our customers? Can we find a solution that is better, faster, less expensive and more secure?
  • Storage optimization: The same is true for storage and networks. In examining the need for storage optimization, deciding whether virtual storage (or a storage-area network) is the appropriate technology solution is secondary to determining if and how the current state of storage is affecting business performance, recoverability and compliance-readiness. Critical questions include: are our backups done properly? Can we really recover? Do we treat all data the same even though some data is far more valuable than other data? Do we know what data we have and where it is?
  • Network optimization: Network infrastructure is most often the gating factor to overall application performance and availability. And "new data centre" technologies such as IP communications, optical and wireless offer tremendous promise. However, organizations can run into trouble putting the cart before the horse in terms of when and why they implement the new technologies. Network optimization requires asking questions such as: what connectivity standards do we need to support the performance requirements of our different business units and locations? Could network convergence help us cut our communication costs? Is our network ready to support our upcoming IP telephony initiative? Thinking about networks also must lead to a consideration of security - though security goes well beyond the network.
  • IT risk management: compliance, security and business continuity and disaster recovery concerns together constitute IT risk management. The biggest mistake many companies make is to defer risk management initiatives until after they've "covered the basics" with regard to infrastructure. This is more risky and costly than an integrated approach. IT risk management is not a technology; it is the way a company builds and manages its enterprise and its processes to handle varied risk factors, from security threats and vulnerabilities to compliance audits to knowing the answers to questions such as: what would happen to our business operations, and bottom line, if the candle factory next door caught fire? What damage could a savvy, ill-intentioned hacker do?
  • Sourcing: to execute effectively based on the new mindset, our hypothetical organization also might want to look at sourcing options, asking the questions: how do we find the resources to manage and execute all of the initiatives required to fix our major problems and turn IT into a true services organization capable of enabling the business? How do we know our IT team is looking at the big picture? The first question here is not "Do we insource or outsource?" but "What resources will this require?"
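The portfolio-management idea in the first bullet - tying each asset to the business function it supports and watching maintenance contract expiries - can be sketched as a small register. The asset IDs, functions and dates below are hypothetical, and the 90-day warning window is an illustrative assumption.

```python
from datetime import date

# Hypothetical asset register: each record ties a device to the business
# function it supports and to its maintenance contract expiry date.
assets = [
    {"id": "srv-0412", "function": "ERP", "contract_expires": date(2004, 11, 30)},
    {"id": "srv-0187", "function": "CRM", "contract_expires": date(2005, 8, 1)},
    {"id": "rtr-0009", "function": "WAN", "contract_expires": date(2004, 10, 15)},
]

def expiring_contracts(register, today, warn_days=90):
    """Flag assets whose maintenance contracts lapse within the warning window."""
    return [a["id"] for a in register
            if 0 <= (a["contract_expires"] - today).days <= warn_days]

print(expiring_contracts(assets, today=date(2004, 9, 1)))
```

A register like this also answers the questions posed above: it records exactly what the organization has and which business function each asset supports.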

Recognizing the interdependencies

The next step in adopting a better mindset is examining the ways in which the solution areas are interrelated - rather like the six sides of a Rubik's cube.

Before making any investment in new data centre technologies, this company needs to invest in a new mindset. After all, if you can't find the time and money to do it right the first time, where are you going to find the resources to do it all over again?

Nolan oversees Forsythe's networking and security businesses, as well as its consulting service practices.

Solution: Consolidation planning

The expert: Gary Hull, director of sales ANZ, Raritan Australia

The upshot: The challenges senior managers face with NoName Enterprises' large and complex IT infrastructure environment are many and varied.

The ideal solution for NoName Enterprises is to be able to manage and control IT devices from a single location - whether its New York HQ or literally anywhere in the world. As NoName Enterprises' business grows, equipment increases in both volume and complexity, and it becomes harder and harder to find the right tools to manage expensive, high-tech equipment - in this case, to properly manage data centre environments. To add to those challenges, NoName's IT managers have to manage equipment across numerous locations, and it's not uncommon to find IT managers dealing with 'chaotic' situations where they can no longer effectively manage mission-critical data centre environments.

The awareness of having a suitable data centre infrastructure has grown significantly in recent years, and many managers of IT infrastructure now actively design appropriate long-term data centre infrastructure solutions. Deploying KVM (Keyboard, Video, Mouse) technology is recommended for NoName Enterprises as an essential part of the foundation of effective data centre design and management.

Rather than creating new and duplicated server group environments, NoName's IT managers can simply add new equipment into centralized environments and expand as needed.

The right infrastructure allows managers to access all equipment from any remote location, either 'over IP' or 'locally', and enables a 'lights-out' data centre, reducing unnecessary physical presence in the data centre and minimizing security risk.

The consolidation of server access provides the ability to access servers with different operating systems and varying platforms, and even to access serial devices such as routers, HVAC, physical security and other serially controlled devices. NoName Enterprises' IT managers can literally access and control any kind of equipment within their heterogeneous data centre through one single interface. The result is no more logging onto multiple management solutions (both software and hardware), each with its own individual interface, and no more duplication of licensing costs.

The solution that managers of NoName Enterprises' IT infrastructure need to consider is one that is scalable, flexible and expandable enough to meet the future needs of a growing organization without the need to restructure its entire IT infrastructure. Most importantly, the solution should also offer the expandability to consolidate access to and control of multiple data centres with the same level of infrastructure complexity.

Server and network infrastructure

NoName's efforts to accommodate varying customer needs within its data centre environments have often resulted in a hybrid server environment with a range of mixed brands and mixed capabilities. Even though the equipment served a short-term purpose in satisfying customer needs at the time, it has created a complex challenge for effective, long-term server management. These management challenges can be reduced by consolidating and centralizing access, control and management of all equipment within these environments. For mission-critical environments and projects involving this level of IT infrastructure complexity, the decision becomes not so much a choice as an essential step and insurance for managers of IT infrastructure.

Remote data centre access

NoName's centrally located, high-level data centre is in desperate need of remote access to servers. Employees may be located at headquarters or in another state or country, but with Raritan's KVM-over-IP technology they can access the data centre and its 450 servers. NoName's data centre managers need connections that operate over a network - WAN, VPN or the Internet.

Applications integration

NoName's problem of a mish-mash of server applications and sprawling server farms needs a solution that controls these servers from one location; a KVM switch can control servers running any platform, including Windows, Linux, Unix, Novell, legacy systems and Sun Solaris.

Application integration also extends to remote access to servers outside the data centres; NoName's IT staff are often also responsible for managing servers and networking equipment located in branch offices. This remote office equipment can consist of a wide variety of devices, including:

  • routers/switches
  • firewalls
  • network appliances
  • HVAC controls
  • security systems
  • telecomms controllers and
  • headless servers (Unix, Linux and Solaris)

Usually, employees located at these remote sites do not have extensive IT expertise and therefore do not have the skill set to troubleshoot and manage the infrastructure. In some cases, data centre staffers use remote access software solutions to address remote infrastructure maintenance.

However, software solutions only work if the network is up and, in the case of servers, if the server OS is healthy. When the network is down, or when the server OS has crashed, onsite employees are often asked to "go to the server closet and press the reset button", and then if the router or server does not come back up, cost and time has to be incurred to travel to the site.
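The failure modes described above reduce to a simple decision: in-band software tools need both a working network and a healthy OS, while out-of-band hardware covers an OS crash. A minimal sketch, with illustrative action names:

```python
def remote_access_path(network_up: bool, os_healthy: bool) -> str:
    """Pick a management path based on which failure modes are present.

    In-band software tools require both a working network and a healthy
    server OS. Out-of-band hardware such as KVM-over-IP still works when
    the OS has crashed, but a total network outage forces an on-site
    visit (or an out-of-band dial-up link, where one exists).
    """
    if network_up and os_healthy:
        return "in-band software tools"
    if network_up:
        return "out-of-band hardware (e.g. KVM-over-IP)"
    return "on-site visit or out-of-band dial-up"

print(remote_access_path(network_up=True, os_healthy=False))
```

The third branch is exactly the "go to the server closet and press the reset button" case the paragraph describes.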

Storage infrastructure: Where to start?

Consolidate and control NoName's 450 servers and storage-area network (SAN). Integrating a series of 14 Dominion KVM switches and a Raritan Command Centre (which acts like the server overlord) across the storage infrastructure lets NoName see the big picture in terms of storage management, security and network performance or outages. Installing one 32-port switch, instead of several smaller switches, to handle all the equipment in a densely populated server rack not only simplifies storage management, but also preserves rack space and reduces associated costs, such as HVAC and electricity. Such switches also act as a foundation that protects NoName's IT investment, because they can scale easily to meet the "big changes" in store for data centres and storage infrastructure.

Solution: Information transformation

The expert: Stephen Nunn, partner, Accenture

The upshot: this company needs to undergo a two-phase transformation that will take it from its current state of disarray to a flexible, on-demand-style new data centre architecture.

Today's global organizations are inherently complex. Nowhere is this more evident than in an organization's data centre. The scene is often chaotic: data centres with hundreds (if not thousands) of servers, storage units, multiple databases and dozens of operating systems - all needing to work together seamlessly to satisfy 24x7 user demands and business process application requirements. The problems faced by this major consumer goods manufacturer come as no surprise.

This organization needs to take a holistic view of its infrastructure and move to a flexible but secure utility-style computing model through an information transformation program. The company's objectives should be to gain control of its assets quickly, to improve its ability to support the business strategy, to reduce costs and self-fund longer-term IT-enabled improvements that will drive greater business performance. Here's a two-phased approach:

Phase 1 - IT consolidation

This involves consolidating, standardizing and integrating a number of critical IT components including the data centres, networks, applications and workplace.

Doing this means starting with an infrastructure strategy and plan. The company can use such a plan as a blueprint for transforming the current environment to a utility-centric computing infrastructure through a number of structured and controlled releases.

One of the organization's key objectives should be moving to a smaller number of centralized and highly resilient data centres, with consolidation of most of its distributed servers within a smaller number of centralized servers. Typically we would expect a company such as this to reduce its overall server population by 30 percent.

Ideally the company would also undertake an application rationalization program. The program's intent would be to analyze the need for each application and to determine what additional initiatives can be undertaken.

The company should consider a Wintel rationalization program to categorize the servers and address consolidation and standardization by server category - for example, file rationalization or mail consolidation. The company should also consider virtualization software, such as that from VMware, the consolidation of business applications and the minimizing of remote servers. In addition, Unix-based servers should be categorized and analyzed for the type of applications being hosted and the development of a more consolidated environment. This would result in fewer platforms required for the same application portfolio.
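The categorization step described above - grouping servers by platform and role so each category can be sized for consolidation - can be sketched as a simple grouping pass. The hostnames and roles here are hypothetical:

```python
from collections import defaultdict

# Hypothetical server records: (hostname, platform, role)
servers = [
    ("ny-file-01",  "wintel", "file"),
    ("ny-file-02",  "wintel", "file"),
    ("bos-mail-01", "wintel", "mail"),
    ("ldn-app-01",  "unix",   "erp"),
]

def categorize(inventory):
    """Group servers by (platform, role) so each category can be sized for
    consolidation - e.g. several file servers onto one virtualized host."""
    groups = defaultdict(list)
    for host, platform, role in inventory:
        groups[(platform, role)].append(host)
    return dict(groups)

print(categorize(servers))
```

Run over the full 450-server estate, a grouping like this is the input to the file-rationalization, mail-consolidation and Unix-hosting analyses described above.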

For storage, this company should transition from its mixed environment to a tiered model that would enable it to provision, categorize and move data between tiers in a seamless manner. With tiered storage, the company would be able to maximize utilization and cost.
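A tiered model like the one described needs a placement policy: a rule that maps each dataset to a tier by business value and access pattern. The thresholds and tier names below are illustrative assumptions, not part of the source:

```python
def assign_tier(business_value: str, days_since_access: int) -> str:
    """Map a dataset to a storage tier; thresholds here are illustrative.

    tier-1: high-value, actively used data on fast (e.g. Fibre Channel) storage
    tier-2: lower-value or cooling data on cheaper disk
    tier-3: rarely touched data on archive media
    """
    if business_value == "high" and days_since_access <= 30:
        return "tier-1"
    if days_since_access <= 180:
        return "tier-2"
    return "tier-3"

print(assign_tier("high", 7))   # hot, critical data stays on the fast tier
print(assign_tier("low", 400))  # stale data becomes an archive candidate
```

Re-evaluating the rule periodically is what moves data between tiers "in a seamless manner" and keeps expensive capacity reserved for the data that earns it.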

A prerequisite to effective data centre consolidation is a WAN with sufficient capacity and resiliency so the IT infrastructure can be centralized while effective network connectivity for user access is maintained. If the company had not already done so, it should move to MPLS for the WAN - achieving not only cost savings but also flexibility in terms of capacity.

The company must also review its telephony strategy and consider an IP convergence program. Initially, it would use the MPLS network to provide toll bypass between PBXs and then, as appropriate, replace the telephony infrastructure with IP-enabled PBXs.

As part of any IT consolidation program, the desktop should be evaluated to see if alternate methods of providing desktop services, such as thin clients, could be provisioned. Standardization of the desktop would also be of high priority with a program to migrate all Wintel-based applications onto Windows 2003.

Along with the technology initiatives, the company must model the underlying IT organization around the consolidated IT infrastructure. The organization should be underpinned with robust IT Infrastructure Library-based operational processes and management tools that are able to monitor, alert and, wherever possible, implement remedial actions proactively, before incidents or problems occur.

Phase 2 - Infrastructure virtualization

This organization then would need to introduce a virtual layer into the newly consolidated and standardized environment. This layer - which would lie between the company's applications and its hardware - would capture a uniform snapshot of the IT environment and pool and connect IT resources that had been separated historically. On top of this virtualized platform, the organization could install software to help manage and provision hardware resources and to balance and consolidate workloads continuously. The organization would be able to:

  • Move applications among various processing resources within its data centres to optimize performance across the enterprise.
  • Allocate capacity and resources - such as utility-based data centres, mobile work scenarios, workload management and IP (voice and data) services - dynamically and automatically.
  • Reduce the complexity of managing hardware from multiple vendors and eliminate maintenance "downtime".
  • Implement a simplified interface between IT resources and business processes.
  • Measure provisioning time for new applications in seconds (not days) and response times for change requests in minutes.
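The dynamic-allocation idea above can be illustrated with a minimal sketch. The host names, CPU capacities and the first-fit-decreasing placement heuristic here are illustrative assumptions, not features of any particular virtualization product:

```python
# Hypothetical sketch: placing pooled application workloads onto hosts.
# Host names and CPU-unit capacities are invented for illustration.

def place_workloads(hosts, workloads):
    """First-fit-decreasing placement: assign each workload (CPU units)
    to the first host with spare capacity, largest workloads first."""
    free = dict(hosts)                      # host -> remaining CPU units
    placement = {}
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for host, spare in free.items():
            if spare >= demand:
                placement[name] = host
                free[host] = spare - demand
                break
        else:
            placement[name] = None          # no capacity: flag for scale-out

    return placement

hosts = {"ny-esx01": 16, "ny-esx02": 16}
workloads = {"erp": 10, "crm": 8, "mail": 6, "im": 2}
print(place_workloads(hosts, workloads))
```

In a real deployment this decision would be made continuously by the virtualization layer's management software rather than a one-off script, but the packing problem it solves is the same.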

A number of emerging security technologies, including identity management, will become increasingly critical in an infrastructure transformation program.

The result would be a flexible, highly secure, on-demand architecture that is aligned with the business.

Solution: Layer by layer

The experts: CDW technical specialists

The upshot: This company can effectively transform its IT operations into a strategic corporate asset by widely embracing new data centre technologies and principles.

New data centre principles help organizations work smarter and strive for an environment where any IT asset can be managed securely from anywhere. With that in mind, CDW has compiled some sample recommendations to help this fictitious consumer goods company transform its IT operations into a strategic corporate asset.

Applications

  • Move applications off desktop clients to server farms. Benefits would include secure remote access, improved network manageability and reliability, easier upgrades and new software deployments, enhanced support capabilities, and more network control for IT managers.
  • Consider a Citrix environment, which would be suitable for accomplishing these objectives.

Servers and storage

  • Consolidate the Wintel environment onto high-performance eight- or 16-way Intel servers running virtualization software.
  • Implement a Fibre Channel-based storage-area network in Boston to accept replication from the Fibre Channel-based SAN in New York. The Fibre Channel-based SANs would also act as back-end storage for the Intel servers running virtualization software. (While Fibre Channel-based SANs can be complex and expensive, performance benefits - especially compared with IP SANs - would outweigh costs.)
  • Use terminal service software, such as Citrix MetaFrame, for on-demand access and single sign-on password capabilities. The centralized architecture would provide the greatest efficiency for management of resources.
  • Deploy storage resource management software to clean up redundant and legacy data that the company no longer needs.
  • Back up to disk using a virtual tape library device in concert with backup software, then offload to tape. This would expedite backups.
  • Implement an information lifecycle management strategy to prioritize data so that it is stored on the most appropriate media for saving money and for regulation compliance.
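The lifecycle-management recommendation amounts to a tiering policy. A minimal sketch follows; the tier names, age thresholds and compliance-hold rule are assumptions for illustration, not settings from any vendor's ILM product:

```python
# Illustrative information lifecycle management policy.
# Tiers and thresholds are invented examples.

def choose_tier(age_days, under_regulation):
    """Map a dataset's age (and any compliance hold) to a storage tier."""
    if under_regulation:
        return "worm-archive"     # retained immutably for compliance
    if age_days <= 30:
        return "fc-san"           # hot data stays on the Fibre Channel SAN
    if age_days <= 365:
        return "sata-disk"        # warm data moves to cheaper disk
    return "tape"                 # cold data is offloaded to tape

print(choose_tier(10, False))     # fc-san
```

The point of such a policy is that the expensive SAN holds only data whose access patterns justify it, while regulated records land on media chosen for retention rather than speed.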

Bandwidth

  • Use multiple carriers that can each offer a QoS-based service-level agreement (SLA). This would allow carrier redundancy among mirrored environments, providing alternate backbone routes in the event of carrier failure.
  • Link each data centre to the corporate WAN via a global MPLS architecture to improve capacity, speed and quality of voice and data transmissions.
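The carrier-redundancy idea reduces to preferring the primary carrier while its SLA probes pass and failing over otherwise. A hedged sketch, with hypothetical carrier names and a stand-in health probe in place of real SLA monitoring:

```python
# Sketch of dual-carrier failover. Carrier names and the health
# probe are hypothetical stand-ins for real SLA monitoring.

def pick_carrier(carriers, is_healthy):
    """Return the first healthy carrier in priority order, else None."""
    for carrier in carriers:
        if is_healthy(carrier):
            return carrier
    return None

status = {"carrier-a": False, "carrier-b": True}   # probe results
print(pick_carrier(["carrier-a", "carrier-b"], lambda c: status[c]))  # carrier-b
```

In practice this decision is made by routing protocols and SLA probes on the WAN edge, not application code, but the priority-with-fallback logic is the same.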
Telephony/videoconferencing

  • Deploy a Tier-1 backbone between sites and offer a SLA for packet loss, jitter and latency.
  • Consolidate to a single PABX brand.
  • Establish a common PABX architecture that includes WAN upgrades that feature lowest cost routing and MPLS.
  • Implement a video bridge with distributed endpoints in the WAN.
Global recommendations

  • Deploy facility management software such as Datatrax Forseer or APC InfraStruXure to integrate UPS, generators, power strips, A/C and other facility devices into one manageable GUI.
  • Implement racks with vented front and rear doors, and use three-phase power strips to help minimize costs of balancing loads within racks and on power circuits. These racks would allow for temperature and humidity monitoring.
In the New York and Boston data centres:

  • Install a generator with an automatic transfer switch for long-term runtime.
  • Install a UPS on any outlying distribution switches and at the desktops.
  • Install an online/double-conversion UPS to fully condition power from the utility or generator, prevent interference with IP telephony from harmonics, and provide transitional uptime when a power outage hits.
  • Install a computer room A/C system in conjunction with a raised floor environment to address heat and humidity concerns.
In the Sydney and London data centres:

  • Use UPS at the desktop and remote switch levels.
  • Use generators depending on office size.
  • Use modular UPS for the server room and core switching/telephony equipment.
  • Use a raised floor and A/C solution as in New York.
Security

  • Create and enforce corporate security policies, including for wireless users and for exchanging corporate data with partners/suppliers.
  • Decide on a standard data centre operating system to enable a central management option for patching and maintaining systems.
  • Upgrade VPNs to current-generation service/security routers. This would allow for faster throughput and high availability for incoming multicarrier lines.
  • Deploy a high-throughput intrusion-prevention system to prevent bottlenecks in front of the server farm and malicious traffic from getting into CRM systems. Alternatively, add an intrusion-detection system blade on some switches to help maintain core speeds.
  • On the Web server side, use an application-intelligent firewall to offer improved traffic reporting and prevent Web attacks.
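Standardizing on a data centre operating system makes patch compliance checkable centrally. A minimal sketch of such a sweep; the server records, OS names and baseline builds are invented for the example:

```python
# Illustrative compliance sweep against an agreed standard build.
# Baseline versions and server inventory are assumed, not real data.

BASELINE = {"windows": "2003-sp1", "linux": "rhel4-u2"}

def non_compliant(servers):
    """Return the names of servers whose build differs from the baseline."""
    return [s["name"] for s in servers
            if BASELINE.get(s["os"]) != s["build"]]

servers = [
    {"name": "ny-app01", "os": "windows", "build": "2003-sp1"},
    {"name": "ld-db02",  "os": "linux",   "build": "rhel3"},
]
print(non_compliant(servers))   # ['ld-db02']
```

Centralized patch management tools do exactly this comparison at scale; the value of a standard operating system is that the baseline dictionary stays short.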