How to use application-defined automation tools to successfully deploy cloud apps

Application-defined automation provides an end-to-end deployment/provisioning process in which the application defines and configures the infrastructure

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

The cost and scalability benefits of cloud computing are appealing, but cloud applications are complex. This is because they typically have multiple tiers and components that utilize numerous technologies; as a result, applications can end up scattered across a variety of execution environments. To ensure successful cloud application deployment and management, the key is to use application-defined automation tools.

Today, many aspects of cloud application deployment are manual. As a result, IT organizations must deal with:

1. Application Proliferation: The number of applications is already huge and still growing; to meet enterprise demand, IT continues to roll out more applications.

2. Cloud Diversity: Execution environments are numerous, diverse and expanding as companies embrace software-defined data center and cloud technologies.

3. Business Velocity: The rate of feature updates is accelerating as competitors race to deliver digital services to their customers.

Consequently, IT finds itself in the gap between applications and infrastructures, struggling to keep up with the increasing demand for new applications and upgrades to current ones. The good news: IT is gaining ground using automation to handle the increasing demand and faster pace while not succumbing to application chaos.

Infrastructure automation a step forward

Historically, data center infrastructure comprised dedicated hardware resources that infrastructure teams manually provisioned in response to an application team’s request. The application team would then manually deploy and configure the application. If the infrastructure did not end up meeting the requirements of the application, a back-and-forth would ensue between the infrastructure and application teams. The process was inefficient, slow and error-prone. But in a slower-paced world where stability and cost control were primary IT concerns, it didn’t matter.

The software-defined data center is a major step forward. Here, the server, storage and network devices are virtualized and can be provisioned and configured programmatically through application programming interfaces (APIs). Provisioning and configuration is accomplished through scripting and workflows. This approach is faster, more flexible and less susceptible to errors. Given the application proliferation, cloud diversity and business velocity IT is managing, this is a significant improvement over manual processes.
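The scripted pattern the paragraph describes can be sketched in a few lines. The `CloudAPI` class below is a hypothetical stand-in for a real provider SDK, not any specific product's API; it shows the shift from manual, ticket-driven requests to provisioning and configuring resources through programmatic calls.

```python
# Hypothetical SDK stand-in: a real software-defined data center exposes
# comparable create/configure operations through its provider API.
class CloudAPI:
    def __init__(self):
        self.resources = {}

    def create(self, kind, name, **spec):
        """Provision a virtual resource (server, storage or network)."""
        self.resources[name] = {"kind": kind, "spec": spec, "state": "running"}
        return name

    def configure(self, name, **settings):
        """Apply configuration to an already-provisioned resource."""
        self.resources[name]["spec"].update(settings)

# Scripted provisioning: repeatable and faster than the manual
# back-and-forth between infrastructure and application teams.
api = CloudAPI()
web = api.create("server", "web-01", cpus=2, ram_gb=4)
db = api.create("storage", "db-vol", size_gb=100)
api.configure(web, attach=db)
print(api.resources["web-01"]["spec"])
```

Because the whole sequence is code, it can be versioned, reviewed and rerun identically, which is where the reduction in errors comes from.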

Despite its advantages, software-defined technology has shortcomings in helping IT keep up with the fast-moving application conveyor belt. First and foremost, infrastructure provisioning is still separate from application deployment. There are still separate infrastructure and application teams, each using different tools. The teams still face inefficient back-and-forth exchanges to resolve mismatches. Preconfigured infrastructure blueprints offer a partial solution. However, those blueprints are infrastructure-focused and the application must conform to the available infrastructures.

A second disadvantage is that APIs and best practices differ across virtualized data center, private cloud and public cloud environments, so the software-defined layer is cloud-specific. Consequently, both the infrastructure and application teams must create scripts or workflows specific to each target infrastructure. The teams need separate tools and skillsets for different infrastructure technologies, such as Amazon Web Services (AWS), Microsoft Azure, OpenStack, VMware and Cisco.
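In practice the fragmentation looks something like the sketch below. The two deployment functions are hypothetical simplifications, not real provider calls, but they illustrate why a team ends up maintaining a separate artifact, with different operations and parameters, for each target cloud.

```python
# Hypothetical, simplified per-cloud deployment scripts: each provider's
# API has a different shape, so neither function works on the other cloud.
def deploy_on_aws(app):
    # AWS-style parameters (illustrative only): instance type, region
    return f"run_instances(image={app}, instance_type='t3.medium', region='us-east-1')"

def deploy_on_openstack(app):
    # OpenStack-style parameters (illustrative only): flavor, networks
    return f"servers.create(name={app}, flavor='m1.small', nics=['net-1'])"

# The same application needs a distinct cloud-specific script per target.
print(deploy_on_aws("billing-app"))
print(deploy_on_openstack("billing-app"))
```

Every such script is another artifact to version, test and maintain, which is the cost the following paragraphs describe.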

Finally, because scripts and workflows are cloud-specific, the application gets locked into a specific cloud environment. While that may be good for service providers, it’s not good for IT because it might be necessary to move the application to another environment to meet changing business needs. Moving environments requires developing a different set of scripts or workflows, reducing agility and increasing the costs of version-controlling and maintaining automation artifacts.

The result is strategic mismatch. Enterprises pursue cloud environments to lower costs, gain agility and scale rapidly. However, these benefits are undermined by the separation of application and infrastructure processes coupled with cloud lock-in.

The age of containers: Not a sure cure

Container technology promises to speed infrastructure velocity even further. But Gartner VP and Distinguished Analyst for Servers and Storage, Thomas Bittman, identified five drawbacks to container technology during his presentation at Gartner's IT Operations Strategies and Solutions Summit 2015 in Orlando. According to Bittman:

1. Container technology might not be right for all tasks.

2. Dependencies can cause problems: dependencies placed on containers limit portability among servers.

3. Weaker isolation between containers means that flaws and attacks can reach down into the underlying OS and carry over into other containers.

4. The ability to spin up and duplicate containers at an unprecedented rate increases the risk of container sprawl.

5. The tools needed to monitor and manage containers are still lacking in the industry.

Because each infrastructure automation technology has its advantages and disadvantages, IT requires the flexibility to select the optimum execution environment for each of its applications individually.

Completing the journey: Application-defined automation

Even with infrastructure automation, there is still a gap between application deployment and infrastructure provisioning. To eliminate the gap, IT must integrate application deployment with infrastructure provisioning in a unified, holistic and automated process. Application-defined automation delivers a new approach that provides an end-to-end deployment/provisioning process in which the application defines and configures the infrastructure instead of having to fit into a preconfigured infrastructure, blueprint or template.

Application-defined automation permits users to choose the optimum execution environment for each application. Application-defined automation speeds the development process since application teams no longer have to go back and forth with infrastructure teams to align the infrastructure with the needs of the application. In addition, application-defined automation systems manage the deployed applications over their entire lifecycles, from deployment to retirement.
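One way to picture the inversion described above: instead of per-cloud scripts, the application carries a single declarative profile of its requirements, and an automation layer translates that profile into provisioning actions for whichever environment is chosen. The profile format and `provision` function below are hypothetical illustrations of the pattern, not any vendor's actual schema.

```python
# Hypothetical application profile: the application declares what it
# needs, independent of any particular cloud, blueprint or template.
app_profile = {
    "name": "billing-app",
    "tiers": [
        {"name": "web", "cpus": 2, "ram_gb": 4, "instances": 2},
        {"name": "db", "cpus": 4, "ram_gb": 16, "storage_gb": 100},
    ],
}

def provision(profile, environment):
    """Translate one cloud-agnostic profile into per-environment actions.

    The environment is chosen per application; the profile never changes,
    so moving clouds does not mean rewriting automation artifacts.
    """
    actions = []
    for tier in profile["tiers"]:
        for i in range(tier.get("instances", 1)):
            actions.append(
                f"{environment}: create {profile['name']}-{tier['name']}-{i} "
                f"({tier['cpus']} vCPU, {tier['ram_gb']} GB RAM)"
            )
    return actions

# The same profile deploys unchanged to different environments.
for line in provision(app_profile, "openstack"):
    print(line)
```

The design choice is that the application's requirements, not a preconfigured infrastructure blueprint, are the source of truth; the environment name is just a parameter.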

Traditional cloud strategies are infrastructure focused, that is, geared to accelerating infrastructure velocity. While infrastructure velocity is a key advantage of cloud, it’s only part of the value that cloud is capable of delivering. To tap the cloud’s full potential, IT needs to also accelerate application velocity. Application-defined automation makes that possible. With application-defined automation, IT can rev up both application velocity and infrastructure velocity and, as a result, enable enterprises to excel in the increasingly digital business environment.

Cope is responsible for business development, strategic alliances, brand management and integrated marketing communications at CliQr.
