Serverless computing’s future is now – and why you should care

IaaS costs of $5,000 to $6,000 per month might drop to $200 per month with serverless computing

Although vendor-written, this contributed piece does not advocate a position that is particular to the author’s employer and has been edited and approved by Network World editors.

Serverless computing, a disruptive application development paradigm that eliminates the need for programmers to code and plan for performance and scale, is rapidly gaining momentum for event-driven programming. Organizations should begin exploring this opportunity now to see if it will help them dramatically reduce costs, while ensuring applications run at peak performance.

To understand the benefits and limitations of serverless computing, it’s helpful to review the evolution of application development. Until recently, applications were constrained by data center hardware. Developers had to hardcode each application for the type and size of server it would run on. They would also have to code for how the application would distribute its workload across a server cluster.

For event-driven applications that wait for a call to action before responding, failing to accurately account for the hardware environment could lead to serious consequences – on the one hand, expensive but idle capacity; on the other hand, delayed application performance, uneven server workloads, or even outright server (and application) failures.

For example, consider an application that allows users to upload photographs for automatic redeye removal. The application servers sit waiting for uploads. If the number of servers dedicated to the application is over-specified and relatively few photos are uploaded, the servers spend most of their time idle, a significant waste of resources. However, if the number of servers is under-specified, users will experience significant delays during peak usage. The dilemma is that if you plan for adequate performance at peak usage, you’ll likely have significant idle time during non-peak usage. If you plan for “average” usage, you will undoubtedly deliver a poor customer experience during peak usage.

In addition to coding for sizes and types of servers, developers also had to plan ahead for backups and future growth to ensure uninterrupted performance while scaling. This forecasting effort required more time spent on wasteful, undifferentiated “heavy lifting” rather than on developing new features and functionality.

With the introduction of infrastructure as a service (IaaS), developers no longer need to plan around the hardware on which their applications will run. Instead, they can rent servers from vendors such as Amazon Web Services (AWS) and Microsoft Azure and deploy their applications to the cloud. IaaS has certainly reduced some of the heavy lifting for developers, freeing them to concentrate on application functionality. Tasked with developing a 5,000-server analytics project? It takes just a few lines of code to set it up on an IaaS. In the past, most developers could not have justified the time and effort to even consider such an application; with IaaS, it's easy and affordable.
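To make "a few lines of code" concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The region, machine image, and instance type are placeholders, and a real account would need service limits raised for a fleet this size; this is an illustration, not a production script.

```python
# A minimal sketch: provisioning a fleet of virtual servers on an IaaS
# with the AWS SDK for Python (boto3). All identifiers are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # placeholder region

# One call asks the IaaS for 5,000 identical virtual servers; the vendor,
# not the developer, finds the physical hardware to run them.
instances = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="m4.large",  # placeholder instance type
    MinCount=5000,
    MaxCount=5000,
)
print("Launched", len(instances), "instances")
```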

But even with IaaS, developers still need to worry about performance and scaling. To ensure performance, for example, some companies reserve excess IaaS capacity and are then saddled with unnecessary expenses for the life of the contract. Or, to reduce monthly costs, companies sign a year-long agreement and are then locked into those servers for its duration. And if the developer underestimates growth in application usage, the application may still choke during peak times.

Serverless computing eliminates all of these headaches by allowing developers to write code without worrying about servers at all. Instead, developers upload only their core functions to the vendor's platform, which automatically starts and stops virtual machines as necessary to meet demand. There is no need to code for server size, load balancing, or scalability, or to plan for backups and future growth.

Going back to the redeye removal application: with a standard IaaS, a developer codes the application and tests it on a local computer, rents a server from an IaaS vendor, makes sure the server has all of the recent patches (an ongoing requirement), and then starts planning strategically, fiscally, and contractually for scale. With serverless computing, the vendor publishes an API that allows the developer to upload the function, and the vendor handles all the server maintenance and scaling. The vendor then provides a URL for user access to the application. That’s it.
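As an illustration, the kind of function a developer might upload could look something like the following Python sketch, modeled on AWS Lambda's handler convention. The remove_redeye helper and the event format are hypothetical stand-ins; each platform defines its own event shapes.

```python
# A minimal sketch of a function uploaded to a serverless platform,
# following AWS Lambda's Python handler convention. remove_redeye is a
# hypothetical placeholder for the actual image-processing logic.
import base64

def remove_redeye(image_bytes):
    # Placeholder: the real algorithm would locate and correct red pupils.
    return image_bytes

def handler(event, context):
    # The vendor invokes this for each upload; no provisioning, patching,
    # or scaling code appears anywhere in the application.
    photo = base64.b64decode(event["body"])
    fixed = remove_redeye(photo)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "image/jpeg"},
        "body": base64.b64encode(fixed).decode("utf-8"),
        "isBase64Encoded": True,
    }
```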

The cost savings of the serverless computing model can be extraordinary. For example, we have an application currently running in a serverless computing environment for $200 per month. Running the same application on servers rented directly from an IaaS vendor would cost between $5,000 and $6,000 per month.

Given its simplicity and cost savings, serverless computing would seem to be the ideal development environment, but there are some important considerations. First, you need to place a lot of trust in the vendor. The benefit of serverless computing is that you don't have to sweat the details; the downside is that you don't know anything about the details. You must have confidence that the vendor can instantly scale as needed without degrading performance. For that reason, most organizations offering an enterprise-class, low-latency, high-availability service may still prefer to manage their own servers today, or at most reserve servers from an IaaS. For applications without such stringent requirements, serverless computing may already be a terrific, lower-cost alternative.

Another limitation of serverless computing is that if a company has a large application with many functions to stitch together, there is no compiler to create a single executable file. Instead, each function is uploaded separately and must be stitched together with web service calls, which is much less efficient than calling a function within the same executable. Testing and debugging are also more challenging, since you can't run the complete application in a local environment. Finally, IaaS vendors currently support only a limited number of programming languages, which could mean additional training for the existing team or the need to bring on new team members.
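The sketch below illustrates the difference, assuming a hypothetical resize function that has been deployed separately and assigned its own URL by the vendor. What would be an in-process call in a single executable becomes a network round trip.

```python
# A minimal sketch of stitching separately deployed functions together
# over HTTP. The endpoint URL is a placeholder for whatever address the
# vendor assigns to the deployed "resize" function.
import requests

RESIZE_URL = "https://example.invoke.example.com/prod/resize"  # placeholder

def process_photo(photo_bytes):
    # In a single executable this would be a direct call: resize(photo_bytes).
    # Across serverless functions it becomes a web service call instead.
    response = requests.post(
        RESIZE_URL,
        data=photo_bytes,
        headers={"Content-Type": "image/jpeg"},
    )
    response.raise_for_status()
    return response.content
```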

The adoption rate for serverless computing will likely accelerate dramatically as vendors overcome or eliminate these obstacles. Eventually, even the most mission-critical workloads will move to this environment. In the meantime, companies that want to explore their serverless computing options can take something of a hybrid approach using Docker containers, which wrap software in a complete file system that includes everything needed to run the application: the code, the runtime environment, system tools, and system libraries.

Ultimately, every company benefits from having developers spend less time worrying about infrastructure and more time implementing differentiated features and functionality. Whether it’s the start-up that goes from idea to product in a fraction of the time at a fraction of the cost, or an existing business that can drive down costs and increase agility, “serverless computing” will likely soon be just “computing,” and a programmer born today may never encounter the term “server” at all.

Avalara’s Compliance Cloud platform helps businesses of all sizes manage complicated and burdensome tax compliance obligations imposed by state, local, and other taxing authorities throughout the world. Each year, Avalara processes billions of indirect tax transactions, files hundreds of thousands of compliance documents and returns, and manages millions of exemption certificates and other compliance-related documents.
