Serverless computing is a model of cloud computing in which the service provider operates the server and dynamically manages the allocation of computing resources such as processing, memory and storage. The provider bases pricing on the resources applications actually consume, rather than the traditional pricing model that requires the user to purchase resources before using them. Serverless computing simplifies enterprise software development by concealing administrative tasks such as capacity planning, maintenance and scaling from the developer. Developers can also combine serverless computing with traditional code deployment styles.
Serverless computing is quickly becoming the preferred method of deploying code among software developers. This is largely because they can build code without concern for the underlying infrastructure, especially the servers. Developers looking for a serverless solution often choose from AWS Lambda, Google Cloud Functions and Microsoft Azure Functions. These solutions have a number of significant differences in both features and performance.
AWS Lambda
Amazon Web Services (AWS) Lambda is an event-driven platform that’s part of the large AWS family of services. It only executes code when a triggering event occurs and manages the resources the code requires. Amazon introduced Lambda in 2014 as an alternative to AWS Elastic Compute Cloud (EC2) for smaller applications that need to respond to events. Lambda supports many programming languages and scripts, including C#, Go, Java, Node.js, Python and Ruby. Lambda also supports custom runtimes as of 2018, allowing developers to run their functions in their preferred language.
Appropriate uses for Lambda include processing objects uploaded to Amazon S3, reacting to DynamoDB table updates and responding to sensor readings from internet of things (IoT) devices. Custom HTTP requests can trigger Lambda functions, so developers can also use Lambda to provision back end services. They can configure these requests in AWS API Gateway, which can authenticate and authorize them via AWS Cognito. The AWS Lambda team also added provisioned concurrency to Lambda in 2019, a feature that keeps functions initialized so they can respond to events in double-digit milliseconds.
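To make the event model concrete, here is a minimal sketch of a Lambda-style handler for an S3 "object created" notification. The event shape follows the documented S3 notification format; the bucket and object names are placeholders, and the handler is invoked locally the way Lambda would invoke it on an upload.

```python
import json

# Minimal sketch of a Lambda-style handler for an S3 "object created" event.
# Names like demo-bucket and photos/cat.jpg are placeholders.
def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}

# Invoke locally with a sample event to simulate an S3 upload trigger.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "demo-bucket"},
                "object": {"key": "photos/cat.jpg"}}}
    ]
}
print(handler(sample_event, None))
```

On the real platform, Lambda constructs this event and calls the handler automatically; no request loop or server code appears anywhere in the function.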
Lambda’s administration and scaling functions are fully automated, which is one of the most important features to consider when selecting a serverless platform. It performs its own monitoring and logging functions through CloudWatch, relieving developers of this responsibility. Lambda also deploys containers with the appropriate environment automatically when a triggering event occurs, so that each instance of a function has its own container. This platform is thus able to scale as needed to support incoming requests without the developer needing to configure anything.
The pay-as-you-go pricing model that Lambda uses means that developers no longer need to worry about the code’s underlying infrastructure, especially containers and servers. They only need to pay for the functions that executed and the time they ran. Amazon reports that this pricing model can save about 17 percent over EC2.
Developers can also build their own back end services in Lambda. They can easily create code in Lambda’s built-in editors, download archived code and work with it in Git repositories.
Lambda’s automatic scaling feature is one of this serverless solution’s biggest advantages. Functions can process incoming requests on a first-in-first-out (FIFO) basis, allowing applications to preserve the order of changes. Developers can also scale applications by creating events that execute their code.
Lambda smoothly integrates with other AWS services, allowing developers to build robust applications. This advantage is a major consideration for existing AWS developers.
Operational management is also easier in Lambda than in other serverless systems since it’s already part of AWS. These tasks are generally fast and user-friendly in Lambda.
Lambda was the first commercially successful serverless platform, but it’s still a relatively young solution. Its development has focused on ease of use and performance thus far, but known security vulnerabilities in AWS could become a significant disadvantage for Lambda in 2020.
Developers have no control over their environment in Lambda, which can cause problems for some applications. While this issue applies to all serverless platforms to some extent, it’s particularly true for Lambda.
Amazon is continually adding new features to Lambda, which means its documentation is typically behind its actual capabilities. This problem can make projects more difficult to complete, especially for developers new to Lambda.
Reasons to Choose AWS Lambda
Developers and users often choose Lambda for its pay-per-use pricing model, especially organizations accustomed to buying or leasing their own hardware. Factors that determine the cost of using Lambda include the number of function instances, their duration and the memory they require.
Developers must consider issues such as the availability, cost and maintenance of servers when working in traditional software environments. The opportunity to leave these server-related issues behind is another incentive for selecting Lambda.
Lambda’s support for many languages is one of the most common reasons for choosing it over other serverless platforms. The most recent updates in this area include support for Python 3.8 and Ruby 2.7.
Lambda is the best choice for building serverless back end services due to the large number of AWS services available. Back end services include apps for IoT devices, mobile devices and websites.
CloudWatch provides built-in, scalable metrics for Lambda developers. This service is an effective mechanism for detecting and resolving performance problems.
Successful commercial examples of Lambda include Netflix, currently the most popular app for watching movies and TV shows. Netflix uses Lambda to achieve the performance it requires, including processing speed, number of customers and storage space for high-quality content. Financial Engines is a website built with Lambda that offers financial tips and is able to process up to 60,000 requests per minute. Lambda has also increased the processing and updating speed of The Seattle Times, a local news website.
Google Cloud Functions
Google Cloud Platform (GCP) is a suite of cloud-computing services that runs on Google’s internal infrastructure, which Google also uses for end-user products like Gmail, Google Search and YouTube. GCP also includes management tools and cloud services such as application programming interfaces (APIs) for machine learning (ML), enterprise versions of the Android and Chrome mobile operating systems (OS) and enterprise mapping services.
App Engine was Google’s first cloud-computing service; it develops and hosts web applications in Google’s data centers. This Platform-as-a-Service (PaaS) primarily runs event-driven code written in Go, Node.js or Python. App Engine launched in 2008, although it wasn’t generally available until 2011.
Google Cloud Functions (GCF) launched in 2017, and Google currently has over 90 Google Cloud products. GCF allows developers to write and execute their own code at any time and from any location. Google Cloud enables event triggering for GCF through HTTP requests, Cloud Pub/Sub topics and changes in Cloud Storage buckets.
GCF’s key features include its scalability and reduction of infrastructure complexity. In particular, deploying functions in GCF is less complex than it is for Lambda. Developers can perform this task in a single step in GCF, making the deployment and management of code much simpler. The elimination of manual deployment also streamlines application development, thus reducing its costs. GCF also includes distributed monitoring, logging and tracking as an integral part of the system.
All serverless platforms are scalable, but GCF has the edge over others in this area. Its scaling of containers is automatic, fast and flexible. It also manages software dependencies more effectively by automatically installing them for users. GCF users don’t have to go through the local “vendoring” process that Lambda requires, making the problems that dependencies can cause less likely.
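The points above can be illustrated with a sketch of a GCF-style HTTP function. In GCF, the handler receives a Flask request object; here a tiny stand-in class keeps the example self-contained and runnable, and the function name is an arbitrary choice.

```python
# Sketch of a Google Cloud Functions HTTP handler. In GCF the `request`
# argument is a Flask request object; this stand-in mimics just the part
# we use (query parameters) so the example runs anywhere.
class FakeRequest:
    def __init__(self, args):
        self.args = args

def hello_http(request):
    name = request.args.get("name", "world")
    return f"Hello, {name}!"

# Dependencies would be listed in requirements.txt and installed by GCF
# automatically at deploy time -- no local vendoring step as with Lambda.
print(hello_http(FakeRequest({"name": "GCF"})))  # prints "Hello, GCF!"
```

Deploying this on GCF would be the single step the text describes: one `gcloud functions deploy` command uploads the source and wires up the HTTP trigger.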
Reasons to Choose GCF
GCF is most useful for improving serverless applications in terms of processing time and simplifying the management of technology. Real-time processing of data, files and streams is also a strong reason to select GCF. Additional uses of GCF include the analysis of images and videos. GCF is newer than other serverless solutions, but Google has the resources to catch up to the other providers in other areas should it choose to do so.
Large, successful implementations of GCF include HomeAway, an app that lets you book vacation rentals in a few clicks. It also offers image analysis with rapid response times. Lucille Games, a game development studio, built its website with GCF; the site handles traffic quickly and provides high-quality image processing.
Microsoft Azure Functions
Microsoft Azure is a cloud-computing service designed to build, test, deploy and manage applications through Microsoft’s data centers. It provides these services as IaaS, SaaS and PaaS while supporting a variety of programming languages, frameworks and tools. Azure’s components include those from both Microsoft and third parties. Azure was initially released as Windows Azure in 2008, but was renamed Microsoft Azure in 2014.
Azure Functions offers more options for deploying functions than either GCF or Lambda. For example, Azure Functions can connect to other development components like GitHub, Dropbox, Kudu Console and Visual Studio. Azure Functions is also better than Lambda in terms of continually deploying code and integrating it into existing infrastructure, according to Moesif. Additional benefits of Azure Functions include its debugging capability, which is especially advantageous during a project’s early stages. Many users also report a better experience when working with Azure Functions.
Azure Functions’ settings rely more heavily on tooltips than those of other platforms to ensure users understand what they do. Azure Functions users are also more likely to require prompt technical support since this service has only been available since 2017. While the Azure Functions runtime is open source, it still has comparatively few tools to support this development model.
Reasons to Use Azure Functions
In comparison to GCF, Azure Functions provides greater support for serverless APIs via Microsoft’s .NET framework and Node.js. Azure Functions also has built-in artificial intelligence (AI), which can provide better automated service. Furthermore, this platform has the best ML system, which is particularly useful for businesses just getting started.
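For comparison with the GCF example above, here is a sketch of an HTTP-triggered function in Azure’s Python programming model. The real signature is `main(req: func.HttpRequest) -> func.HttpResponse` from the `azure-functions` package; tiny stand-in classes keep this example runnable without that dependency.

```python
# Sketch of an Azure Functions HTTP trigger in the Python programming model.
# HttpRequest and HttpResponse are simplified stand-ins for the classes the
# azure-functions package provides.
class HttpRequest:
    def __init__(self, params):
        self.params = params  # query-string parameters

class HttpResponse:
    def __init__(self, body, status_code=200):
        self.body = body
        self.status_code = status_code

def main(req):
    name = req.params.get("name", "Azure")
    return HttpResponse(f"Hello, {name}!")

# Simulate an HTTP call the way the Functions host would route it.
resp = main(HttpRequest({"name": "Functions"}))
print(resp.status_code, resp.body)
```

The explicit request/response objects are the main shape difference from GCF’s Flask-style handler; the trigger and bindings themselves live in a separate `function.json` (or decorator) configuration rather than in the code.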
FUJIFILM is a photography website built on Azure that handles a high content volume. Relativity is a software solution that uses Azure to manage its environments. It has a good design with strong image and video processing capability.
The process of choosing a serverless platform should include a direct comparison of their features and performance.
The following table summarizes the most significant differences between AWS Lambda, GCF and Azure Functions, including costs and features:
| Functionality | AWS Lambda | Google Cloud Functions | Microsoft Azure Functions |
|---|---|---|---|
| Costs | Free for the first million requests, $0.20/million requests thereafter plus $0.00001667/GB-sec | Free for the first two million requests, $0.40/million invocations thereafter plus $0.0000165/GB-sec | Free for the first million requests, $0.20/million executions thereafter plus $0.000016/GB-sec |
| Number of functions | Unlimited | 1,000 functions/project | Unlimited |
| Executions | 1,000 parallel executions/account | 1,000 parallel executions | Unlimited |
| Maximum execution time | 900 seconds | 540 seconds | 600 seconds |
| Supported languages | C#, Go, Java, Node.js, Python, Ruby | Go, Node.js, Python | C#, F#, Java, Node.js, Python |
| Triggering events | CloudWatch, DynamoDB, HTTP, Kinesis, S3, SES, SNS, SQS | Cloud Pub/Sub, Cloud Storage, Firestore/Firebase, HTTP | Cosmos DB, Event Grid/Hub, HTTP, IoT Hub, Service Bus, Storage and others |
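The cost rows above can be turned into a quick back-of-the-envelope comparison. The sketch below applies the per-request and per-GB-second rates from the table to a single hypothetical workload; free-tier allowances are deliberately ignored to keep the arithmetic simple.

```python
# Rates taken from the comparison table above:
# ($ per million requests, $ per GB-second). Free tiers are ignored here.
RATES = {
    "AWS Lambda": (0.20, 0.00001667),
    "Google Cloud Functions": (0.40, 0.0000165),
    "Azure Functions": (0.20, 0.000016),
}

def monthly_cost(platform, invocations, duration_ms, memory_mb):
    per_million, per_gb_s = RATES[platform]
    request_cost = invocations / 1_000_000 * per_million
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    return round(request_cost + gb_seconds * per_gb_s, 2)

# Hypothetical workload: 5M invocations/month, 200 ms each, 256 MB of memory.
for platform in RATES:
    print(platform, monthly_cost(platform, 5_000_000, 200, 256))
```

At these list rates the ranking depends entirely on the workload mix; a request-heavy workload penalizes Google’s higher per-invocation price, while a long-running, memory-hungry one narrows the gap.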
A 2018 study presented at USENIX analyzed the performance of AWS Lambda, Azure Functions and Google Cloud Functions across a variety of parameters. Coldstart latencies and instance lifetimes are some of the most important performance criteria to consider when selecting a serverless solution.
Coldstarting is the process of launching a new instance of a function. This may involve various tasks such as launching a new container, configuring a runtime environment and deploying a function, depending on the platform. The efficiency with which platforms perform these tasks varies greatly, resulting in a large difference in coldstart performance. Coldstarts are a critical performance factor because they directly affect application responsiveness, which in turn affects user experience.
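The cold-versus-warm distinction can be illustrated with a toy model: the first invocation pays a one-time "initialization" cost, while later invocations reuse the warmed instance. The sleep durations below are arbitrary placeholders, not measurements; a real study would time HTTPS requests against deployed functions.

```python
import time

# Toy illustration of cold vs. warm starts. The sleeps stand in for container
# launch + runtime setup (cold) and for the handler body itself (every call).
class FunctionInstance:
    def __init__(self):
        self.warm = False

    def invoke(self):
        start = time.perf_counter()
        if not self.warm:
            time.sleep(0.05)   # one-time startup cost paid on the cold start
            self.warm = True
        time.sleep(0.005)      # work the handler does on every invocation
        return (time.perf_counter() - start) * 1000  # latency in ms

instance = FunctionInstance()
latencies = [instance.invoke() for _ in range(3)]
print([round(ms) for ms in latencies])  # first call dominates: the cold start
```

The same pattern explains why platforms recycle instances between events: keeping an instance warm trades idle resource cost against the latency spike a user sees on the next cold start.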
The following chart compares the performance of AWS, Google and Azure for various configurations:
Fig. 1: Coldstart latencies for AWS, Google and Azure
The AWS configurations in the above chart used 128 MB and 1.5 GB of memory and showed a much smaller improvement in coldstart performance than the two Google configurations of 128 MB and 2 GB. For example, the median latencies for the two AWS tests were 265 ms and 250 ms, while the median latency for Google dropped from 493 ms to 111 ms. AWS and Google both allocate processing capacity in proportion to memory, but adding memory reduced latency far more for Google than it did for AWS.
Azure always assigns 1.5 GB of memory to each instance, so this comparison wasn’t possible for Azure. However, Azure’s median coldstart latency was 3,640 ms, much higher than either AWS’s or Google’s. This difference is due to a range of design and engineering issues that Microsoft engineers are already aware of and working to improve.
The USENIX study also measured latency variations over time for functions with 128 MB of memory. The AWS functions were written in Python 2.7, while the Google and Azure functions were written in Node.js v6. The tests ran for one week, with measurements taken every 10 seconds. The charts below show the results:
Figure 2: Coldstart latency variations for AWS, Google and Azure
USENIX took the measurements in the above charts in late 2017, which show highly stable latencies for AWS. Google’s latencies were also stable, except for a few spikes during which latency times nearly quadrupled. In comparison, Azure’s latencies were highly variable, ranging from 1.5 to 16 seconds. Furthermore, even this lowest figure is significantly more than the latencies for either AWS or Google.
USENIX repeated these measurements in May 2018. The results were similar for AWS, but very different for Google and Azure. For example, Google’s coldstart latencies were roughly four times longer in the second test, likely due to the company’s ongoing infrastructure upgrade that began in February 2018. In contrast, Azure’s latencies dropped by a factor of 15, although the study doesn’t offer an explanation for such a dramatic improvement. These differences illustrate the importance of continuous measurement for characterizing the long-term performance of serverless platforms.
An instance’s lifetime is the longest period of time that it remains active. Long lifetimes are preferable for users because they minimize the impact of coldstarts on overall performance. However, a serverless platform may terminate a function instance even when it’s still in use.
The USENIX tests included functions with the same memory sizes for each platform as those in the coldstart tests shown in figure 1. The tests also included request frequencies of once every 5 seconds and once every 60 seconds. The tests ran for one week or until they had collected 50 lifetimes, whichever was longer. The diagrams below show the cumulative distribution functions (CDFs) for instance lifetimes on all three platforms:
Fig. 3: CDFs for instance lifetime of AWS, Google and Azure
The charts above show that Azure instances have much longer lifetimes than those of either AWS or Google, especially for high request frequencies. The distribution of AWS instance lifetimes is fairly linear from 0 to 8.3 hours, with a median of 6.2 hours. Request frequency is the most significant variable for AWS, as greater request frequencies tend to shorten instance lifetimes. However, memory has a greater effect on instance lifetimes for Google, with more memory producing longer lifetimes. This tendency results in a nearly linear distribution of instance lifetimes for configurations with large memory and low request frequency.
Serverless computing generally allows users to develop software more quickly than server-based environments do. However, specific features and performance vary considerably among vendors such as AWS, Google and Microsoft. Developers almost always have a preferred programming language, so language support is typically a critical feature when selecting a serverless platform. Users also need to weigh the lower coldstart latencies of AWS and Google against the longer instance lifetimes of Azure.