The serverless model changes how you deploy and manage cloud apps, but it also changes how you craft a cloud cost management strategy.
With serverless computing, you pay each time you execute an application or application component, for the period of its execution. In nearly all cases, the other costs associated with cloud computing, such as charges for traffic in and out of the cloud and for cloud-native web services, like database features, also still apply.
It’s easy to see how this can be beneficial. In the traditional public cloud model, applications that sit and wait for user activity incur a CPU or virtual machine cost, even if they do nothing for most of the time. But it is difficult to set boundaries in the serverless model. If you’re charged for every fraction of a second an application runs, could a massive load generate massive costs? And how should you budget for that?
Build a serverless cloud cost management strategy
Efficient cloud cost management for serverless computing demands a systematic, top-down approach. First, carefully review your cloud provider’s pricing plan. Serverless charges cover only the compute component of cloud pricing; other cloud costs will likely remain. Nearly all serverless providers base their prices on the number of application activations — or events — and the time and resources needed per event. Get a price for each.
Second, plan out your serverless application. Most serverless computing models use a form of functional computing or microservices, which requires new code or adaptations for many applications. Know how many different components you have in your serverless application — as each is priced separately — and how much memory each requires. Some of that new application code will use other cloud provider web services as well, such as database or internet of things services. Be sure to know which features your application will use, because providers charge for all of them.
Estimate serverless costs
CPU usage depends on two factors: how often an application component runs — providers call these events, requests or activations — and how long it runs with each activation.
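Those two factors combine into a simple formula: activations times a per-request rate, plus activations times duration times memory times a per-GB-second rate. A minimal sketch, using illustrative rates rather than any specific provider's published prices:

```python
# Two-factor serverless cost formula:
#   cost = activations * (price per request)
#        + activations * duration * memory * (price per GB-second)
# The rates below are illustrative assumptions, not actual provider prices.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # $ per activation (assumed)
PRICE_PER_GB_SECOND = 0.0000166667     # $ per GB-second (assumed)

def monthly_cost(activations: int, duration_s: float, memory_gb: float) -> float:
    """Estimate the monthly cost of one serverless function."""
    request_cost = activations * PRICE_PER_REQUEST
    compute_cost = activations * duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Example: 5 million activations a month, 200 ms each, 512 MB of memory
print(round(monthly_cost(5_000_000, 0.2, 0.5), 2))
```

Note that duration and memory multiply together: halving a function's memory footprint, or shaving milliseconds off its run time, reduces the compute portion of the bill proportionally.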
In general, serverless reduces costs for applications with limited workloads and increases costs where workloads are heavy. It’s most cost-effective to use serverless when workloads are highly variable and need to run without significant delay. To gauge costs, take measurements from current applications or estimate business activity. This data will tell you how often serverless components will run and help you form a cloud cost management strategy.
It’s more difficult to estimate how long it takes an application to process workloads, unless you know how your serverless applications and their components work. Conduct tests to measure the execution time of these components. These early tests will also indicate the resources, such as memory, that each component needs, which should help estimate pricing. A large-scale pilot test is critical for serverless computing.
You can also use a serverless cost calculator for basic cost estimates, but many of these tools are still in their early phases of development or in beta. Some serverless cloud providers, including Amazon Web Services (AWS), offer other tools to help gauge serverless computing costs. Build a simple spreadsheet to calculate pricing, but factor in any nonserverless web services that your application components use.
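Such a spreadsheet can be sketched in a few lines of code: one row per component, priced separately, plus flat line items for the nonserverless web services the application uses. The component names, rates, and traffic figures below are hypothetical:

```python
# Spreadsheet-style cost model: each serverless function is priced
# separately, plus flat monthly charges for nonserverless web services.
# All names, rates, and figures are assumptions for illustration.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # $ per activation (assumed)
PRICE_PER_GB_SECOND = 0.0000166667     # $ per GB-second (assumed)

# (monthly activations, avg duration in seconds, memory in GB) per component
components = {
    "resize-image":  (2_000_000, 0.30, 1.0),
    "api-handler":   (8_000_000, 0.05, 0.25),
    "nightly-batch": (30, 120.0, 2.0),
}

# Flat monthly estimates for other cloud services the app depends on
other_services = {"database": 45.00, "message-queue": 12.50}

def component_cost(activations, duration_s, memory_gb):
    """Monthly cost of one serverless component."""
    return (activations * PRICE_PER_REQUEST
            + activations * duration_s * memory_gb * PRICE_PER_GB_SECOND)

serverless_total = sum(component_cost(*row) for row in components.values())
total = serverless_total + sum(other_services.values())
print(f"serverless: ${serverless_total:.2f}, total: ${total:.2f}")
```

Laying the model out this way makes it obvious when nonserverless services, not the functions themselves, dominate the bill.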
Estimates are fine, and essential to set a baseline for serverless computing costs, but they don’t account for cost increases due to either unexpected loads or improper application design. Remember that providers charge you for every activation of a serverless function, every microsecond it runs and the memory resources it consumes. Costs can triple if there are too many functions or you neglect to calculate peak loads.
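Because cost scales linearly with activations, a traffic spike scales the compute bill by the same factor. A quick check of that sensitivity, with assumed figures:

```python
# Serverless compute cost scales linearly with activations, so a 3x
# traffic spike triples the compute bill. Figures are illustrative.

PRICE_PER_GB_SECOND = 0.0000166667  # $ per GB-second (assumed)

def compute_cost(activations, duration_s, memory_gb):
    """Compute-only portion of the monthly bill for one function."""
    return activations * duration_s * memory_gb * PRICE_PER_GB_SECOND

baseline = compute_cost(1_000_000, 0.2, 0.5)   # expected monthly load
peak = compute_cost(3_000_000, 0.2, 0.5)       # 3x traffic spike
print(f"baseline ${baseline:.2f} -> peak ${peak:.2f}")
```

Running this scenario for the busiest month you can plausibly expect, rather than the average month, is what turns an estimate into a budget.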
To prevent these issues, craft a solid cloud cost management strategy and monitor carefully. Watch your costs as your load increases, and make sure that curve won’t lead to any billing surprises.