Estimating consumption-based costs

This article shows you how to estimate plan costs for the Flex Consumption and Consumption hosting plans.

Azure Functions currently offers these different hosting options for your function apps, with each option having its own hosting plan pricing model:

| Plan | Description |
| --- | --- |
| Flex Consumption plan | You pay for execution time on the instances on which your functions are running, plus any always ready instances. Instances are dynamically added and removed based on the number of incoming events. This is the recommended dynamic scale plan, which also supports virtual network integration. |
| Premium | Provides you with the same features and scaling mechanism as the Consumption plan, but with enhanced performance and virtual network integration. Cost is based on your chosen pricing tier. To learn more, see Azure Functions Premium plan. |
| Dedicated (App Service)<br/>(basic tier or higher) | When you need to run in dedicated VMs or in isolation, use custom images, or want to use your excess App Service plan capacity. Uses regular App Service plan billing. Cost is based on your chosen pricing tier. |
| Consumption | You're only charged for the time that your function app runs. This plan includes a free grant on a per subscription basis. |

You should always choose the option that best supports the feature, performance, and cost requirements for your function executions. To learn more, see Azure Functions scale and hosting.

This article focuses on the Flex Consumption and Consumption plans because, in these plans, billing depends on the periods during which executions are active on each instance.

Durable Functions can also run in both of these plans. To learn more about the cost considerations when using Durable Functions, see Durable Functions billing.

Consumption-based costs

The way that consumption-based costs are calculated, including free grants, depends on the specific plan. For the most current cost and grant information, see the Azure Functions pricing page.

There are two modes by which your costs are determined when running your apps in the Flex Consumption plan. Each mode is determined on a per-instance basis.

| Billing mode | Description |
| --- | --- |
| On demand | When running in on demand mode, you're billed only for the amount of time your function code is executing on your available instances. In on demand mode, no minimum instance count is required. You're billed for:<br/>• The total amount of memory provisioned while each on demand instance is actively executing functions (in GB-seconds), minus a free grant of GB-s per month.<br/>• The total number of executions, minus a free grant (number) of executions per month. |
| Always ready | You can configure one or more instances, assigned to specific trigger types (HTTP/Durable/Blob) and individual functions, that are always available to handle requests. When you have any always ready instances enabled, you're billed for:<br/>• The total amount of memory provisioned across all of your always ready instances, known as the baseline (in GB-seconds).<br/>• The total amount of memory provisioned during the time each always ready instance is actively executing functions (in GB-seconds).<br/>• The total number of executions.<br/><br/>In always ready billing, there are no free grants. |

This diagram represents how on-demand costs are determined in this plan:

Graph of Flex Consumption plan on-demand costs based on both load (instance count) and time.

In addition to execution time, when using one or more always ready instances, you're also billed at a lower, baseline rate for the number of always ready instances you maintain. Execution time for always ready instances might be cheaper than execution time on instances with on demand execution.
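
To make these two billing modes concrete, here's a minimal sketch of the cost components as simple functions. The rate constants are illustrative placeholders (the on demand values match the assumed rates used in the examples later in this article; the always ready rates are invented for the sketch), and the free grant handling is simplified:

```python
# Sketch of the two Flex Consumption billing modes described above.
# All rates are illustrative placeholders; always check the Azure Functions pricing page.

GB_SECOND_RATE_ON_DEMAND = 0.000016           # USD per GB-s (matches the examples later in this article)
EXECUTION_RATE = 0.20 / 1_000_000             # USD per execution (assumed $0.20 per million)
GB_SECOND_RATE_BASELINE = 0.000004            # USD per GB-s for the always ready baseline (placeholder)
GB_SECOND_RATE_ALWAYS_READY_ACTIVE = 0.00001  # USD per GB-s while always ready instances execute (placeholder)


def on_demand_cost(active_gb_seconds: float, executions: int,
                   free_gb_seconds: float = 0.0, free_executions: int = 0) -> float:
    """On demand mode: active GB-seconds plus executions, minus monthly free grants."""
    billable_gb_s = max(active_gb_seconds - free_gb_seconds, 0.0)
    billable_execs = max(executions - free_executions, 0)
    return billable_gb_s * GB_SECOND_RATE_ON_DEMAND + billable_execs * EXECUTION_RATE


def always_ready_cost(baseline_gb_seconds: float, active_gb_seconds: float, executions: int) -> float:
    """Always ready mode: baseline GB-seconds plus active GB-seconds plus executions; no free grants."""
    return (baseline_gb_seconds * GB_SECOND_RATE_BASELINE
            + active_gb_seconds * GB_SECOND_RATE_ALWAYS_READY_ACTIVE
            + executions * EXECUTION_RATE)
```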

Important

In this article, on-demand pricing is used to help understand example calculations. Always check the current costs in the Azure Functions pricing page when estimating costs you might incur while running your functions in the Flex Consumption plan.

Consider a function app that is composed only of HTTP triggers, with these basic facts:

  • HTTP triggers handle 40 constant requests per second.
  • HTTP triggers handle 10 concurrent requests.
  • The instance memory size setting is 2048 MB.
  • There are no always ready instances configured, which means the app can scale to zero.

In a situation like this, the pricing depends more on the kind of work being done during code execution. Let's look at two workload scenarios (a calculation sketch follows this list):

  • CPU-bound workload: In a CPU-bound workload, there's no advantage to processing multiple requests in parallel in the same instance. This means that you're better off distributing each request to its own instance so requests complete as quickly as possible without contention. In this scenario, you should set the HTTP trigger concurrency to 1. With 10 concurrent requests, the app scales to a steady state of roughly 10 instances, and each instance is continuously active processing one request at a time.

    Because the size of each instance is ~2 GB, the consumption for a single continuously active instance is 2 GB * 3600 s = 7200 GB-s, which at the assumed on-demand execution rate (without any free grants applied) is $0.1152 USD per hour per instance. Because the CPU-bound app is scaled to 10 instances, the total hourly rate for execution time is $1.152 USD.

    Similarly, for the on-demand per-execution charge (without any free grants), 40 requests per second equals 40 * 3600 = 144,000, or 0.144 million executions per hour. The total (grant-free) hourly cost of executions is then 0.144 million * $0.20 per million executions, which is $0.0288 per hour.

    In this scenario, the total hourly cost of running on-demand on 10 instances is $1.152 + $0.0288 = $1.1808 USD.

  • IO-bound workload: In an IO-bound workload, most of the application time is spent waiting on incoming requests, which might be limited by network throughput or other upstream factors. Because the code spends most of its time waiting rather than consuming CPU, it can process multiple operations concurrently without negative impacts. In this scenario, assume you can process all 10 concurrent requests on the same instance.

    Because consumption charges are based only on the memory of each active instance, the consumption charge calculation is simply 2 GB * 3600 s = 7200 GB-s, which at the assumed on-demand execution rate (without any free grants applied) is $0.1152 USD per hour for the single instance.

    As in the CPU-bound scenario, for the on-demand per-execution charge (without any free grants), 40 requests per second equals 40 * 3600 = 144,000, or 0.144 million executions per hour. In this case, the total (grant-free) hourly cost of executions is 0.144 million * $0.20 per million executions, which is $0.0288 per hour.

    In this scenario, the total hourly cost of running on-demand on a single instance is $0.1152 + $0.0288 = $0.144 USD.
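
Here's a minimal sketch of the arithmetic used in both scenarios, with the same assumed rates ($0.000016 per GB-second of active memory and $0.20 per million executions) and no free grants applied:

```python
# Hourly on-demand cost estimate for the HTTP example above.
# Rates are the assumed (grant-free) values used in this article, not official prices.
INSTANCE_MEMORY_GB = 2048 / 1024       # 2048 MB instance memory size
GB_SECOND_RATE = 0.000016              # USD per GB-s (assumed)
PRICE_PER_MILLION_EXECUTIONS = 0.20    # USD (assumed)
REQUESTS_PER_SECOND = 40


def hourly_cost(active_instances: int) -> float:
    gb_seconds = INSTANCE_MEMORY_GB * 3600 * active_instances      # memory on continuously active instances
    executions_millions = REQUESTS_PER_SECOND * 3600 / 1_000_000   # 0.144 million executions per hour
    return gb_seconds * GB_SECOND_RATE + executions_millions * PRICE_PER_MILLION_EXECUTIONS


print(f"CPU-bound (10 instances): ${hourly_cost(10):.4f}/hour")  # $1.1808
print(f"IO-bound (1 instance):    ${hourly_cost(1):.4f}/hour")   # $0.1440
```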

When estimating the overall cost of running your functions in any plan, remember that the Functions runtime uses several other Azure services, which are each billed separately. When you estimate pricing for function apps, any triggers and bindings you have that integrate with other Azure services require you to create and pay for those other services.

For functions running in a Consumption plan, the total cost is the execution cost of your functions, plus the cost of bandwidth and other services.

When estimating the overall costs of your function app and related services, use the Azure pricing calculator.

| Related cost | Description |
| --- | --- |
| Storage account | Each function app requires that you have an associated General Purpose Azure Storage account, which is billed separately. This account is used internally by the Functions runtime, but you can also use it for Storage triggers and bindings. If you don't have a storage account, one is created for you when the function app is created. To learn more, see Storage account requirements. |
| Application Insights | Functions relies on Application Insights to provide a high-performance monitoring experience for your function apps. While not required, you should enable Application Insights integration. A free grant of telemetry data is included every month. To learn more, see the Azure Monitor pricing page. |
| Network bandwidth | You can incur costs for data transfer depending on the direction and scenario of the data movement. To learn more, see Bandwidth pricing details. |

Behaviors affecting execution time

The following behaviors of your functions can affect the execution time:

  • Triggers and bindings: The time taken to read input from and write output to your function bindings is counted as execution time. For example, when your function uses an output binding to write a message to an Azure storage queue, your execution time includes the time taken to write the message to the queue, which is included in the calculation of the function cost.

  • Asynchronous execution: The time that your function waits for the results of an async request (await in C#) is counted as execution time. The GB-second calculation is based on the start and end time of the function and the memory usage over that period. What is happening over that time in terms of CPU activity isn't factored into the calculation. You might be able to reduce costs during asynchronous operations by using Durable Functions. You're not billed for time spent at awaits in orchestrator functions.
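
For example, here's a minimal HTTP-triggered sketch using the Python v2 programming model (the route name and the simulated wait are illustrative assumptions, since this article isn't language specific). The second spent at the await isn't doing CPU work, but it still falls between the function's start and end times, so it's included in the GB-second calculation:

```python
import asyncio

import azure.functions as func

app = func.FunctionApp()


@app.route(route="slow-upstream")  # illustrative route name
async def slow_upstream(req: func.HttpRequest) -> func.HttpResponse:
    # Simulate waiting on an upstream dependency (an HTTP call, a queue write, and so on).
    # No CPU work happens during this second, but it's still part of the
    # function's execution time and is billed accordingly.
    await asyncio.sleep(1)
    return func.HttpResponse("done")
```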

Function app-level metrics

To better understand the costs of your functions, you can use Azure Monitor to view cost-related metrics currently being generated by your function apps.

Use Azure Monitor metrics explorer to view cost-related data for your Consumption plan function apps in a graphical format.

  1. In the Azure portal, navigate to your function app.

  2. In the left panel, scroll down to Monitoring and choose Metrics.

  3. From Metric, choose Function Execution Count and Sum for Aggregation. This adds the sum of the execution counts during the chosen period to the chart.

    Define a functions app metric to add to the chart

  4. Select Add metric and repeat the previous step to add Function Execution Units to the chart.

The resulting chart contains the totals for both execution metrics in the chosen time range, which in this case is two hours.

Graph of function execution counts and execution units

As the number of execution units is so much greater than the execution count, the chart just shows execution units.

This chart shows a total of 1.11 billion Function Execution Units consumed in a two-hour period, measured in MB-milliseconds. To convert to GB-seconds, divide by 1,024,000 (1,024 MB per GB * 1,000 milliseconds per second). In this example, the function app consumed 1,110,000,000 / 1,024,000 ≈ 1,083.98 GB-seconds. You can take this value and multiply it by the current price of execution time on the Functions pricing page, which gives you the cost of these two hours, assuming you've already used any free grants of execution time.
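
The same conversion as a quick calculation, with the price left as a placeholder:

```python
# Convert Function Execution Units (reported in MB-milliseconds) to GB-seconds,
# then estimate the cost at an assumed price per GB-second.
execution_units_mb_ms = 1_110_000_000               # value read from the chart
gb_seconds = execution_units_mb_ms / (1024 * 1000)  # 1,024 MB per GB * 1,000 ms per second
price_per_gb_second = 0.000016                      # placeholder; check the pricing page
print(f"{gb_seconds:,.2f} GB-s -> ${gb_seconds * price_per_gb_second:.4f}")
```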

Function-level metrics

Function execution units are a combination of execution time and your memory usage, which makes them a difficult metric for understanding memory usage. Memory data isn't a metric currently available through Azure Monitor. However, if you want to optimize the memory usage of your app, you can use the performance counter data collected by Application Insights.

If you haven't already done so, enable Application Insights in your function app. With this integration enabled, you can query this telemetry data in the portal.

You can use either Azure Monitor metrics explorer in the Azure portal or REST APIs to get Monitor Metrics data.
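
For example, here's a hedged sketch that uses the azure-monitor-query library for Python to pull the same two metrics programmatically. The resource ID is a placeholder, and the metric names (FunctionExecutionCount and FunctionExecutionUnits) are assumed to be the ones your function app exposes through Azure Monitor:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID for your function app.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Web/sites/<function-app-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query the last two hours of execution metrics, summed per interval.
response = client.query_resource(
    resource_id,
    metric_names=["FunctionExecutionCount", "FunctionExecutionUnits"],
    timespan=timedelta(hours=2),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    total = sum(point.total or 0 for ts in metric.timeseries for point in ts.data)
    print(f"{metric.name}: {total:,.0f}")
```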

Determine memory usage

Under Monitoring, select Logs (Analytics), copy the following telemetry query, paste it into the query window, and select Run. This query returns the total memory usage at each sampled time.

```kusto
performanceCounters
| where name == "Private Bytes"
| project timestamp, name, value
```

The results look like the following example:

| timestamp [UTC] | name | value |
| --- | --- | --- |
| 9/12/2019, 1:05:14.947 AM | Private Bytes | 209,932,288 |
| 9/12/2019, 1:06:14.994 AM | Private Bytes | 212,189,184 |
| 9/12/2019, 1:06:30.010 AM | Private Bytes | 231,714,816 |
| 9/12/2019, 1:07:15.040 AM | Private Bytes | 210,591,744 |
| 9/12/2019, 1:12:16.285 AM | Private Bytes | 216,285,184 |
| 9/12/2019, 1:12:31.376 AM | Private Bytes | 235,806,720 |
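
If you prefer to run this kind of query outside the portal, here's a hedged sketch using the azure-monitor-query LogsQueryClient. It assumes a workspace-based Application Insights resource, where the classic performanceCounters table surfaces as AppPerformanceCounters, and the workspace ID is a placeholder:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Same Private Bytes query as above, expressed against the workspace table
# (AppPerformanceCounters with Name/Value/TimeGenerated columns), scoped to two hours.
query = """
AppPerformanceCounters
| where Name == "Private Bytes"
| project TimeGenerated, Name, Value
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(hours=2))

for table in response.tables:
    for row in table.rows:
        print(list(row))
```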

Determine duration

Azure Monitor tracks metrics at the resource level, which for Functions is the function app. Application Insights integration emits metrics on a per-function basis. Here's an example analytics query to get the average duration of a function:

```kusto
customMetrics
| where name contains "Duration"
| extend averageDuration = valueSum / valueCount
| summarize averageDurationMilliseconds=avg(averageDuration) by name
```

The results look like the following example:

| name | averageDurationMilliseconds |
| --- | --- |
| QueueTrigger AvgDurationMs | 16.087 |
| QueueTrigger MaxDurationMs | 90.249 |
| QueueTrigger MinDurationMs | 8.522 |
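
As a rough follow-up, you can combine the two query results above to approximate the GB-seconds attributable to a single function. This is only a back-of-the-envelope estimate, not the platform's actual metering logic (which applies its own rounding and minimums), and the execution count is an assumed value:

```python
# Rough per-function GB-second estimate from the memory and duration queries above.
# This approximates billing only; actual metering rounds and enforces minimums.
private_bytes = 216_285_184      # a Private Bytes sample from the memory query
avg_duration_ms = 16.087         # average duration for QueueTrigger from the duration query
executions = 1_000_000           # assumed number of executions over the period

memory_gb = private_bytes / (1024 ** 3)
gb_seconds = memory_gb * (avg_duration_ms / 1000) * executions
print(f"~{gb_seconds:,.0f} GB-s for {executions:,} executions")
```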

Next steps