Optimize provisioned throughput cost in Azure Cosmos DB

By offering a provisioned throughput model, Azure Cosmos DB delivers predictable performance at any scale. Reserving or provisioning throughput ahead of time eliminates the "noisy neighbor effect" on your performance. You specify the exact amount of throughput you need, and Azure Cosmos DB guarantees the configured throughput, backed by an SLA.

You can start with a minimum throughput of 400 RU/sec and scale up to tens of millions of requests per second or even more. Each request you issue against your Azure Cosmos container or database, such as a read, a write, a query, or a stored procedure execution, has a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you can issue 10 such queries per second. Any request beyond that gets rate-limited and should be retried. If you are using the client drivers, they support automatic retry logic.
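The budgeting arithmetic above can be sketched directly; the 40 RU query charge is the illustrative figure from the text, not a fixed value:

```python
# Sketch of RU budget math: how many operations of a given RU cost
# fit into one second of provisioned throughput.

def max_ops_per_second(provisioned_ru_per_sec: int, ru_per_op: float) -> int:
    """Whole operations per second that fit within the provisioned budget."""
    return int(provisioned_ru_per_sec // ru_per_op)

print(max_ops_per_second(400, 40))  # the example from the text: 10 queries/sec
print(max_ops_per_second(400, 5))   # cheaper point reads fit 80 times per second
```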

You can provision throughput on databases or on containers, and each strategy can help you save on costs depending on your scenario.

Optimize by provisioning throughput at different levels

  • If you provision throughput on a database, all the containers within that database (for example, collections/tables/graphs) share the throughput based on load. Throughput reserved at the database level is shared unevenly, depending on the workload on a specific set of containers.

  • If you provision throughput on a container, the throughput is guaranteed for that container, backed by the SLA. The choice of a logical partition key is crucial for even distribution of load across all the logical partitions of a container. See the Partitioning and horizontal scaling article for more details.

The following are some guidelines for deciding on a provisioned throughput strategy:

Consider provisioning throughput on an Azure Cosmos database (containing a set of containers) if:

  1. You have a few dozen Azure Cosmos containers and want to share throughput across some or all of them.

  2. You are migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises (for example, a NoSQL or relational database) to Azure Cosmos DB, you have many collections/tables/graphs, and you don't want to make any changes to your data model. Note that you might have to forgo some of the benefits offered by Azure Cosmos DB if you don't update your data model when migrating from an on-premises database. It's recommended that you always revisit your data model to get the most in terms of performance and to optimize for costs.

  3. You want to absorb unplanned spikes in workloads, by virtue of pooled throughput at the database level.

  4. Instead of setting specific throughput on individual containers, you care about getting the aggregate throughput across a set of containers within the database.

Consider provisioning throughput on an individual container if:

  1. You have a few Azure Cosmos containers. Because Azure Cosmos DB is schema-agnostic, a container can hold items that have heterogeneous schemas, so you don't need to create multiple container types, one for each entity. It's always worth considering whether grouping, say, 10-20 separate containers into a single container makes sense. With a 400 RU/s minimum per container, pooling all 10-20 containers into one could be more cost-effective.

  2. You want to control the throughput on a specific container and get guaranteed throughput on that container, backed by the SLA.
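The cost argument for pooling small containers (point 1 in the list above) can be sketched as simple arithmetic, using the 400 RU/s per-container minimum from the text:

```python
# Compare the minimum throughput floor of N separate containers,
# each bound by the 400 RU/s minimum, against one pooled container.

CONTAINER_MIN_RU = 400  # minimum throughput per container, per the text

def separate_minimum(num_containers: int) -> int:
    """Total RU/s floor when every entity type gets its own container."""
    return num_containers * CONTAINER_MIN_RU

def pooled_minimum() -> int:
    """RU/s floor when heterogeneous items share a single container."""
    return CONTAINER_MIN_RU

print(separate_minimum(20))  # 8000 RU/s floor for 20 separate containers
print(pooled_minimum())      # 400 RU/s floor when pooled into one container
```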

Consider a hybrid of the above two strategies:

  1. As mentioned earlier, Azure Cosmos DB lets you mix and match the above two strategies, so you can have some containers within an Azure Cosmos database that share the throughput provisioned on the database, as well as some containers within the same database that have dedicated amounts of provisioned throughput.

  2. You can apply the above strategies to come up with a hybrid configuration, where you have database-level provisioned throughput alongside some containers with dedicated throughput.

As shown in the following table, you can provision throughput at different granularities depending on your choice of API.

| API | For shared throughput, configure | For dedicated throughput, configure |
| --- | --- | --- |
| SQL API | Database | Container |
| Azure Cosmos DB's API for MongoDB | Database | Collection |
| Cassandra API | Keyspace | Table |
| Gremlin API | Database account | Graph |
| Table API | Database account | Table |

By provisioning throughput at different levels, you can optimize your costs based on the characteristics of your workload. As mentioned earlier, you can programmatically, and at any time, increase or decrease your provisioned throughput, either for individual containers or collectively across a set of containers. By elastically scaling throughput as your workload changes, you only pay for the throughput you have configured. If your container or set of containers is distributed across multiple regions, the throughput you configure is guaranteed to be made available across all regions.

Optimize by rate-limiting your requests

For workloads that aren't sensitive to latency, you can provision less throughput and let the application handle rate-limiting when the actual throughput exceeds the provisioned throughput. The server preemptively ends the request with RequestRateTooLarge (HTTP status code 429) and returns the x-ms-retry-after-ms header, indicating the amount of time, in milliseconds, that the user must wait before retrying the request.

HTTP Status 429,
 Status Line: RequestRateTooLarge
 x-ms-retry-after-ms: 100
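If you're not using a native SDK, the retry behavior this header enables can be sketched as a simple loop. This is an illustrative sketch, not SDK code: send_request is a hypothetical callable standing in for your HTTP layer.

```python
import time

def send_with_retries(send_request, max_retries: int = 9):
    """Retry a request when the server returns 429, honoring the
    x-ms-retry-after-ms back-off header. `send_request` is a
    hypothetical callable returning an object with `status_code`
    and `headers` attributes."""
    response = None
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        # Wait the server-specified back-off before retrying.
        wait_ms = int(response.headers.get("x-ms-retry-after-ms", "100"))
        time.sleep(wait_ms / 1000.0)
    return response  # still throttled after exhausting retries
```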

Retry logic in SDKs

The native SDKs (.NET/.NET Core, Java, Node.js, and Python) implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.

If you have more than one client cumulatively operating consistently above the request rate, the default retry count, currently set to 9, may not be sufficient. In that case, the client throws a DocumentClientException with status code 429 to the application. You can change the default retry count by setting RetryOptions on the ConnectionPolicy instance. By default, the DocumentClientException with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.

Here, MaxRetryAttemptsOnThrottledRequests is set to 3, so if a request operation is rate-limited by exceeding the reserved throughput for the container, the operation is retried three times before the exception is thrown to the application. MaxRetryWaitTimeInSeconds is set to 60, so if the cumulative retry wait time since the first request exceeds 60 seconds, the exception is thrown.

ConnectionPolicy connectionPolicy = new ConnectionPolicy(); 
connectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 3; 
connectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 60;

Partitioning strategy and provisioned throughput costs

A good partitioning strategy is important for optimizing costs in Azure Cosmos DB. Ensure that there is no skew of partitions, which is exposed through storage metrics. Ensure that there is no skew of throughput for a partition, which is exposed through throughput metrics. Ensure that there is no skew towards particular partition keys. Dominant keys in storage are exposed through metrics, but whether a key dominates depends on your application's access pattern. It's best to think about the right logical partition key up front. A good partition key is expected to have the following characteristics:

  • Choose a partition key that spreads the workload evenly across all partitions and evenly over time. In other words, you shouldn't have some keys holding the majority of the data and other keys with little or no data.

  • Choose a partition key that enables access patterns to be evenly spread across logical partitions, so that the workload is reasonably even across all keys. In other words, the majority of the workload shouldn't be focused on a few specific keys.

  • Choose a partition key that has a wide range of values.

The basic idea is to spread the data and the activity in your container across the set of logical partitions, so that the resources for data storage and throughput can be distributed across the logical partitions. Candidates for partition keys may include properties that appear frequently as a filter in your queries. Queries can be efficiently routed by including the partition key in the filter predicate. With such a partitioning strategy, optimizing provisioned throughput is a lot easier.
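One way to sanity-check a candidate partition key before committing to it is to measure how evenly it would spread a sample of your data; a rough sketch:

```python
from collections import Counter

def skew_ratio(partition_key_values):
    """Ratio of the largest logical partition to the average partition size.
    A ratio near 1.0 suggests an even spread; a large ratio signals a
    dominant key (skew)."""
    counts = Counter(partition_key_values)
    average = sum(counts.values()) / len(counts)
    return max(counts.values()) / average

# An evenly spread key vs. one dominated by a single value:
even = ["user1", "user2", "user3", "user4"] * 25
skewed = ["tenantA"] * 97 + ["tenantB", "tenantC", "tenantD"]
print(round(skew_ratio(even), 2))    # 1.0
print(round(skew_ratio(skewed), 2))  # 3.88
```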

Design smaller items for higher throughput

The request charge, that is, the request processing cost, of a given operation is directly correlated with the size of the item. Operations on large items cost more than operations on smaller items.

Data access patterns

It's always a good practice to logically separate your data into categories based on how frequently you access it. By categorizing it as hot, medium, or cold data, you can fine-tune the storage consumed and the throughput required. Depending on the frequency of access, you can place the data into separate containers (for example, tables, graphs, and collections) and fine-tune the provisioned throughput on each to accommodate the needs of that segment of data.

Furthermore, if you're using Azure Cosmos DB and you know you're not going to search by certain data values, or will rarely access them, you should store the compressed values of those attributes. With this method you save on storage space, index space, and provisioned throughput, resulting in lower costs.
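Compressing rarely accessed attributes before storing them, as suggested above, can be sketched with the standard library; zlib plus base64 is an illustrative choice made here, not something the service does for you:

```python
import base64
import zlib

def pack(value: str) -> str:
    """Compress a rarely accessed attribute and encode it for JSON storage."""
    return base64.b64encode(zlib.compress(value.encode("utf-8"))).decode("ascii")

def unpack(packed: str) -> str:
    """Restore the original value when it is actually needed."""
    return zlib.decompress(base64.b64decode(packed)).decode("utf-8")

blob = "free-form notes " * 50   # repetitive text compresses well
packed = pack(blob)
assert unpack(packed) == blob
print(len(packed) < len(blob))  # True: the stored form is smaller
```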

Optimize by changing indexing policy

By default, Azure Cosmos DB automatically indexes every property of every record. This is intended to ease development and ensure excellent performance across many different types of ad hoc queries. If you have large records with thousands of properties, paying the throughput cost of indexing every property may not be useful, especially if you only query against 10 or 20 of those properties. As you get closer to getting a handle on your specific workload, our guidance is to tune your indexing policy. Full details on Azure Cosmos DB indexing policy can be found here.
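As an illustration, an indexing policy that indexes only a couple of frequently queried paths and excludes everything else might look like the following (the property names here are hypothetical):

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    { "path": "/customerId/?" },
    { "path": "/orderDate/?" }
  ],
  "excludedPaths": [
    { "path": "/*" }
  ]
}
```

Writes to properties outside the included paths then skip index maintenance, which lowers the RU charge of those writes.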

Monitoring provisioned and consumed throughput

You can monitor the total number of RUs provisioned, the number of rate-limited requests, and the number of RUs you've consumed in the Azure portal. The following image shows an example usage metric:

Monitor request units in the Azure portal

You can also set alerts to check whether the number of rate-limited requests exceeds a specific threshold. See the How to monitor Azure Cosmos DB article for more details. These alerts can send an email to the account administrators, or call a custom HTTP webhook or an Azure Function, to automatically increase provisioned throughput.

Scale your throughput elastically and on demand

Since you are billed for the throughput provisioned, matching your provisioned throughput to your needs can help you avoid charges for unused throughput. You can scale your provisioned throughput up or down at any time, as needed.

  • Monitoring the consumption of your RUs and the ratio of rate-limited requests may reveal that you don't need to keep the provisioned throughput constant throughout the day or the week. You may receive less traffic at night or during the weekend. By using either the Azure portal, the Azure Cosmos DB native SDKs, or the REST API, you can scale your provisioned throughput at any time. The REST API provides endpoints to programmatically update the performance level of your containers, making it straightforward to adjust the throughput from your code depending on the time of day or the day of the week. The operation is performed without any downtime and typically takes effect in less than a minute.

  • One scenario in which you should scale throughput is when you ingest data into Azure Cosmos DB, for example during a data migration. Once you have completed the migration, you can scale the provisioned throughput down to handle the solution's steady state.

  • Remember, billing is at the granularity of one hour, so you won't save any money by changing your provisioned throughput more often than once an hour.
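The hourly granularity in the last point can be sketched as arithmetic, assuming each hour is billed on the highest RU/s provisioned at any point during that hour (the per-RU rate below is a placeholder, not a real price):

```python
def hourly_bill(ru_settings_during_hour, rate_per_100ru_hour: float = 1.0):
    """Charge for one hour, assuming billing on the highest RU/s
    provisioned at any point during that hour. The rate is a
    placeholder; consult the pricing page for real figures."""
    peak = max(ru_settings_during_hour)
    return peak / 100 * rate_per_100ru_hour

# Scaling 1000 -> 400 RU/s mid-hour still bills the hour at the 1000 peak:
print(hourly_bill([1000, 400]))  # 10.0
# The saving appears only in the following full hour at 400 RU/s:
print(hourly_bill([400]))        # 4.0
```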

Determine the throughput needed for a new workload

To determine the provisioned throughput for a new workload, use the following steps:

  1. Perform an initial, rough evaluation using the capacity planner, and adjust your estimates with the help of the Azure Cosmos Explorer in the Azure portal.

  2. It's recommended to create the containers with higher throughput than expected and then scale down as needed.

  3. It's recommended to use one of the native Azure Cosmos DB SDKs to benefit from automatic retries when requests are rate-limited. If you're working on an unsupported platform and use the Cosmos DB REST API, implement your own retry policy using the x-ms-retry-after-ms header.

  4. Make sure that your application code gracefully handles the case when all retries fail.

  5. You can configure alerts from the Azure portal to get notified of rate-limiting. You can start with conservative limits, like 10 rate-limited requests over the last 15 minutes, and switch to more aggressive rules once you've figured out your actual consumption. Occasional rate limits are fine; they show that you're pushing against the limits you've set, which is exactly what you want.

  6. Use monitoring to understand your traffic pattern, so you can assess the need to dynamically adjust your throughput provisioning over the day or the week.

  7. Monitor your provisioned vs. consumed throughput ratio regularly to make sure you haven't provisioned more throughput than required across your containers and databases. Having a little over-provisioned throughput is a good safety margin.

Best practices to optimize provisioned throughput

The following steps help you make your solutions highly scalable and cost-effective when using Azure Cosmos DB.

  1. If you have significantly over-provisioned throughput across containers and databases, you should review the RUs provisioned vs. the RUs consumed and fine-tune the workloads.

  2. One method for estimating the amount of reserved throughput required by your application is to record the request unit (RU) charge associated with running typical operations against a representative Azure Cosmos container or database used by your application, and then estimate the number of operations you anticipate performing each second. Be sure to measure and include typical queries and their usage as well. To learn how to estimate the RU costs of queries programmatically or by using the portal, see Optimizing the cost of queries.
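The estimation method in point 2 comes down to simple arithmetic over measured per-operation charges and anticipated rates; a sketch with illustrative, not measured, RU figures:

```python
# Estimate required RU/s from measured per-operation charges and the
# anticipated rate of each operation. All numbers are illustrative.

measured_charges = {      # RU cost observed per typical operation
    "point_read": 1.0,
    "write": 10.0,
    "top_query": 40.0,
}
expected_ops_per_sec = {  # how often each operation is expected per second
    "point_read": 100,
    "write": 20,
    "top_query": 5,
}

required_ru = sum(
    measured_charges[op] * rate for op, rate in expected_ops_per_sec.items()
)
print(required_ru)  # 100*1 + 20*10 + 5*40 = 500.0 RU/s as a starting estimate
```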

  3. Another way to get operations and their costs in RUs is by enabling Azure Monitor logs, which give you a breakdown of operation/duration and the request charge. Azure Cosmos DB provides the request charge for every operation, so every operation's charge can be read from the response and stored for analysis.

  4. You can elastically scale provisioned throughput up and down as needed to accommodate your workload.

  5. You can add and remove regions associated with your Azure Cosmos account as needed to control costs.

  6. Make sure you have an even distribution of data and workload across the logical partitions of your containers. An uneven partition distribution may force you to provision a higher amount of throughput than needed. If you identify a skewed distribution, we recommend redistributing the workload evenly across the partitions, or repartitioning the data.

  7. If you have many containers and these containers do not require SLAs, you can use the database-based offer for the cases where the per-container throughput SLAs don't apply. Identify which of your Azure Cosmos containers you want to migrate to the database-level throughput offer, and then migrate them by using a change feed-based solution.

  8. Consider using the downloadable Cosmos DB emulator for dev/test scenarios. Using it for dev/test can substantially lower your costs.

  9. You can further perform workload-specific cost optimizations, for example increasing batch size, load-balancing reads across multiple regions, and deduplicating data, if applicable.

Next steps

Next, you can learn more about cost optimization in Azure Cosmos DB with the following articles: