Frequently asked questions about different APIs in Azure Cosmos DB

What are the typical use cases for Azure Cosmos DB?

Azure Cosmos DB is a good choice for new web, mobile, gaming, and IoT applications where automatic scale, predictable performance, fast response times on the order of milliseconds, and the ability to query over schema-free data are important. Azure Cosmos DB lends itself to rapid development and supports continuous iteration of application data models. Applications that manage user-generated content and data are common use cases for Azure Cosmos DB.

How does Azure Cosmos DB offer predictable performance?

A request unit (RU) is the measure of throughput in Azure Cosmos DB. A throughput of 1 RU corresponds to the throughput of a GET of a 1-KB document. Every operation in Azure Cosmos DB, including reads, writes, SQL queries, and stored procedure executions, has a deterministic RU value based on the throughput required to complete the operation. Instead of thinking about CPU, IO, and memory and how they each affect your application throughput, you can think in terms of a single RU measure.

You can configure each Azure Cosmos container with provisioned throughput in terms of RUs per second. For applications of any scale, you can benchmark individual requests to measure their RU values, and provision a container to handle the total of request units across all requests. You can also scale your container's throughput up or down as the needs of your application evolve. For more information about request units and for help with determining your container needs, try the throughput calculator.
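The sizing exercise described above can be sketched as simple arithmetic: benchmark the RU cost of each request type, multiply by the expected request rate, and sum. The per-operation RU costs and request rates below are hypothetical illustration numbers, not values published by Azure Cosmos DB.

```python
# Sketch: estimating the RUs/second to provision for a container,
# given per-operation RU costs measured by benchmarking.
benchmarks = {
    # operation: (RU cost per request, expected requests per second)
    "point_read_1kb": (1.0, 500),
    "write_1kb": (5.0, 100),
    "query_simple": (3.0, 50),
}

def required_rus_per_second(benchmarks, headroom=1.2):
    """Sum RU * rate across operations, with headroom for traffic spikes."""
    base = sum(ru * rate for ru, rate in benchmarks.values())
    return int(base * headroom)

print(required_rus_per_second(benchmarks))  # 1380
```

With these sample numbers, the steady-state demand is 1,150 RU/s, and a 20% headroom factor suggests provisioning 1,380 RU/s.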

How does Azure Cosmos DB support various data models such as key/value, columnar, document, and graph?

Key/value (table), columnar, document, and graph data models are all natively supported because of the ARS (atoms, records, and sequences) design that Azure Cosmos DB is built on. Atoms, records, and sequences can be easily mapped and projected to various data models. APIs for a subset of models are available right now (SQL, MongoDB, Table, and Gremlin), and others specific to additional data models will be available in the future.

Azure Cosmos DB has a schema-agnostic indexing engine capable of automatically indexing all the data it ingests without requiring any schema or secondary indexes from the developer. The engine relies on a set of logical index layouts (inverted, columnar, tree) that decouple the storage layout from the index and query processing subsystems. Cosmos DB also has the ability to support a set of wire protocols and APIs in an extensible manner, and to translate them efficiently to the core data model (1) and the logical index layouts (2), making it uniquely capable of supporting more than one data model natively.

Can I use multiple APIs to access my data?

Azure Cosmos DB is 21Vianet's multiple-regionally distributed, multi-model database service. Multi-model means that Azure Cosmos DB supports multiple APIs and multiple data models; different APIs use different data formats for storage and wire protocol. For example, SQL uses JSON, MongoDB uses BSON, Table uses EDM, Cassandra uses CQL, and Gremlin uses GraphSON. As a result, we recommend using the same API for all access to the data in a given account.

Each API operates independently, except the Gremlin and SQL APIs, which are interoperable.

Is Azure Cosmos DB HIPAA compliant?

Yes, Azure Cosmos DB is HIPAA-compliant. HIPAA establishes requirements for the use, disclosure, and safeguarding of individually identifiable health information. For more information, see the Azure Trust Center.

What are the storage limits of Azure Cosmos DB?

There's no limit to the total amount of data that a container can store in Azure Cosmos DB.

What are the throughput limits of Azure Cosmos DB?

There's no limit to the total amount of throughput that a container can support in Azure Cosmos DB. The key idea is to distribute your workload roughly evenly among a sufficiently large number of partition keys.
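One way to sanity-check that a candidate partition key spreads load evenly is to hash a sample of key values into buckets and compare bucket sizes. This is only an illustration: the key names are synthetic, and Cosmos DB performs its own hash partitioning internally.

```python
# Sketch: checking that a candidate partition key distributes load
# roughly evenly across hash buckets.
import hashlib
from collections import Counter

def bucket_for(key, buckets=10):
    """Deterministically map a partition-key value to a bucket."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

# Many distinct user IDs: a good partition-key candidate.
load = Counter(bucket_for(f"user-{i}") for i in range(10_000))

# With enough distinct keys, no bucket should dominate.
print(max(load.values()) / min(load.values()))
```

A ratio close to 1 means the load is well spread; a key with only a handful of distinct values (for example, a country code) would concentrate traffic in a few buckets and create hot partitions.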

Are Direct and Gateway connectivity modes encrypted?

Yes, both modes are always fully encrypted.

How much does Azure Cosmos DB cost?

For details, refer to the Azure Cosmos DB pricing details page. Azure Cosmos DB usage charges are determined by the number of provisioned containers, the number of hours the containers were online, and the provisioned throughput for each container.

Is a trial account available?

If you're new to Azure, you can sign up for an Azure trial account, which gives you 30 days and a credit to try all the Azure services. If you have a Visual Studio subscription, you're also eligible for free Azure credits to use on any Azure service.

You can also use the Azure Cosmos DB Emulator to develop and test your application locally for free, without creating an Azure subscription. When you're satisfied with how your application works in the Azure Cosmos DB Emulator, you can switch to using an Azure Cosmos DB account in the cloud.

How can I get additional help with Azure Cosmos DB?

To ask a technical question, you can post to the question and answer forums:

To request new features, create a new request on the Azure support site.

To fix an issue with your account, file a support request in the Azure portal.

Set up Azure Cosmos DB

How do I sign up for Azure Cosmos DB?

Azure Cosmos DB is available in the Azure portal. First, sign up for an Azure subscription. After you've signed up, you can add an Azure Cosmos DB account to your Azure subscription.

What is a master key?

A master key is a security token that provides access to all resources for an account. Individuals with the key have read and write access to all resources in the database account. Use caution when you distribute master keys. The primary master key and secondary master key are available on the Keys blade of the Azure portal. For more information about keys, see View, copy, and regenerate access keys.

What are the regions that PreferredLocations can be set to?

The PreferredLocations value can be set to any of the Azure regions in which Cosmos DB is available. For a list of available regions, see Azure regions.

Is there anything I should be aware of when distributing data across China via the Azure datacenters?

Azure Cosmos DB is present in all Azure China regions, as specified on the Azure regions page. Because it's a core service, every new datacenter has an Azure Cosmos DB presence.

When you set a region, remember that Azure Cosmos DB respects sovereign and government clouds. That is, if you create an account in a sovereign region, you can't replicate out of that sovereign region. Similarly, you can't enable replication into other sovereign locations from an outside account.

Is it possible to switch from container-level throughput provisioning to database-level throughput provisioning, or vice versa?

Container-level and database-level throughput provisioning are separate offerings, and switching between them requires migrating data from source to destination. This means you need to create a new database or a new container and then migrate the data by using the bulk executor library or Azure Data Factory.

Does Azure Cosmos DB support time series analysis?

Yes, Azure Cosmos DB supports time series analysis; a sample for the time series pattern is available. The sample shows how to use change feed to build aggregated views over time series data. You can extend this approach by using Spark streaming or another stream data processor.

What are the Azure Cosmos DB service quotas and throughput limits?

See the Azure Cosmos DB service quotas and throughput limits per container and database articles for more information.


How do I start developing against the SQL API?

First, you must sign up for an Azure subscription. Once you sign up for an Azure subscription, you can add a SQL API container to your Azure subscription. For instructions on adding an Azure Cosmos DB account, see Create an Azure Cosmos database account.

SDKs are available for .NET, Python, Node.js, JavaScript, and Java. Developers can also use the RESTful HTTP APIs to interact with Azure Cosmos DB resources from various platforms and languages.

Can I access some ready-made samples to get a head start?

Samples for the SQL API .NET, Java, Node.js, and Python SDKs are available on GitHub.

Does the SQL API database support schema-free data?

Yes, the SQL API allows applications to store arbitrary JSON documents without schema definitions or hints. Data is immediately available for query through the Azure Cosmos DB SQL query interface.

Does the SQL API support ACID transactions?

Yes, the SQL API supports cross-document transactions expressed as JavaScript stored procedures and triggers. Transactions are scoped to a single partition within each container and executed with ACID semantics as "all or nothing," isolated from other concurrently executing code and user requests. If an exception is thrown during the server-side execution of JavaScript application code, the entire transaction is rolled back.

What is a container?

A container is a group of documents and their associated JavaScript application logic. A container is a billable entity, where the cost is determined by the throughput and the storage used. Containers can span one or more partitions or servers and can scale to handle practically unlimited volumes of storage or throughput.

  • For the SQL API, a container maps to a Container.
  • For Cosmos DB's API for MongoDB accounts, a container maps to a Collection.
  • For Cassandra and Table API accounts, a container maps to a Table.
  • For Gremlin API accounts, a container maps to a Graph.

Containers are also the billing entities for Azure Cosmos DB. Each container is billed hourly, based on the provisioned throughput and used storage space. For more information, see Azure Cosmos DB Pricing.

How do I create a database?

You can create databases by using the Azure portal, as described in Add a container, one of the Azure Cosmos DB SDKs, or the REST APIs.

How do I set up users and permissions?

You can create users and permissions by using one of the Cosmos DB API SDKs or the REST APIs.

Does the SQL API support SQL?

The SQL query language supported by SQL API accounts is an enhanced subset of the query functionality supported by SQL Server. The Azure Cosmos DB SQL query language provides rich hierarchical and relational operators, and extensibility via JavaScript-based user-defined functions (UDFs). JSON grammar allows for modeling JSON documents as trees with labeled nodes, which are used by both the Azure Cosmos DB automatic indexing techniques and the SQL query dialect of Azure Cosmos DB. For information about using SQL grammar, see the SQL Query article.

Does the SQL API support SQL aggregation functions?

The SQL API supports low-latency aggregation at any scale through the aggregate functions COUNT, MIN, MAX, AVG, and SUM in the SQL grammar. For more information, see Aggregate functions.

How does the SQL API provide concurrency?

The SQL API supports optimistic concurrency control (OCC) through HTTP entity tags, or ETags. Every SQL API resource has an ETag, and the ETag is set on the server every time a document is updated. The ETag header and the current value are included in all response messages. ETags can be used with the If-Match header to let the server decide whether a resource should be updated. The If-Match value is the ETag value to be checked against. If the ETag value matches the server's ETag value, the resource is updated. If the ETag is no longer current, the server rejects the operation with an "HTTP 412 Precondition Failed" response code. The client then refetches the resource to acquire the current ETag value for the resource. In addition, ETags can be used with the If-None-Match header to determine whether a refetch of a resource is needed.
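The If-Match handshake described above can be illustrated with a toy in-memory store standing in for the service; the class and document contents here are hypothetical, not part of any Cosmos DB SDK.

```python
# Sketch: the If-Match / ETag check a server performs for optimistic
# concurrency control.
import uuid

class Store:
    def __init__(self):
        self.doc, self.etag = {"name": "Andy"}, str(uuid.uuid4())

    def read(self):
        return dict(self.doc), self.etag

    def replace(self, new_doc, if_match):
        if if_match != self.etag:
            return 412, None          # HTTP 412 Precondition Failed
        # Every successful update assigns a fresh ETag.
        self.doc, self.etag = new_doc, str(uuid.uuid4())
        return 200, self.etag

store = Store()
_, etag = store.read()
status, new_etag = store.replace({"name": "Andy", "age": 30}, if_match=etag)
print(status)   # 200: the ETag matched, so the write succeeded

# A second writer still holding the old ETag is rejected.
status, _ = store.replace({"name": "Bob"}, if_match=etag)
print(status)   # 412: stale ETag; refetch and retry
```

The losing writer refetches the document, reapplies its change on top of the current version, and retries with the new ETag.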

To use optimistic concurrency in .NET, use the AccessCondition class. For a .NET sample, see Program.cs in the DocumentManagement sample on GitHub.

How do I perform transactions in the SQL API?

The SQL API supports language-integrated transactions via JavaScript stored procedures and triggers. All database operations inside scripts are executed under snapshot isolation. If it's a single-partition container, the execution is scoped to the container. If the container is partitioned, the execution is scoped to documents with the same partition-key value within the container. A snapshot of the document versions (ETags) is taken at the start of the transaction and committed only if the script succeeds. If the JavaScript throws an error, the transaction is rolled back. For more information, see Server-side JavaScript programming for Azure Cosmos DB.
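The "all or nothing" semantics described above can be simulated outside the service: run every operation against an isolated snapshot, and promote the snapshot only if nothing failed. This is a conceptual sketch in Python, not the actual stored-procedure runtime, which executes JavaScript server-side.

```python
# Sketch: all-or-nothing commit over a snapshot, mimicking the rollback
# behavior of a Cosmos DB stored procedure that throws.
import copy

def run_transaction(store, operations):
    snapshot = copy.deepcopy(store)   # work on an isolated snapshot
    try:
        for op in operations:
            op(snapshot)
    except Exception:
        return store                  # any failure rolls everything back
    return snapshot                   # success commits all changes at once

docs = {"a": {"count": 0}}
committed = run_transaction(docs, [
    lambda s: s["a"].update(count=1),
    lambda s: s["__missing__"].update(count=2),  # raises KeyError
])
print(committed)  # {'a': {'count': 0}} -- nothing was applied
```

Because the second operation fails, the first operation's update is discarded along with it, which is exactly the guarantee a thrown JavaScript exception gives inside a stored procedure.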

How can I bulk-insert documents into Cosmos DB?

You can bulk-insert documents into Azure Cosmos DB in one of the following ways:

Yes, because Azure Cosmos DB is a RESTful service, resource links are immutable and can be cached. SQL API clients can specify an "If-None-Match" header for reads against any resource, such as a document or container, and then update their local copies after the server version has changed.

Is a local instance of the SQL API available?

Yes. The Azure Cosmos DB Emulator provides a high-fidelity emulation of the Cosmos DB service. It supports functionality that's identical to Azure Cosmos DB, including support for creating and querying JSON documents, provisioning and scaling collections, and executing stored procedures and triggers. You can develop and test applications by using the Azure Cosmos DB Emulator, and deploy them to Azure at multiple-region scale by making a single configuration change to the connection endpoint for Azure Cosmos DB.

Why are long floating-point values in a document rounded when viewed from the Data Explorer in the portal?

This is a limitation of JavaScript. JavaScript uses double-precision floating-point format numbers as specified in IEEE 754, and it can safely hold only numbers between -(2^53 - 1) and 2^53 - 1 (that is, 9007199254740991).
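The rounding can be reproduced with Python floats, which are the same IEEE 754 doubles that JavaScript uses for all numbers:

```python
# Integers above 2**53 - 1 are no longer exactly representable as
# IEEE 754 doubles, so adjacent values collapse to the same double.
MAX_SAFE_INTEGER = 2**53 - 1              # 9007199254740991

print(MAX_SAFE_INTEGER)                   # still exact as a double
print(float(2**53) == float(2**53 + 1))   # True: 2**53 + 1 rounds down
```

This is why a long numeric ID stored in a document can display with its last digits changed in the portal: the value survived storage intact, but the JavaScript-based viewer rounds it on display.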

Where are permissions allowed in the object hierarchy?

Creating permissions by using ResourceTokens is allowed at the container level and its descendants (such as documents and attachments). This implies that creating a permission at the database or account level isn't currently allowed.

Azure Cosmos DB's API for MongoDB

What is Azure Cosmos DB's API for MongoDB?

Azure Cosmos DB's API for MongoDB is a wire-protocol compatibility layer that allows applications to easily and transparently communicate with the native Azure Cosmos database engine by using existing, community-supported SDKs and drivers for MongoDB. Developers can now use existing MongoDB toolchains and skills to build applications that take advantage of Azure Cosmos DB. Developers benefit from the unique capabilities of Azure Cosmos DB, which include multiple-region distribution with multi-master replication, auto-indexing, backup maintenance, financially backed service level agreements (SLAs), and more.

How do I connect to my database?

The quickest way to connect to a Cosmos database with Azure Cosmos DB's API for MongoDB is to head over to the Azure portal. Go to your account and then, on the left navigation menu, click Quick Start. The quickstart is the best way to get code snippets to connect to your database.

Azure Cosmos DB enforces strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via SSL, so be sure to use TLS 1.2.

For more information, see Connect to your Cosmos database with Azure Cosmos DB's API for MongoDB.

Are there additional error codes that I need to deal with while using Azure Cosmos DB's API for MongoDB?

Along with the common MongoDB error codes, Azure Cosmos DB's API for MongoDB has its own specific error codes:

  • TooManyRequests (16500): The total number of request units consumed is more than the provisioned request-unit rate for the container, and requests have been throttled. Consider scaling the throughput assigned to a container or a set of containers from the Azure portal, or retry the operation.
  • ExceededMemoryLimit (16501): As a multi-tenant service, the operation has exceeded the client's memory allotment. Reduce the scope of the operation through more restrictive query criteria, or contact support from the Azure portal.

Example:

    db.getCollection('users').aggregate([
        {$match: {name: "Andy"}},
        {$sort: {age: -1}}
    ])

Is the Simba driver for MongoDB supported for use with Azure Cosmos DB's API for MongoDB?

Yes, you can use Simba's Mongo ODBC driver with Azure Cosmos DB's API for MongoDB.

Table API

How can I use the Table API offering?

The Azure Cosmos DB Table API is available in the Azure portal. First, you must sign up for an Azure subscription. After you've signed up, you can add an Azure Cosmos DB Table API account to your Azure subscription, and then add tables to your account.

You can find the supported languages and associated quickstarts in the Introduction to Azure Cosmos DB Table API.

Do I need a new SDK to use the Table API?

No, existing storage SDKs should still work. However, it's recommended that you always get the latest SDKs for the best support and, in many cases, superior performance. See the list of available languages in the Introduction to Azure Cosmos DB Table API.

Where is the Table API not identical to Azure Table storage behavior?

There are some behavior differences that users coming from Azure Table storage who want to create tables with the Azure Cosmos DB Table API should be aware of:

  • The Azure Cosmos DB Table API uses a reserved-capacity model to ensure guaranteed performance, but this means you pay for the capacity as soon as the table is created, even if the capacity isn't being used. With Azure Table storage, you pay only for the capacity that's used. This helps to explain why the Table API can offer a 10-ms read and 15-ms write SLA at the 99th percentile, while Azure Table storage offers a 10-second SLA. But as a consequence, with Table API tables, even empty tables without any requests cost money, in order to ensure the capacity is available to handle any requests to them at the SLA offered by Azure Cosmos DB.
  • Query results returned by the Table API aren't sorted in partition key/row key order as they are in Azure Table storage.
  • Row keys can only be up to 255 bytes.
  • Batches can only be up to 2 MB.
  • CORS isn't currently supported.
  • Table names in Azure Table storage aren't case-sensitive, but they are in the Azure Cosmos DB Table API.
  • Some of Azure Cosmos DB's internal formats for encoding information, such as binary fields, are currently not as efficient as one might like. Therefore, this can cause unexpected limitations on data size. For example, you currently can't use the full one megabyte of a table entity to store binary data, because the encoding increases the data's size.
  • The entity property name 'ID' isn't currently supported.
  • TableQuery TakeCount isn't limited to 1,000.

In terms of the REST API, there are a number of endpoints/query options that aren't supported by the Azure Cosmos DB Table API:

  • GET, PUT on /?restype=service&comp=properties (Set Table Service Properties, Get Table Service Properties): This endpoint is used to set CORS rules, storage analytics configuration, and logging settings. CORS is currently not supported, and analytics and logging are handled differently in Azure Cosmos DB than in Azure Storage Tables.
  • OPTIONS on /<table-resource-name> (Pre-flight CORS table request): This is part of CORS, which Azure Cosmos DB doesn't currently support.
  • GET on /?restype=service&comp=stats (Get Table Service Stats): Provides information about how quickly data replicates between the primary and secondaries. This isn't needed in Cosmos DB, because replication is part of writes.
  • GET, PUT on /mytable?comp=acl (Get Table ACL, Set Table ACL): Gets and sets the stored access policies used to manage shared access signatures (SAS). Although SAS is supported, it's set and managed differently.

In addition, the Azure Cosmos DB Table API supports only the JSON format, not ATOM.

While Azure Cosmos DB supports shared access signatures (SAS), there are certain policies it doesn't support, specifically those related to management operations, such as the right to create new tables.

For the .NET SDK in particular, there are some classes and methods that Azure Cosmos DB doesn't currently support.

  • CloudTableClient: *ServiceProperties*
  • CloudTable: SetPermissions*
  • TableServiceContext: (this class is deprecated)
  • TableServiceEntity: (this class is deprecated)
  • TableServiceExtensions: (this class is deprecated)
  • TableServiceQuery: (this class is deprecated)

How do I provide feedback about the SDK or bugs?

You can share your feedback in any of the following ways:

What is the connection string that I need to use to connect to the Table API?

The connection string is:

DefaultEndpointsProtocol=https;AccountName=<AccountNameFromCosmosDB>;AccountKey=<FromKeysPaneOfCosmosDB>;TableEndpoint=https://<AccountName>.table.cosmos.azure.cn

You can get the connection string from the Connection String page in the Azure portal.
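A connection string of the shape shown above is just semicolon-separated key/value pairs, which a client can split apart as sketched below. The account name and key values here are placeholders, not real credentials.

```python
# Sketch: parsing a Table API connection string into its parts.
def parse_connection_string(cs):
    parts = {}
    for segment in cs.split(";"):
        if segment:
            # partition() keeps any '=' characters inside the value
            # (account keys are base64 and may end with '=' padding).
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

cs = ("DefaultEndpointsProtocol=https;"
      "AccountName=mycosmosaccount;"
      "AccountKey=placeholder-key;"
      "TableEndpoint=https://mycosmosaccount.table.cosmos.azure.cn")

settings = parse_connection_string(cs)
print(settings["TableEndpoint"])
```

The TableEndpoint entry is what distinguishes a Cosmos DB Table API connection string from a classic Azure Table storage one, which is how a tool can route requests to the right service.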

How do I override the config settings for the request options in the .NET SDK for the Table API?

Some settings are handled on the CreateCloudTableClient method, and others via app.config in the appSettings section of the client application. For information about config settings, see Azure Cosmos DB capabilities.

Are there any changes for customers who are using the existing Azure Table storage SDKs?

None. There are no changes for existing or new customers who are using the existing Azure Table storage SDKs.

How do I view table data that's stored in Azure Cosmos DB for use with the Table API?

You can use the Azure portal to browse the data. You can also use Table API code or the tools mentioned in the next answer.

Which tools work with the Table API?

You can use the Azure Storage Explorer.

Tools with the flexibility to take a connection string in the format specified previously can support the new Table API. A list of table tools is provided on the Azure Storage Client Tools page.

Is the concurrency on operations controlled?

Yes, optimistic concurrency is provided via the ETag mechanism.

Is the OData query model supported for entities?

Yes, the Table API supports OData queries and LINQ queries.

Can I connect to Azure Table storage and the Azure Cosmos DB Table API side by side in the same application?

Yes, you can connect by creating two separate instances of CloudTableClient, each pointing to its own URI via the connection string.

如何将现有 Azure 表存储应用程序迁移到此服务?How do I migrate an existing Azure Table storage application to this offering?

支持使用 AzCopy 和 Azure Cosmos DB 数据迁移工具。AzCopy and the Azure Cosmos DB Data Migration Tool are both supported.

如何为此服务扩展存储大小,比如,最初我有 n GB 的数据,但一段时间后我的数据会增长到 1 TB?How is expansion of the storage size done for this service if, for example, I start with n GB of data and my data will grow to 1 TB over time?

根据设计,可以通过横向缩放让 Azure Cosmos DB 提供无限的存储。Azure Cosmos DB is designed to provide unlimited storage via the use of horizontal scaling. 可以通过服务来监视并有效地增大存储。The service can monitor and effectively increase your storage.

如何监视表 API 服务?How do I monitor the Table API offering?

可以使用表 API 的“指标”窗格来监视请求和存储使用情况。You can use the Table API Metrics pane to monitor requests and storage usage.

如何计算所需的吞吐量?How do I calculate the throughput I require?

可以使用容量估算器来计算操作所需的 TableThroughput。You can use the capacity estimator to calculate the TableThroughput that's required for the operations. 有关详细信息,请参阅 Estimate Request Units and Data Storage(估算请求单位和数据存储)。For more information, see Estimate Request Units and Data Storage. 通常,可以将实体显示为 JSON 并且为操作提供所需数量。In general, you can show your entity as JSON and provide the numbers for your operations.
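例如,可以将各类请求的基准 RU 值乘以预期速率后求和,粗略估算所需的预配吞吐量。下面是一个说明性 Python 草图(其中的 RU 数字均为假设值,请以自己的基准测试结果为准):For example, you can roughly estimate the throughput to provision by multiplying each request type's benchmarked RU value by its expected rate and summing. The following is an illustrative Python sketch (the RU numbers are hypothetical; use your own benchmark results):

```python
# Illustrative sketch only: sum per-operation RU costs, weighted by expected
# request rates, to estimate the RU/s to provision for the container.
# The RU values below are hypothetical; benchmark your own requests.

def required_throughput(op_profiles):
    """op_profiles: iterable of (ru_per_operation, operations_per_second)."""
    return sum(ru * rate for ru, rate in op_profiles)

profile = [
    (1.0, 500),   # 1-KB point reads: ~1 RU each, 500 per second
    (5.5, 100),   # writes: measured at ~5.5 RU each, 100 per second
    (12.0, 20),   # queries: measured at ~12 RU each, 20 per second
]
print(required_throughput(profile))  # 1290.0
```

实际预配时通常还会在此估算值上保留一定余量,以吸收流量峰值。In practice, you usually add some headroom on top of this estimate to absorb traffic spikes.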

是否可以在本地将表 API SDK 用于模拟器?Can I use the Table API SDK locally with the emulator?

目前没有。Not at this time.

现有的应用程序是否适用于表 API?Can my existing application work with the Table API?

是的,支持相同的 API。Yes, the same API is supported.

如果我不想使用表 API 功能,是否需要将现有 Azure 表存储应用程序迁移到该 SDK?Do I need to migrate my existing Azure Table storage applications to the SDK if I don't want to use the Table API features?

否,可以在没有任何干扰的情况下创建和使用现有 Azure 表存储资产。No, you can create and use existing Azure Table storage assets without interruption of any kind. 但是,如果不使用表 API,则无法从自动索引、其他一致性选项或多区域分发中受益。However, if you don't use the Table API, you can't benefit from automatic indexing, the additional consistency options, or multiple-region distribution.

如何在跨多个 Azure 区域的表 API 中添加数据复制?How do I add replication of the data in the Table API across more than one region of Azure?

可以使用 Azure Cosmos DB 门户的多区域复制设置来添加适合应用程序的区域。You can use the Azure Cosmos DB portal's multiple-region replication settings to add regions that are suitable for your application. 若要开发多区域分布式应用程序,还应在这些区域部署应用程序,并将 PreferredLocation 信息设置为本地区域,以获得较低的读取延迟。To develop a multiple-regionally distributed application, you should also deploy your application in those regions, with the PreferredLocation information set to the local region, to get low read latency.

如何在表 API 中更改帐户的主要写入区域?How do I change the primary write region for the account in the Table API?

可以使用 Azure Cosmos DB 的多区域复制门户窗格来添加区域,然后故障转移到所需的区域。You can use the Azure Cosmos DB multiple-region replication portal pane to add a region and then fail over to the required region. 有关说明,请参阅使用多区域 Azure Cosmos DB 帐户进行开发For instructions, see Developing with multi-region Azure Cosmos DB accounts.

如何配置首选的读取区域以降低分配数据时的延迟?How do I configure my preferred read regions for low latency when I distribute my data?

请使用 app.config 文件中的 PreferredLocation 键,方便从本地位置读取。To help read from the local location, use the PreferredLocation key in the app.config file. 对于现有应用程序,如果设置 LocationMode,表 API 会引发错误。For existing applications, the Table API throws an error if LocationMode is set. 请删除该代码,因为表 API 会从 app.config 文件中选取此信息。Remove that code, because the Table API picks up this information from the app.config file.

如何理解表 API 中的一致性级别?How should I think about consistency levels in the Table API?

Azure Cosmos DB 在一致性、可用性和延迟之间提供合理的平衡。Azure Cosmos DB provides well-reasoned trade-offs between consistency, availability, and latency. Azure Cosmos DB 为表 API 开发人员提供五个一致性级别,因此可以在表级别选择合适的一致性模型,并在查询数据时发出相应的请求。Azure Cosmos DB offers five consistency levels to Table API developers, so you can choose the right consistency model at the table level and make individual requests while querying the data. 客户端可以在连接时指定一致性级别。When a client connects, it can specify a consistency level. 可以通过 CreateCloudTableClient 的 consistencyLevel 参数更改级别。You can change the level via the consistencyLevel argument of CreateCloudTableClient.

如果将“有限过时”一致性设置为默认值,表 API 可通过“读取自己写入的数据”提供低延迟的读取。The Table API provides low-latency reads with "Read your own writes," with Bounded-staleness consistency as the default. 有关详细信息,请参阅一致性级别For more information, see Consistency levels.

Azure 表存储默认在区域中提供“强”一致性,在辅助位置提供“最终”一致性。By default, Azure Table storage provides Strong consistency within a region and Eventual consistency in the secondary locations.

Azure Cosmos DB 表 API 是否比 Azure 表存储提供更多的一致性级别?Does Azure Cosmos DB Table API offer more consistency levels than Azure Table storage?

是的。若要了解如何利用 Azure Cosmos DB 的分布式特性,请参阅一致性级别Yes, for information about how to benefit from the distributed nature of Azure Cosmos DB, see Consistency levels. 由于一致性级别有充分的保证,因此可以放心使用。Because guarantees are provided for the consistency levels, you can use them with confidence.

启用多区域分发后,需要花费多长时间复制数据?When multiple-region distribution is enabled, how long does it take to replicate the data?

Azure Cosmos DB 会在本地区域持续提交数据,然后在几毫秒内将数据立即推送到其他区域。Azure Cosmos DB commits the data durably in the local region and pushes the data to other regions immediately in a matter of milliseconds. 这种形式的复制只取决于数据中心的往返时间 (RTT)。This replication is dependent only on the round-trip time (RTT) of the datacenter. 若要详细了解 Azure Cosmos DB 的全球分布功能,请参阅 Azure Cosmos DB:Azure 上的多区域分布式数据库服务To learn more about the global-distribution capability of Azure Cosmos DB, see Azure Cosmos DB: A multiple-regionally distributed database service on Azure.

是否可以更改读取请求一致性级别?Can the read request consistency level be changed?

使用 Azure Cosmos DB 时,可以在容器级别(在表上)设置一致性级别。With Azure Cosmos DB, you can set the consistency level at the container level (on the table). 使用 .NET SDK 时,可以通过在 app.config 文件中为 TableConsistencyLevel 键提供值来更改级别。By using the .NET SDK, you can change the level by providing the value for the TableConsistencyLevel key in the app.config file. 可能的值包括:“强”、“有限过期”、“会话”、“一致前缀”和“最终”。The possible values are: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual. 有关详细信息,请参阅 Azure Cosmos DB 中可以优化的数据一致性级别For more information, see Tunable data consistency levels in Azure Cosmos DB. 关键是不能将请求的一致性级别设置为高于表的一致性级别。The key idea is that you can't set the request consistency level higher than the setting for the table. 例如,不能将表的一致性级别设置为“最终”,而将请求的一致性级别设置为“强”。For example, you can't set the consistency level for the table at Eventual and the request consistency level at Strong.
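“请求不得强于表级别”这一规则可以用如下草图表示(仅用于说明,并非官方 API):The rule that a request can't ask for stronger consistency than the table's setting can be sketched as follows (illustrative only, not an official API):

```python
# Consistency levels ordered from strongest to weakest, as listed above.
LEVELS = ["Strong", "Bounded Staleness", "Session", "Consistent Prefix", "Eventual"]

def is_valid_request_level(table_level, request_level):
    # A request may keep or relax the table's consistency, never strengthen it.
    return LEVELS.index(request_level) >= LEVELS.index(table_level)

print(is_valid_request_level("Session", "Eventual"))  # True: relaxing is allowed
print(is_valid_request_level("Eventual", "Strong"))   # False: can't strengthen
```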

在某个区域出现故障时,表 API 如何处理故障转移?How does the Table API handle failover if a region goes down?

表 API 利用 Azure Cosmos DB 的多区域分布式平台。The Table API leverages the multiple-regionally distributed platform of Azure Cosmos DB. 若要确保应用程序能够容许数据中心停机,请在 Azure Cosmos DB 门户中为帐户至少再启用一个区域。To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure Cosmos DB portal. 可以使用门户设置区域的优先级。You can set the priority of the regions by using the portal. 有关说明,请参阅使用多区域 Azure Cosmos DB 帐户进行开发For instructions, see Developing with multi-region Azure Cosmos DB accounts.

可以视需要为帐户添加任意数目的区域,并通过提供故障转移优先级来控制可将该帐户故障转移到哪个位置。You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. 若要使用数据库,还需要在相应区域部署应用程序,这样客户就不会遇到停机情况。To use the database, you also need to deploy your application in those regions, so that your customers won't experience downtime. 最新的 .NET 客户端 SDK 可自动寻址,也就是说,它能够检测到发生故障的区域,并自动故障转移到新区域;但其他 SDK 则不可以。The latest .NET client SDK is auto homing; that is, it can detect the region that's down and automatically fail over to the new region. The other SDKs aren't.
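故障转移优先级的作用可以用一个简单草图来说明(仅说明概念,并非 SDK 的逻辑;区域名称为假设示例):The effect of failover priority can be illustrated with a simple sketch (concept only, not the SDK's logic; the region names are hypothetical examples):

```python
def next_write_region(regions_by_priority, unavailable):
    """Pick the highest-priority region that's still available."""
    for region in regions_by_priority:
        if region not in unavailable:
            return region
    return None  # all regions down

# Regions listed in failover-priority order (hypothetical names).
regions = ["China East", "China North", "China East 2"]
print(next_write_region(regions, unavailable={"China East"}))  # China North
```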

是否能够为表 API 启用备份?Is the Table API enabled for backups?

可以,表 API 利用 Azure Cosmos DB 的平台进行备份。Yes, the Table API leverages the platform of Azure Cosmos DB for backups. 可自动进行备份。Backups are made automatically. 有关详细信息,请参阅使用 Azure Cosmos DB 进行联机备份和还原For more information, see Online backup and restore with Azure Cosmos DB.

表 API 是否默认对实体的所有属性编制索引?Does the Table API index all attributes of an entity by default?

是的,默认情况下会为实体的所有属性编制索引。Yes, all attributes of an entity are indexed by default. 有关详细信息,请参阅 Azure Cosmos DB:索引策略For more information, see Azure Cosmos DB: Indexing policies.

这是否意味着,无需创建多个索引来满足查询要求?Does this mean I don't have to create more than one index to satisfy the queries?

是,Azure Cosmos DB 表 API 针对所有属性提供自动索引,无需任何架构定义。Yes, Azure Cosmos DB Table API provides automatic indexing of all attributes without any schema definition. 这种自动化操作可以让开发人员将工作重点放在应用程序上,不必考虑索引的创建和管理。This automation frees developers to focus on the application rather than on index creation and management. 有关详细信息,请参阅 Azure Cosmos DB:索引策略For more information, see Azure Cosmos DB: Indexing policies.

是否可以更改索引策略?Can I change the indexing policy?

是的,可以通过提供索引定义来更改索引策略。Yes, you can change the indexing policy by providing the index definition. 需要在 app.config 文件中以 JSON 字符串的形式提供该设置,并对其正确进行编码和转义。You need to provide the setting as a JSON string in the app.config file and properly encode and escape it.

对于非 .NET SDK,只能在门户中设置索引策略:打开“数据资源管理器”,导航到要更改的特定表,转到“缩放和设置”>“索引策略”,进行所需的更改,然后单击“保存”。For the non-.NET SDKs, the indexing policy can only be set in the portal: open Data Explorer, navigate to the specific table you want to change, go to Scale & Settings > Indexing Policy, make the desired change, and then select Save.

对于 .NET SDK,可以在 app.config 文件中提交更改:From the .NET SDK it can be submitted in the app.config file:

  {
    "indexingMode": "consistent",
    "automatic": true,
    "includedPaths": [
      {
        "path": "/somepath",
        "indexes": [
          {
            "kind": "Range",
            "dataType": "Number",
            "precision": -1
          },
          {
            "kind": "Range",
            "dataType": "String",
            "precision": -1
          }
        ]
      },
      {
        "path": "/anotherpath"
      }
    ]
  }
平台形式的 Azure Cosmos DB 似乎有许多的功能,例如排序、聚合、分层等。Azure Cosmos DB as a platform seems to have lot of capabilities, such as sorting, aggregates, hierarchy, and other functionality. 是否要向表 API 添加这些功能?Will you be adding these capabilities to the Table API?

表 API 提供与 Azure 表存储相同的查询功能。The Table API provides the same query functionality as Azure Table storage. Azure Cosmos DB 还支持排序、聚合、地理空间查询、层次结构和各种内置函数。Azure Cosmos DB also supports sorting, aggregates, geospatial query, hierarchy, and a wide range of built-in functions. 有关详细信息,请参阅 SQL 查询For more information, see SQL queries.

何时应更改表 API 的 TableThroughput?When should I change TableThroughput for the Table API?

应在符合以下某个条件时,更改 TableThroughput:You should change TableThroughput when either of the following conditions applies:

  • 要对数据执行提取、转换和加载 (ETL) 操作,或者需在短时间内上传大量数据。You're performing an extract, transform, and load (ETL) of data, or you want to upload a lot of data in short amount of time.
  • 需要后端的容器或容器组提供更大的吞吐量。You need more throughput from the container or from a set of containers at the back end. 例如,发现已用吞吐量超过预配吞吐量,且吞吐量已达到限制。For example, you see that the used throughput is more than the provisioned throughput, and you're getting throttled. 有关详细信息,请参阅为 Azure Cosmos 容器设置吞吐量For more information, see Set throughput for Azure Cosmos containers.

是否可以纵向扩展纵向缩减表 API 表的吞吐量?Can I scale up or scale down the throughput of my Table API table?

是的,可以使用 Azure Cosmos DB 门户的缩放窗格来缩放吞吐量。Yes, you can use the Azure Cosmos DB portal's scale pane to scale the throughput. 有关详细信息,请参阅设置吞吐量For more information, see Set throughput.

新预配的表是否设置了默认的 TableThroughput?Is a default TableThroughput set for newly provisioned tables?

是,如果未通过 app.config 替代 TableThroughput,并且未使用 Azure Cosmos DB 中预创建的容器,服务则会创建吞吐量为 400 RU/s 的表。Yes, if you don't override the TableThroughput via app.config and don't use a pre-created container in Azure Cosmos DB, the service creates a table with a throughput of 400 RU/s.

对于 Azure 表存储服务的现有客户,定价是否有任何变化?Is there any change of pricing for existing customers of the Azure Table storage service?

无。None. 对于现有的 Azure 表存储客户,价格上没有任何更改。There's no change in price for existing Azure Table storage customers.

表 API 的价格是如何计算的?How is the price calculated for the Table API?

价格取决于分配的 TableThroughput。The price depends on the allocated TableThroughput.

如何在表 API 服务中处理对表设置的任何速率限制?How do I handle any rate limiting on the tables in Table API offering?

如果请求速率超出了为基础容器或容器组预配的吞吐量的容量,则会出现错误,SDK 会使用重试策略重试调用。If the request rate is more than the capacity of the provisioned throughput for the underlying container or a set of containers, you get an error, and the SDK retries the call by applying the retry policy.
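SDK 的重试策略大致遵循指数退避的思路。下面是一个与具体 SDK 无关的说明性草图(并非 SDK 的实际实现,参数均为假设值):The SDK's retry policy broadly follows an exponential-backoff idea. The following is an SDK-agnostic, illustrative sketch (not the SDK's actual implementation; the parameters are hypothetical):

```python
def backoff_delays(max_retries=5, base_ms=100, cap_ms=5000):
    """Delays (in ms) before each retry: double each attempt, capped."""
    return [min(cap_ms, base_ms * 2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays())  # [100, 200, 400, 800, 1600]
```

实际应用中通常还会加入随机抖动,避免所有客户端同时重试。In practice, random jitter is usually added so that clients don't all retry at the same moment.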

为何需要选择吞吐量而不是 PartitionKey 和 RowKey 来利用 Azure Cosmos DB 的表 API 服务?Why do I need to choose a throughput apart from PartitionKey and RowKey to take advantage of the Table API offering of Azure Cosmos DB?

如果未在 app.config 文件中或通过门户提供吞吐量,Azure Cosmos DB 将为容器设置默认的吞吐量。Azure Cosmos DB sets a default throughput for your container if you don't provide one in the app.config file or via the portal.

Azure Cosmos DB 针对操作设置上限,在性能和延迟方面提供保证。Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation. 如果引擎可以针对租户的操作实施调控,则可以提供这样的保证。This guarantee is possible when the engine can enforce governance on the tenant's operations. 设置 TableThroughput 可确保吞吐量和延迟值得到保证,因为平台会保留相应的容量,保证操作成功。Setting TableThroughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operational success.

考虑到应用程序在使用需求方面存在“季节性”,可以通过吞吐量规范来弹性更改吞吐量,在满足吞吐量需求的同时节省成本。By using the throughput specification, you can elastically change it to benefit from the seasonality of your application, meet the throughput needs, and save costs.
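例如,仅在高峰时段预配较高吞吐量即可显著减少一天消耗的 RU 小时数(说明性 Python 草图,数字为假设值):For example, provisioning high throughput only during peak hours can significantly reduce the RU-hours consumed per day (an illustrative Python sketch with hypothetical numbers):

```python
def ru_hours(base_ru, peak_ru, peak_hours):
    """Total RU-hours for one day when scaling down outside the peak window."""
    return sum(peak_ru if hour in peak_hours else base_ru for hour in range(24))

elastic = ru_hours(400, 4000, peak_hours=range(9, 17))  # scale up 09:00-17:00
flat = 4000 * 24                                        # left at peak all day
print(elastic, flat)  # 38400 96000
```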

Azure 表存储对我而言非常便宜,因为我只需支付数据的存储费用,并且我很少进行查询。Azure Table storage has been inexpensive for me, because I pay only to store the data, and I rarely query. 但是,即使我未执行任何事务或存储任何数据,Azure Cosmos DB 表 API 服务似乎也要收费。The Azure Cosmos DB Table API offering seems to be charging me even though I haven't performed a single transaction or stored anything. 是否能解释一下?Can you explain?

根据设计,Azure Cosmos DB 是一个多区域分布式的、基于 SLA 的系统,在可用性、延迟和吞吐量方面提供保证。Azure Cosmos DB is designed to be a multiple-regionally distributed, SLA-based system with guarantees for availability, latency, and throughput. 在 Azure Cosmos DB 中保留吞吐量时,获得的保障与其他系统不同。When you reserve throughput in Azure Cosmos DB, it's guaranteed, unlike the throughput of other systems. Azure Cosmos DB 会根据客户请求提供额外功能,例如辅助索引和多区域分布。Azure Cosmos DB provides additional capabilities that customers have requested, such as secondary indexes and multiple-region distribution.

在向 Azure 表存储引入数据时,我从未收到过“配额已满”通知(指示分区已满)。I never get a "quota full" notification (indicating that a partition is full) when I ingest data into Azure Table storage. 但使用表 API 时会收到此消息。With the Table API, I do get this message. 是此产品有限制,迫使我更改现有的应用程序吗?Is this offering limiting me and forcing me to change my existing application?

Azure Cosmos DB 是基于 SLA 的系统,可提供无限缩放,并在延迟、吞吐量、可用性和一致性方面提供保证。Azure Cosmos DB is an SLA-based system that provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. 请确保数据大小和索引可管理且可缩放,这样才能获得有保障的高级性能。To ensure guaranteed premium performance, make sure that your data size and index are manageable and scalable. 针对每个分区键的实体或项数实施 10 GB 限制是为了确保提供优异的查找和查询性能。The 10-GB limit on the number of entities or items per partition key is to ensure that we provide great lookup and query performance. 若要确保即使针对 Azure 存储,应用程序也能很好地进行缩放,建议不要创建热分区,即,将所有信息存储在一个分区内并查询它。To ensure that your application scales well, even for Azure Storage, we recommend that you not create a hot partition by storing all information in one partition and querying it.

表 API 是否仍然需要 PartitionKey 和 RowKey?So PartitionKey and RowKey are still required with the Table API?

是的。Yes. 由于表 API 的外围应用类似于 Azure 表存储 SDK,因此使用分区键可以高效地分发数据。Because the surface area of the Table API is similar to that of the Azure Table storage SDK, the partition key provides an efficient way to distribute the data. 行键在该分区中是唯一的。The row key is unique within that partition. 与标准 SDK 中一样,行键必须存在,且不能为 null。As in the standard SDK, the row key needs to be present and can't be null. RowKey 的最大长度为 255 个字节,PartitionKey 的最大长度为 1 KB。The maximum length of RowKey is 255 bytes and the maximum length of PartitionKey is 1 KB.
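根据上述限制,可以在客户端预先校验键,如下面的说明性草图所示(这是一个假设性的帮助程序,并非 SDK 的一部分):Based on the limits above, keys can be pre-validated on the client, as in the following illustrative sketch (a hypothetical helper, not part of the SDK):

```python
def validate_keys(partition_key, row_key):
    """Both keys must be present; RowKey <= 255 bytes, PartitionKey <= 1 KB."""
    if partition_key is None or row_key is None:
        return False
    return (len(partition_key.encode("utf-8")) <= 1024
            and len(row_key.encode("utf-8")) <= 255)

print(validate_keys("users-east", "user-42"))  # True
print(validate_keys("users-east", "x" * 300))  # False: RowKey too long
```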

表 API 的错误消息有哪些?What are the error messages for the Table API?

Azure 表存储和 Azure Cosmos DB 表 API 使用相同的 SDK,因此大多数错误是相同的。Azure Table storage and Azure Cosmos DB Table API use the same SDKs, so most of the errors are the same.

在表 API 中尝试一个接一个地创建许多表时,为何会受到限制?Why do I get throttled when I try to create lot of tables one after another in the Table API?

Azure Cosmos DB 是基于 SLA 的系统,在可用性、延迟和吞吐量方面提供保障。Azure Cosmos DB is an SLA-based system that provides latency, throughput, availability, and consistency guarantees. 由于它是预配的系统,因此会保留资源来保证满足这些要求。Because it's a provisioned system, it reserves resources to guarantee these requirements. 以极快的频率创建表会被系统检测到,因此会受到限制。The rapid rate of creation of tables is detected and throttled. 建议你查看表的创建频率,将其降到每分钟 5 个表以下。We recommend that you look at the rate of creation of tables and lower it to less than 5 per minute. 请记住,表 API 是预配的系统。Remember that the Table API is a provisioned system. 只要预配它,就必须付费。The moment you provision it, you'll begin to pay for it.

Gremlin APIGremlin API

对于 C#/.NET 开发,我是否应使用 Microsoft.Azure.Graphs 包或 Gremlin.NET?For C#/.NET development, should I use the Microsoft.Azure.Graphs package or Gremlin.NET?

Azure Cosmos DB Gremlin API 利用开源驱动程序作为服务的主要连接器。Azure Cosmos DB Gremlin API leverages the open-source drivers as the main connectors for the service. 因此,建议的选项是使用 Apache Tinkerpop 支持的驱动程序So the recommended option is to use drivers that are supported by Apache Tinkerpop.

在图形数据库上运行查询时如何针对每秒的 RU 数目收费?How are RU/s charged when running queries on a graph database?

所有图形对象(顶点和边)都在后端显示为 JSON 文档。All graph objects, vertices and edges, are shown as JSON documents in the backend. 由于一个 Gremlin 查询一次可以修改一个或多个图形对象,因此与之相关联的开销与该查询处理的对象和边直接相关。Since one Gremlin query can modify one or many graph objects at a time, the cost associated with it is directly related to the objects and edges that are processed by the query. 这是 Azure Cosmos DB 对所有其他 API 使用的同一流程。This is the same process that Azure Cosmos DB uses for all other APIs. 有关详细信息,请参阅 Azure Cosmos DB 中的请求单位For more information, see Request Units in Azure Cosmos DB.

RU 费用取决于遍历的工作数据集,而不是结果集。The RU charge is based on the working data set of the traversal, and not the result set. 例如,如果查询旨在获取单个顶点作为结果,但在此过程中需要遍历多个其他对象,那么成本将取决于计算一个结果顶点所需的全部图形对象。For example, if a query aims to obtain a single vertex as a result but needs to traverse more than one other object on the way, then the cost will be based on all the graph objects that it will take to compute the one result vertex.

图形数据库可在 Azure Cosmos DB Gremlin API 中拥有的最大规模是怎样的?What's the maximum scale that a graph database can have in Azure Cosmos DB Gremlin API?

Azure Cosmos DB 利用水平分区自动满足增加存储和吞吐量的需求。Azure Cosmos DB makes use of horizontal partitioning to automatically address increase in storage and throughput requirements. 工作负荷的最大吞吐量和存储容量由与给定容器关联的分区数量决定。The maximum throughput and storage capacity of a workload is determined by the number of partitions that are associated with a given container. 但是,Gremlin API 容器有一组用于按规模确保适当性能体验的特定准则。However, a Gremlin API container has a specific set of guidelines to ensure a proper performance experience at scale. 有关分区和最佳做法的详细信息,请参阅 Azure Cosmos DB 中的分区一文。For more information about partitioning, and best practices, see partitioning in Azure Cosmos DB article.

如何使用 Gremlin 驱动程序防范注入式攻击?How can I protect against injection attacks using Gremlin drivers?

大多数本机 Apache Tinkerpop Gremlin 驱动程序允许用户选择为执行查询提供参数字典。Most native Apache Tinkerpop Gremlin drivers allow the option to provide a dictionary of parameters for query execution. 这里提供了有关如何执行此操作的示例(分别以 Gremlin.NetGremlin Javascript 编写)。This is an example of how to do it in Gremlin.Net and in Gremlin-Javascript.
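其思路可以用与语言无关的方式表示如下(说明性草图;实际请使用所选驱动程序的绑定/参数机制):The idea can be shown in a language-agnostic way as follows (illustrative sketch; in practice, use your driver's bindings/parameter mechanism):

```python
def build_request(user_input):
    # Vulnerable: user input is spliced into the query text itself.
    unsafe = "g.V('" + user_input + "')"
    # Safe: the query text stays constant; the value travels as a binding.
    safe = ("g.V(vertex_id)", {"vertex_id": user_input})
    return unsafe, safe

unsafe, safe = build_request("mary')...injected...g.V('x")
print(safe[0])  # the query template never changes: g.V(vertex_id)
```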

为什么我收到“Gremlin 查询编译错误:找不到任何方法”错误?Why am I getting the "Gremlin Query Compilation Error: Unable to find any method" error?

Azure Cosmos DB Gremlin API 实现了 Gremlin 图面区域中定义的功能的子集。Azure Cosmos DB Gremlin API implements a subset of the functionality defined in the Gremlin surface area. 有关受支持的步骤和详细信息,请参阅 Gremlin 支持一文。For supported steps and more information, see Gremlin support article.

最佳解决方法是使用受支持的功能重新编写所需的 Gremlin 步骤,因为 Azure Cosmos DB 支持所有基本 Gremlin 步骤。The best workaround is to rewrite the required Gremlin steps with the supported functionality, since all essential Gremlin steps are supported by Azure Cosmos DB.

为什么我收到“WebSocketException:当状态代码应为‘101’时,服务器返回了状态代码‘200’”错误?Why am I getting the "WebSocketException: The server returned status code '200' when status code '101' was expected" error?

正在使用不正确的终结点时,可能会引发此错误。This error is likely thrown when the wrong endpoint is being used. 导致出现此错误的终结点采用以下模式:The endpoint that generates this error has the following pattern:

https://YOUR_DATABASE_ACCOUNT.documents.azure.cn:443/

这是图形数据库的文档终结点。This is the documents endpoint for your graph database. 要使用的正确终结点是 Gremlin 终结点,它采用以下格式:The correct endpoint to use is the Gremlin endpoint, which has the following format:

https://YOUR_DATABASE_ACCOUNT.gremlin.cosmosdb.azure.cn:443/
为什么会收到“RequestRateIsTooLarge”错误?Why am I getting the "RequestRateIsTooLarge" error?

此错误表示,每秒分配的请求单位不足,无法为查询提供服务。This error means that the allocated Request Units per second aren't enough to serve the query. 此错误通常在运行获取所有顶点的查询时出现:This error is usually seen when you run a query that obtains all vertices:

// Query example:
g.V()

此查询将尝试检索图形中的所有顶点。This query will attempt to retrieve all vertices from the graph. 因此,此查询的成本将至少等于以 RU 表示的顶点数。So, the cost of this query will be equal to at least the number of vertices in terms of RUs. 应调整每秒 RU 数目的设置以满足此查询的需要。The RU/s setting should be adjusted to address this query.

为何我的 Gremlin 驱动程序连接最终被删除了?Why do my Gremlin driver connections get dropped eventually?

Gremlin 连接是通过 WebSocket 连接进行的。A Gremlin connection is made through a WebSocket connection. 尽管 WebSocket 连接没有特定生存时间,但 Azure Cosmos DB Gremlin API 将在空闲连接处于非活动状态 30 分钟后对其进行终止。Although WebSocket connections don't have a specific time to live, Azure Cosmos DB Gremlin API will terminate idle connections after 30 minutes of inactivity.

为什么在本机 Gremlin 驱动程序中不能使用 Fluent API 调用?Why can't I use fluent API calls in the native Gremlin drivers?

Fluent API 调用尚不受 Azure Cosmos DB Gremlin API 支持。Fluent API calls aren't yet supported by the Azure Cosmos DB Gremlin API. Fluent API 调用需要一种称为字节码支持的内部格式设置功能,Azure Cosmos DB Gremlin API 目前不支持此功能。Fluent API calls require an internal formatting feature known as bytecode support that currently isn't supported by Azure Cosmos DB Gremlin API. 由于同一原因,最新的 Gremlin-JavaScript 驱动程序当前也不受支持。Due to the same reason, the latest Gremlin-JavaScript driver is also currently not supported.

如何评估 Gremlin 查询的效率?How can I evaluate the efficiency of my Gremlin queries?

executionProfile() 预览步骤可用于提供查询执行计划的分析。The executionProfile() preview step can be used to provide an analysis of the query execution plan. 此步骤需要添加到任何 Gremlin 查询的末尾,如以下示例所示:This step needs to be added to the end of any Gremlin query as illustrated by the following example:

查询示例Query example

g.V('mary').out('knows').executionProfile()

示例输出Example output

    {
      "gremlin": "g.V('mary').out('knows').executionProfile()",
      "totalTime": 8,
      "metrics": [
        {
          "name": "GetVertices",
          "time": 3,
          "annotations": {
            "percentTime": 37.5
          },
          "counts": {
            "resultCount": 1
          }
        },
        {
          "name": "GetEdges",
          "time": 5,
          "annotations": {
            "percentTime": 62.5
          },
          "counts": {
            "resultCount": 0
          },
          "storeOps": [
            {
              "count": 0,
              "size": 0,
              "time": 0.6
            }
          ]
        },
        {
          "name": "GetNeighborVertices",
          "time": 0,
          "annotations": {
            "percentTime": 0
          },
          "counts": {
            "resultCount": 0
          }
        },
        {
          "name": "ProjectOperator",
          "time": 0,
          "annotations": {
            "percentTime": 0
          },
          "counts": {
            "resultCount": 0
          }
        }
      ]
    }

上述配置文件的输出显示了获取顶点对象、边对象所花费的时间量,以及工作数据集的大小。The output of the above profile shows how much time is spent obtaining the vertex objects, the edge objects, and the size of the working data set. 这与 Azure Cosmos DB 查询的标准成本度量相关。This is related to the standard cost measurements for Azure Cosmos DB queries.
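例如,可以按如下方式从配置文件输出中找出耗时最多的步骤(使用上面的示例数字的说明性草图):For example, the most expensive step can be picked out of the profile output like this (an illustrative sketch using the sample numbers above):

```python
# A trimmed-down version of the executionProfile() output shown above.
profile = {
    "totalTime": 8,
    "metrics": [
        {"name": "GetVertices", "time": 3},
        {"name": "GetEdges", "time": 5},
        {"name": "GetNeighborVertices", "time": 0},
        {"name": "ProjectOperator", "time": 0},
    ],
}

slowest = max(profile["metrics"], key=lambda metric: metric["time"])
print(slowest["name"])  # GetEdges
```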

Cassandra APICassandra API

Azure Cosmos DB Cassandra API 支持哪个协议版本?What is the protocol version supported by Azure Cosmos DB Cassandra API? 是否打算支持其他协议?Is there a plan to support other protocols?

Azure Cosmos DB 的 Apache Cassandra API 目前支持 CQL 版本 4。The Apache Cassandra API for Azure Cosmos DB currently supports CQL version 4. 如果有与支持其他协议相关的反馈,请通过 Azure 支持部门告知我们。If you have feedback about supporting other protocols, let us know via Azure support.

为何要求选择表的吞吐量?Why is choosing a throughput for a table a requirement?

Azure Cosmos DB 根据表的创建位置(门户或 CQL)设置容器的默认吞吐量。Azure Cosmos DB sets default throughput for your container based on where you create the table from - portal or CQL. Azure Cosmos DB 针对操作设置上限,在性能和延迟方面提供保证。Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation. 如果引擎可以针对租户的操作实施调控,则可以提供这样的保证。This guarantee is possible when the engine can enforce governance on the tenant's operations. 设置吞吐量可确保在吞吐量和延迟方面获得保障,因为平台会保留此容量,并保证操作成功。Setting throughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operation success. 可以弹性更改吞吐量以便使季节性应用受益并节省成本。You can elastically change throughput to benefit from the seasonality of your application and save costs.

Azure Cosmos DB 中的请求单位数一文介绍了吞吐量概念。The throughput concept is explained in the Request Units in Azure Cosmos DB article. 表的吞吐量平均分布到各个基础物理分区中。The throughput for a table is distributed across the underlying physical partitions equally.
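“吞吐量平均分布到各个物理分区”意味着,例如(说明性草图):The statement that throughput is distributed equally across physical partitions means, for example (illustrative sketch):

```python
def per_partition_ru(total_ru, physical_partitions):
    """Each physical partition gets an equal share of the provisioned RU/s."""
    return total_ru / physical_partitions

# A hot partition can only consume its own share, not the whole budget.
print(per_partition_ru(10000, 5))  # 2000.0
```

这也是为什么工作负载倾斜到单个分区会提前触发限流。This is also why a workload skewed to a single partition gets throttled before the full budget is used.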

通过 CQL 创建的表的默认吞吐量是多少 RU/s?What is the default RU/s of table when created through CQL? 如何更改该默认值?What If I need to change it?

Azure Cosmos DB 使用每秒请求单位数 (RU/s) 作为提供吞吐量的衡量单位。Azure Cosmos DB uses request units per second (RU/s) as the currency for providing throughput. 通过 CQL 创建的表具有 400 RU/s。Tables created through CQL have 400 RU/s. 可以通过门户更改 RU。You can change the RU from the portal.


CREATE TABLE keyspaceName.tablename (user_id int PRIMARY KEY, lastname text) WITH cosmosdb_provisioned_throughput=1200


int provisionedThroughput = 400;
var simpleStatement = new SimpleStatement($"CREATE TABLE {keyspaceName}.{tableName} (user_id int PRIMARY KEY, lastname text)");
var outgoingPayload = new Dictionary<string, byte[]>();
outgoingPayload["cosmosdb_provisioned_throughput"] = Encoding.UTF8.GetBytes(provisionedThroughput.ToString());
// 附加自定义负载并执行语句(假设已有打开的 ISession 实例 session)。Attach the custom payload and execute the statement (assumes an open ISession named 'session').
simpleStatement.SetOutgoingPayload(outgoingPayload);
session.Execute(simpleStatement);

耗尽吞吐量时会发生什么情况?What happens when throughput is used up?

Azure Cosmos DB 针对操作设置上限,在性能和延迟方面提供保证。Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation. 如果引擎可以针对租户的操作实施调控,则可以提供这样的保证。This guarantee is possible when the engine can enforce governance on the tenant's operations. 这通过设置吞吐量来实现,它可确保在吞吐量和延迟方面获得保障,因为平台会保留相应容量并保证操作成功。This is possible based on setting the throughput, which ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operation success. 当超出此容量时,会收到超载错误消息,指出容量已耗尽:“0x1001 超载:无法处理此请求,因为请求速率太大”。When you go over this capacity, you get an overloaded error message indicating that your capacity was used up: "0x1001 Overloaded: the request can't be processed because Request Rate is large". 此时,必须查明是哪些操作及其数据量导致了此问题。At this juncture, it's essential to see which operations and what volume cause this issue. 可以通过门户中的指标了解已消耗容量超出预配容量的情况。You can get an idea about consumed capacity going over the provisioned capacity with metrics on the portal. 然后,需要确保容量差不多是在所有基础分区中平均消耗的。Then you need to ensure that capacity is consumed nearly equally across all underlying partitions. 如果发现大多数吞吐量由一个分区消耗,则说明工作负载存在倾斜。If you see that most of the throughput is consumed by one partition, you have a skew of workload.

相关指标显示了吞吐量在若干小时内、若干天内以及每七天内在各个分区中的使用情况或总体使用情况。Metrics are available that show you how throughput is used over hours, days, and per seven days, across partitions or in aggregate. 有关详细信息,请参阅使用 Azure Cosmos DB 中的指标进行监视和调试For more information, see Monitoring and debugging with metrics in Azure Cosmos DB.

Azure Cosmos DB 诊断日志记录一文中介绍了诊断日志。Diagnostic logs are explained in the Azure Cosmos DB diagnostic logging article.

主键是否映射到 Azure Cosmos DB 的分区键概念?Does the primary key map to the partition key concept of Azure Cosmos DB?

是,分区键用来将实体放置在正确位置。Yes, the partition key is used to place the entity in right location. 在 Azure Cosmos DB 中,它用来查找存储在物理分区中的正确逻辑分区。In Azure Cosmos DB, it's used to find right logical partition that's stored on a physical partition. 在 Azure Cosmos DB 中分区和缩放一文中很好地解释了分区概念。The partitioning concept is well explained in the Partition and scale in Azure Cosmos DB article. 此处,必须记住的一点是,逻辑分区目前不应当超出 10-GB 限制。The essential take away here is that a logical partition shouldn't go over the 10-GB limit today.

当收到指示分区已满的“配额已满”通知时,会发生什么情况?What happens when I get a "quota full" notification indicating that a partition is full?

Azure Cosmos DB 是基于 SLA 的系统,可提供无限缩放,并在延迟、吞吐量、可用性和一致性方面提供保障。Azure Cosmos DB is an SLA-based system that provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. 此无限制的存储是通过以分区为键概念的数据水平横向扩展实现的。This unlimited storage is based on horizontal scale-out of data using partitioning as the key concept. 在 Azure Cosmos DB 中分区和缩放一文中很好地解释了分区概念。The partitioning concept is well explained in the Partition and scale in Azure Cosmos DB article.

应当遵循每个逻辑分区的数据量不超过 10 GB 的限制。You should adhere to the 10-GB limit on the amount of data per logical partition. 为确保应用程序能够很好地进行缩放,建议不要创建热分区,即,将所有信息存储在一个分区内并查询它。To ensure that your application scales well, we recommend that you not create a hot partition by storing all information in one partition and querying it. 只有当存在数据倾斜时,也就是说,当一个分区键有大量数据(超过 10 GB)时,才会发生此错误。This error can only come if your data is skewed: that is, you have a lot of data for one partition key (more than 10 GB). 可以使用存储门户查明数据的分布。You can find the distribution of data using the storage portal. 修复此错误的方法是重新创建表,并选择一个粒度合适的主键(分区键),以实现更好的数据分布。The way to fix this error is to recreate the table and choose a granular primary key (the partition key), which allows better distribution of data.
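分区键的哈希分布可以用如下草图来体会(Cosmos DB 内部并不使用 MD5,此处仅为说明概念):The hash-based spreading of partition keys can be illustrated with a sketch like the following (Cosmos DB doesn't use MD5 internally; this is concept only):

```python
import hashlib

def partition_of(key, physical_partitions=4):
    """Map a partition-key value to a bucket via a stable hash (illustration)."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % physical_partitions

# A granular key (e.g. one per user) spreads load across partitions;
# a single coarse key would concentrate everything in one bucket.
buckets = {partition_of(f"user-{i}") for i in range(1000)}
print(sorted(buckets))  # many buckets are hit
```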

是否可以将 Cassandra API 作为键值存储用于数百万或数十亿单独的分区键?Is it possible to use Cassandra API as key value store with millions or billions of individual partition keys?

Azure Cosmos DB 可以通过对存储进行横向扩展来存储无限的数据,这与吞吐量无关。Azure Cosmos DB can store unlimited data by scaling out the storage, independent of the throughput. 是的,始终可以使用 Cassandra API,通过指定正确的主键/分区键来存储和检索键/值。Yes, you can always use the Cassandra API to store and retrieve key/values by specifying the right primary/partition key. 这些单独的键各自获取自己的逻辑分区,并无问题地驻留在物理分区之上。These individual keys get their own logical partition and sit atop a physical partition without issues.

是否可以使用 Azure Cosmos DB 的 Apache Cassandra API 创建多个表?Is it possible to create more than one table with Apache Cassandra API of Azure Cosmos DB?

是的,可以使用 Apache Cassandra API 创建多个表。Yes, it's possible to create more than one table with the Apache Cassandra API. 这些表中的每一个都被视为吞吐量和存储的一个单元。Each of those tables is treated as a unit for throughput and storage.

是否可以连续创建多个表?Is it possible to create more than one table in succession?

Azure Cosmos DB 是对数据和控制平面活动都进行资源调控的系统。Azure Cosmos DB is a resource-governed system for both data and control plane activities. 与集合和表一样,容器是针对给定的吞吐量容量预配的运行时实体。Containers, like collections and tables, are runtime entities that are provisioned for a given throughput capacity. 快速连续创建这些容器不是预期的活动,会受到限制。The creation of these containers in quick succession isn't an expected activity and is throttled. 如果测试会立即删除/创建表,请尽量将这些操作隔开。If you have tests that drop/create tables immediately, try to space them out.

最多可以创建几个表?What is maximum number of tables that can be created?

表数目没有物理限制。There's no physical limit on the number of tables. 如果需要创建的表数量很大(远超平常的数十个或数百个,且总稳定数据大小超过 10 TB),请联系 Azure 支持部门。Contact Azure support if you need to create a large number of tables (beyond the usual tens or hundreds, where the total steady-state size goes over 10 TB of data).

最多可以创建多少个键空间?What is the maximum # of keyspace that we can create?

键空间数目没有物理限制,因为它们是元数据容器。There's no physical limit on the number of keyspaces, as they're metadata containers. 如果由于某种原因具有大量键空间,请联系 Azure 支持部门。Contact Azure support if you have a large number of keyspaces for some reason.

Is it possible to bring in a lot of data after starting from a normal table?

Yes. The storage capacity is automatically managed and increases as you push in more data, so you can confidently import as much data as you need without managing or provisioning nodes.

Is it possible to supply yaml file settings to configure the behavior of the Apache Cassandra API of Azure Cosmos DB?

The Apache Cassandra API of Azure Cosmos DB is a platform service. It provides protocol-level compatibility for executing operations and hides away the complexity of management, monitoring, and configuration. As a developer or user, you don't need to worry about availability, tombstones, key cache, row cache, bloom filters, and a multitude of other settings. The Apache Cassandra API focuses on providing the read and write performance you require without the overhead of configuration and management.

Will the Apache Cassandra API for Azure Cosmos DB support node addition, cluster status, and node status commands?

The Apache Cassandra API is a platform service that makes capacity planning and responding to elasticity demands for throughput and storage a breeze. With Azure Cosmos DB, you provision the throughput you need, and you can then scale it up and down any number of times through the day without worrying about adding, deleting, or managing nodes. As a result, you don't need node or cluster management tools either.

What happens with respect to the various configuration settings for keyspace creation, such as simple or network topology strategies?

Azure Cosmos DB provides multiple-region distribution out of the box for availability and low-latency reasons. You don't need to set up replicas or anything else. All writes are always durably quorum-committed in the region where you write, while providing performance guarantees.
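In practice this means a standard CREATE KEYSPACE statement still works for driver and tooling compatibility, but the replication clause you pass isn't what controls placement; the service replicates according to the regions configured on your account. A sketch with an illustrative keyspace name:

```sql
-- The replication clause is accepted for compatibility, but Azure Cosmos DB
-- manages replication itself based on the account's configured regions.
CREATE KEYSPACE IF NOT EXISTS app
    WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};
```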

What happens with respect to the various table metadata settings, such as bloom filter, caching, read repair chance, gc_grace, compression, memtable_flush_period, and more?

Azure Cosmos DB provides performance and throughput for reads and writes without the need to touch any of these configuration settings, which also removes the risk of accidentally misconfiguring them.

Is time-to-live (TTL) supported for Cassandra tables?

Yes, TTL is supported.
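TTL can be set as a table-level default or per write, using the regular CQL syntax. A brief sketch with illustrative names:

```sql
-- Table-level default TTL (in seconds): rows expire one day after insert.
CREATE TABLE app.sessions (
    id text PRIMARY KEY,
    payload text
) WITH default_time_to_live = 86400;

-- A per-insert TTL overrides the table default for this row (one hour).
INSERT INTO app.sessions (id, payload) VALUES ('s1', 'data') USING TTL 3600;
```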

Previously it was possible to monitor node status, replica status, gc, and OS parameters with various tools. What needs to be monitored now?

Azure Cosmos DB is a platform service that helps you increase productivity without worrying about managing and monitoring infrastructure. You just need to watch the throughput metrics available in the portal to find out whether you're getting throttled, and then increase or decrease that throughput. You can monitor SLAs, use metrics, and use diagnostic logs.

Which client SDKs can work with the Apache Cassandra API of Azure Cosmos DB?

Apache Cassandra client drivers that use CQLv3 can be used with client programs. If you use other drivers, or if you're facing issues, contact Azure support.

Is a composite partition key supported?

Yes, you can use the regular syntax to create a composite partition key.
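The standard CQL form, with the partition-key columns grouped in inner parentheses, applies here too. An illustrative sketch:

```sql
-- (tenant_id, device_id) together form the composite partition key;
-- event_time is a clustering column that orders rows within each partition.
CREATE TABLE telemetry.readings (
    tenant_id text,
    device_id text,
    event_time timestamp,
    value double,
    PRIMARY KEY ((tenant_id, device_id), event_time)
);
```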

Can I use sstableloader for data loading?

No, sstableloader isn't supported.

Can an on-premises Apache Cassandra cluster be paired with Azure Cosmos DB's Cassandra API?

At present, Azure Cosmos DB offers an optimized experience for the cloud environment without operational overhead. If you require pairing, contact Azure support with a description of your scenario. We're working on an offering to help pair on-premises or other cloud Cassandra clusters with Azure Cosmos DB's Cassandra API.

Does the Cassandra API provide full backups?

Azure Cosmos DB currently provides two free full backups, taken at four-hour intervals, across all APIs, so you don't need to set up a backup schedule or anything else. If you want to modify the retention or frequency, contact Azure support or raise a support case. Information about the backup capability is provided in the Automatic online backup and restore with Azure Cosmos DB article.

How does the Cassandra API account handle failover if a region goes down?

The Azure Cosmos DB Cassandra API borrows from the multiple-region distributed platform of Azure Cosmos DB. To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure Cosmos DB portal, as described in Developing with multi-region Azure Cosmos DB accounts. You can also set the priority of each region by using the portal.

You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. To use the database, you also need to deploy your application in those regions. When you do so, your customers won't experience downtime.

Does the Apache Cassandra API index all attributes of an entity by default?

The Cassandra API is planning to support secondary indexing to help create selective indexes on certain attributes.

Can I use the new Cassandra API SDK locally with the emulator?

Yes, this is supported.

Azure Cosmos DB as a platform seems to have a lot of capabilities, such as change feed and other functionality. Will these capabilities be added to the Cassandra API?

The Apache Cassandra API provides the same CQL functionality as Apache Cassandra. We do plan to look into the feasibility of supporting various capabilities in the future.

Feature x of the regular Cassandra API isn't working today. Where can feedback be provided?

Provide feedback via UserVoice.