Partition Service Fabric reliable services

This article provides an introduction to the basic concepts of partitioning Azure Service Fabric reliable services. The source code used in the article is also available on GitHub.

Partitioning

Partitioning is not unique to Service Fabric. In fact, it is a core pattern of building scalable services. In a broader sense, we can think about partitioning as a concept of dividing state (data) and compute into smaller accessible units to improve scalability and performance. A well-known form of partitioning is data partitioning, also known as sharding.

Partition Service Fabric stateless services

For stateless services, you can think of a partition as a logical unit that contains one or more instances of a service. Figure 1 shows a stateless service with five instances distributed across a cluster using one partition.

Figure 1: Stateless service

There are really two types of stateless service solutions. The first is a service that persists its state externally, for example in a database in Azure SQL Database (like a website that stores session information and data). The second is a computation-only service (like a calculator or image thumbnail generator) that does not manage any persistent state.

In either case, partitioning a stateless service is a very rare scenario--scalability and availability are normally achieved by adding more instances. The only time you want to consider multiple partitions for stateless service instances is when you need to meet special routing requirements.

As an example, consider a case where users with IDs in a certain range should only be served by a particular service instance. Another example of when you could partition a stateless service is when you have a truly partitioned backend (e.g. a sharded database in SQL Database) and you want to control which service instance should write to the database shard--or perform other preparation work within the stateless service that requires the same partitioning information as is used in the backend. Those types of scenarios can also be solved in different ways and do not necessarily require service partitioning.

The remainder of this walkthrough focuses on stateful services.

Partition Service Fabric stateful services

Service Fabric makes it easy to develop scalable stateful services by offering a first-class way to partition state (data). Conceptually, you can think of a partition of a stateful service as a scale unit that is highly reliable through replicas that are distributed and balanced across the nodes in a cluster.

Partitioning in the context of Service Fabric stateful services refers to the process of determining that a particular service partition is responsible for a portion of the complete state of the service. (As mentioned before, a partition is a set of replicas.) A great thing about Service Fabric is that it places the partitions on different nodes. This allows them to grow to a node's resource limit. As the data needs grow, partitions grow, and Service Fabric rebalances partitions across nodes. This ensures the continued efficient use of hardware resources.

To give you an example, say you start with a 5-node cluster and a service that is configured to have 10 partitions and a target of three replicas. In this case, Service Fabric would balance and distribute the replicas across the cluster--and you would end up with two primary replicas per node. If you now need to scale out the cluster to 10 nodes, Service Fabric would rebalance the primary replicas across all 10 nodes. Likewise, if you scaled back to 5 nodes, Service Fabric would rebalance all the replicas across the 5 nodes.

Figure 2 shows the distribution of 10 partitions before and after scaling the cluster.

Figure 2: Stateful service

As a result, scale-out is achieved because requests from clients are distributed across machines, the overall performance of the application is improved, and contention on access to chunks of data is reduced.

Plan for partitioning

Before implementing a service, you should always consider the partitioning strategy that is required to scale out. There are different strategies, but all of them focus on what the application needs to achieve. For the context of this article, let's consider some of the more important aspects.

A good first step is to think about the structure of the state that needs to be partitioned.

Let's take a simple example. If you were to build a service for a county-wide poll, you could create a partition for each city in the county. Then, you could store the votes for every person in the city in the partition that corresponds to that city. Figure 3 illustrates a set of people and the city in which they reside.

Figure 3: Simple partitioning

As the population of cities varies widely, you may end up with some partitions that contain a lot of data (e.g. Seattle) and other partitions with very little state (e.g. Kirkland). So what is the impact of having partitions with uneven amounts of state?

If you think about the example again, you can easily see that the partition that holds the votes for Seattle will get more traffic than the Kirkland one. By default, Service Fabric makes sure that there is about the same number of primary and secondary replicas on each node. So you may end up with nodes that hold replicas that serve more traffic and others that serve less traffic. You would preferably want to avoid hot and cold spots like this in a cluster.

In order to avoid this, you should do two things, from a partitioning point of view:

  • Try to partition the state so that it is evenly distributed across all partitions.
  • Report load from each of the replicas for the service. (For information on how, check out this article on Metrics and Load.) Service Fabric provides the capability to report load consumed by services, such as amount of memory or number of records. Based on the metrics reported, Service Fabric detects that some partitions are serving higher loads than others and rebalances the cluster by moving replicas to more suitable nodes, so that overall no node is overloaded. A minimal sketch of load reporting is shown after this list.
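
The following sketch shows what load reporting from inside a stateful service replica could look like. It is not part of the sample used in this article, and the metric names ("MemoryInMb" and "RecordCount") are placeholders that must match the load metrics configured for the service.

    // Sketch only: report custom load metrics from a class deriving from StatefulServiceBase.
    // LoadMetric lives in System.Fabric; List<T> in System.Collections.Generic.
    private void ReportCurrentLoad(int memoryInMb, int recordCount)
    {
        // The Partition property exposes IStatefulServicePartition.ReportLoad.
        this.Partition.ReportLoad(new List<LoadMetric>
        {
            new LoadMetric("MemoryInMb", memoryInMb),   // placeholder metric name
            new LoadMetric("RecordCount", recordCount)  // placeholder metric name
        });
    }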

Sometimes, you cannot know how much data will be in a given partition. So a general recommendation is to do both--first, by adopting a partitioning strategy that spreads the data evenly across the partitions and second, by reporting load. The first method prevents situations described in the voting example, while the second helps smooth out temporary differences in access or load over time.

Another aspect of partition planning is to choose the correct number of partitions to begin with. From a Service Fabric perspective, there is nothing that prevents you from starting out with a higher number of partitions than anticipated for your scenario. In fact, assuming the maximum number of partitions is a valid approach.

In rare cases, you may end up needing more partitions than you initially chose. As you cannot change the partition count after the fact, you would need to apply some advanced partition approaches, such as creating a new service instance of the same service type. You would also need to implement some client-side logic that routes requests to the correct service instance, based on client-side knowledge that your client code must maintain. A rough sketch of such routing follows.
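
The following is only an illustration of that idea, not part of the sample. The service names (fabric:/App/DataV1 and fabric:/App/DataV2) and the key split are assumptions.

    // Hypothetical routing between two instances of the same service type.
    // Assumes DataV1 owns partition keys 0-25 and DataV2 owns keys 26-51; adjust to your own split.
    // ServicePartitionResolver and ServicePartitionKey are in Microsoft.ServiceFabric.Services.Client.
    private static async Task<ResolvedServicePartition> ResolveAcrossInstancesAsync(
        long partitionKey, CancellationToken cancellationToken)
    {
        Uri serviceUri = partitionKey <= 25
            ? new Uri("fabric:/App/DataV1")
            : new Uri("fabric:/App/DataV2");

        ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();
        return await resolver.ResolveAsync(serviceUri, new ServicePartitionKey(partitionKey), cancellationToken);
    }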

Another consideration for partition planning is the available computer resources. As the state needs to be accessed and stored, you are bound by:

  • Network bandwidth limits
  • System memory limits
  • Disk storage limits

So what happens if you run into resource constraints in a running cluster? The answer is that you can simply scale out the cluster to accommodate the new requirements.

The capacity planning guide offers guidance on how to determine how many nodes your cluster needs.

Get started with partitioning

This section describes how to get started with partitioning your service.

Service Fabric offers a choice of three partition schemes (manifest examples follow the list):

  • Ranged partitioning (otherwise known as UniformInt64Partition).
  • Named partitioning. Applications using this model usually have data that can be bucketed, within a bounded set. Some common examples of data fields used as named partition keys would be regions, postal codes, customer groups, or other business boundaries.
  • Singleton partitioning. Singleton partitions are typically used when the service does not require any additional routing. For example, stateless services use this partitioning scheme by default.
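
For reference, this is roughly how each scheme is declared inside a Service element of ApplicationManifest.xml. The element names follow the Service Fabric manifest schema; the counts, keys, and partition names below are placeholder values.

    <!-- Ranged (UniformInt64) partitioning -->
    <UniformInt64Partition PartitionCount="26" LowKey="0" HighKey="25" />

    <!-- Named partitioning (partition names are placeholders) -->
    <NamedPartition>
      <Partition Name="EastUS" />
      <Partition Name="WestUS" />
    </NamedPartition>

    <!-- Singleton partitioning -->
    <SingletonPartition />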

Named and Singleton partitioning schemes are special forms of ranged partitions. By default, the Visual Studio templates for Service Fabric use ranged partitioning, as it is the most common and useful one. The remainder of this article focuses on the ranged partitioning scheme.

Ranged partitioning scheme

This is used to specify an integer range (identified by a low key and high key) and a number of partitions (n). It creates n partitions, each responsible for a non-overlapping subrange of the overall partition key range. For example, a ranged partitioning scheme with a low key of 0, a high key of 99, and a count of 4 would create four partitions, as shown below.

Ranged partitioning: four partitions covering keys 0-24, 25-49, 50-74, and 75-99

A common approach is to create a hash based on a unique key within the data set. Some common examples of keys would be a vehicle identification number (VIN), an employee ID, or a unique string. By using this unique key, you would then generate a hash code, modulo the key range, to use as your key. You can specify the upper and lower bounds of the allowed key range.

Select a hash algorithm

An important part of hashing is selecting your hash algorithm. A consideration is whether the goal is to group similar keys near each other (locality sensitive hashing)--or if activity should be distributed broadly across all partitions (distribution hashing), which is more common.

The characteristics of a good distribution hashing algorithm are that it is easy to compute, it has few collisions, and it distributes the keys evenly. A good example of an efficient hash algorithm is the FNV-1 hash algorithm.
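
The following is a hedged sketch (not taken from the sample) of a 64-bit FNV-1 hash and one way to map it into a ranged partition key. The constants are the standard 64-bit FNV parameters; the default key range of 0-25 is only an example and must match the LowKey/HighKey configured for the service.

    // Sketch only. FNV-1: start from the offset basis, then for each input byte multiply by the
    // prime and XOR with the byte. (FNV-1a reverses the multiply/XOR order.)
    using System.Text;

    public static class PartitionKeyGenerator
    {
        private const ulong FnvOffsetBasis = 14695981039346656037;
        private const ulong FnvPrime = 1099511628211;

        public static ulong Fnv1Hash64(string input)
        {
            ulong hash = FnvOffsetBasis;
            foreach (byte b in Encoding.UTF8.GetBytes(input))
            {
                hash = unchecked(hash * FnvPrime);
                hash ^= b;
            }
            return hash;
        }

        // Map the hash into the partition key range [lowKey, highKey], for example a VIN or an
        // employee ID hashed into a 0-25 range. The defaults here are illustrative only.
        public static long ToPartitionKey(string uniqueKey, long lowKey = 0, long highKey = 25)
        {
            ulong rangeSize = (ulong)(highKey - lowKey + 1);
            return lowKey + (long)(Fnv1Hash64(uniqueKey) % rangeSize);
        }
    }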

Build a stateful service with multiple partitions

Let's create your first reliable stateful service with multiple partitions. In this example, you will build a very simple application where you want to store all last names that start with the same letter in the same partition.

Before you write any code, you need to think about the partitions and partition keys. You need 26 partitions (one for each letter in the alphabet), but what about the low and high keys? As we literally want to have one partition per letter, we can use 0 as the low key and 25 as the high key, as each letter is its own key.
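
For reference, this is the mapping the web service you build later in this article uses to turn a last name into a partition key:

    // 'A' -> 0, 'B' -> 1, ..., 'Z' -> 25, which matches LowKey = 0 and HighKey = 25.
    // This mirrors the code added to the web service in a later step (First() needs System.Linq).
    ServicePartitionKey partitionKey = new ServicePartitionKey(Char.ToUpper(lastname.First()) - 'A');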

Note

This is a simplified scenario, as in reality the distribution would be uneven. Last names starting with the letters "S" or "M" are more common than the ones starting with "X" or "Y".

  1. Open Visual Studio > File > New > Project.

  2. In the New Project dialog box, choose the Service Fabric application.

  3. Call the project "AlphabetPartitions".

  4. In the Create a Service dialog box, choose Stateful service and call it "Alphabet.Processing".

  5. Set the number of partitions. Open the ApplicationManifest.xml file located in the ApplicationPackageRoot folder of the AlphabetPartitions project and update the parameter Processing_PartitionCount to 26 as shown below.

    <Parameter Name="Processing_PartitionCount" DefaultValue="26" />
    

    You also need to update the LowKey and HighKey attributes of the UniformInt64Partition element under the StatefulService element in the ApplicationManifest.xml, as shown below.

    <Service Name="Processing">
      <StatefulService ServiceTypeName="ProcessingType" TargetReplicaSetSize="[Processing_TargetReplicaSetSize]" MinReplicaSetSize="[Processing_MinReplicaSetSize]">
        <UniformInt64Partition PartitionCount="[Processing_PartitionCount]" LowKey="0" HighKey="25" />
      </StatefulService>
    </Service>
    
  6. For the service to be accessible, open up an endpoint on a port by adding the endpoint element to the ServiceManifest.xml (located in the PackageRoot folder) of the Alphabet.Processing service as shown below:

    <Endpoint Name="ProcessingServiceEndpoint" Port="8089" Protocol="http" Type="Internal" />
    

    Now the service is configured to listen to an internal endpoint with 26 partitions.

  7. Next, you need to override the CreateServiceReplicaListeners() method of the Processing class.

    Note

    For this sample, we assume that you are using a simple HttpCommunicationListener. For more information on reliable service communication, see The Reliable Service communication model. A rough skeleton of such a listener is shown below.
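
    The HttpCommunicationListener class itself is defined in the GitHub sample rather than in this article. The skeleton below is only illustrative: the ICommunicationListener contract and the constructor shape (listen prefix, published URI, request callback) match how the class is used later in this walkthrough, but the implementation details are assumptions.

    // Rough, illustrative skeleton -- see the GitHub sample for the real implementation.
    // ICommunicationListener comes from Microsoft.ServiceFabric.Services.Communication.Runtime;
    // HttpListener and HttpListenerContext from System.Net.
    internal class HttpCommunicationListener : ICommunicationListener
    {
        private readonly string uriPrefix;       // e.g. "http://+:8089/{partitionid}/{replicaid}-{guid}/"
        private readonly string uriPublished;    // same URI with '+' replaced by the node IP or FQDN
        private readonly Func<HttpListenerContext, CancellationToken, Task> processRequest;
        private readonly HttpListener httpListener = new HttpListener();
        private readonly CancellationTokenSource cancellation = new CancellationTokenSource();

        public HttpCommunicationListener(
            string uriPrefix,
            string uriPublished,
            Func<HttpListenerContext, CancellationToken, Task> processRequest)
        {
            this.uriPrefix = uriPrefix;
            this.uriPublished = uriPublished;
            this.processRequest = processRequest;
        }

        public Task<string> OpenAsync(CancellationToken cancellationToken)
        {
            this.httpListener.Prefixes.Add(this.uriPrefix);
            this.httpListener.Start();

            // Pump incoming requests in the background until the listener is closed.
            Task.Run(async () =>
            {
                while (!this.cancellation.IsCancellationRequested)
                {
                    HttpListenerContext context = await this.httpListener.GetContextAsync();
                    await this.processRequest(context, this.cancellation.Token);
                }
            });

            // The string returned here is the address published to the Naming Service.
            return Task.FromResult(this.uriPublished);
        }

        public Task CloseAsync(CancellationToken cancellationToken)
        {
            this.cancellation.Cancel();
            this.httpListener.Close();
            return Task.CompletedTask;
        }

        public void Abort()
        {
            this.cancellation.Cancel();
            this.httpListener.Abort();
        }
    }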

  8. A recommended pattern for the URL that a replica listens on is the following format: {scheme}://{nodeIp}:{port}/{partitionid}/{replicaid}/{guid}. So you want to configure your communication listener to listen on the correct endpoints and with this pattern.

    Multiple replicas of this service may be hosted on the same computer, so this address needs to be unique to the replica. This is why the partition ID and replica ID are in the URL. HttpListener can listen on multiple addresses on the same port as long as the URL prefix is unique.

    The extra GUID is there for an advanced case where secondary replicas also listen for read-only requests. When that's the case, you want to make sure that a new unique address is used when transitioning from primary to secondary to force clients to re-resolve the address. '+' is used as the address here so that the replica listens on all available hosts (IP, FQDN, localhost, etc.). The code below shows an example.

    protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
    {
         return new[] { new ServiceReplicaListener(context => this.CreateInternalListener(context))};
    }
    private ICommunicationListener CreateInternalListener(ServiceContext context)
    {
    
         EndpointResourceDescription internalEndpoint = context.CodePackageActivationContext.GetEndpoint("ProcessingServiceEndpoint");
         string uriPrefix = String.Format(
                "{0}://+:{1}/{2}/{3}-{4}/",
                internalEndpoint.Protocol,
                internalEndpoint.Port,
                context.PartitionId,
                context.ReplicaOrInstanceId,
                Guid.NewGuid());
    
         string nodeIP = FabricRuntime.GetNodeContext().IPAddressOrFQDN;
    
         string uriPublished = uriPrefix.Replace("+", nodeIP);
         return new HttpCommunicationListener(uriPrefix, uriPublished, this.ProcessInternalRequest);
    }
    

    It's also worth noting that the published URL is slightly different from the listening URL prefix. The listening URL is given to HttpListener. The published URL is the URL that is published to the Service Fabric Naming Service, which is used for service discovery. Clients will ask for this address through that discovery service. The address that clients get needs to have the actual IP or FQDN of the node in order to connect. So you need to replace '+' with the node's IP or FQDN as shown above.

  9. The last step is to add the processing logic to the service as shown below.

    private async Task ProcessInternalRequest(HttpListenerContext context, CancellationToken cancelRequest)
    {
        string output = null;
        string user = context.Request.QueryString["lastname"].ToString();
    
        try
        {
            output = await this.AddUserAsync(user);
        }
        catch (Exception ex)
        {
            output = ex.Message;
        }
    
        using (HttpListenerResponse response = context.Response)
        {
            if (output != null)
            {
                byte[] outBytes = Encoding.UTF8.GetBytes(output);
                response.OutputStream.Write(outBytes, 0, outBytes.Length);
            }
        }
    }
    private async Task<string> AddUserAsync(string user)
    {
        IReliableDictionary<String, String> dictionary = await this.StateManager.GetOrAddAsync<IReliableDictionary<String, String>>("dictionary");
    
        using (ITransaction tx = this.StateManager.CreateTransaction())
        {
            bool addResult = await dictionary.TryAddAsync(tx, user.ToUpperInvariant(), user);
    
            await tx.CommitAsync();
    
            return String.Format(
                "User {0} {1}",
                user,
                addResult ? "successfully added" : "already exists");
        }
    }
    

    ProcessInternalRequest reads the value of the query string parameter used to call the partition and calls AddUserAsync to add the last name to the reliable dictionary dictionary.

  10. Let's add a stateless service to the project to see how you can call a particular partition.

    This service serves as a simple web interface that accepts the last name as a query string parameter, determines the partition key, and sends it to the Alphabet.Processing service for processing.

  11. In the Create a Service dialog box, choose Stateless service and call it "Alphabet.Web" as shown below.

    Screenshot of stateless service creation

  12. Update the endpoint information in the ServiceManifest.xml of the Alphabet.WebApi service to open up a port as shown below.

    <Endpoint Name="WebApiServiceEndpoint" Protocol="http" Port="8081"/>
    
  13. You need to return a collection of ServiceInstanceListeners in the Web class. Again, you can choose to implement a simple HttpCommunicationListener.

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        return new[] {new ServiceInstanceListener(context => this.CreateInputListener(context))};
    }
    private ICommunicationListener CreateInputListener(ServiceContext context)
    {
        // Service instance's URL is the node's IP & desired port
        EndpointResourceDescription inputEndpoint = context.CodePackageActivationContext.GetEndpoint("WebApiServiceEndpoint");
        string uriPrefix = String.Format("{0}://+:{1}/alphabetpartitions/", inputEndpoint.Protocol, inputEndpoint.Port);
        var uriPublished = uriPrefix.Replace("+", FabricRuntime.GetNodeContext().IPAddressOrFQDN);
        return new HttpCommunicationListener(uriPrefix, uriPublished, this.ProcessInputRequest);
    }
    
  14. Now you need to implement the processing logic. The HttpCommunicationListener calls ProcessInputRequest when a request comes in. So let's go ahead and add the code below.

    private async Task ProcessInputRequest(HttpListenerContext context, CancellationToken cancelRequest)
    {
        String output = null;
        try
        {
            string lastname = context.Request.QueryString["lastname"];
            char firstLetterOfLastName = lastname.First();
            ServicePartitionKey partitionKey = new ServicePartitionKey(Char.ToUpper(firstLetterOfLastName) - 'A');
    
            ResolvedServicePartition partition = await this.servicePartitionResolver.ResolveAsync(alphabetServiceUri, partitionKey, cancelRequest);
            ResolvedServiceEndpoint ep = partition.GetEndpoint();
    
            JObject addresses = JObject.Parse(ep.Address);
            string primaryReplicaAddress = (string)addresses["Endpoints"].First();
    
            UriBuilder primaryReplicaUriBuilder = new UriBuilder(primaryReplicaAddress);
            primaryReplicaUriBuilder.Query = "lastname=" + lastname;
    
            string result = await this.httpClient.GetStringAsync(primaryReplicaUriBuilder.Uri);
    
            output = String.Format(
                    "Result: {0}. <p>Partition key: '{1}' generated from the first letter '{2}' of input value '{3}'. <br />Processing service partition ID: {4}. <br />Processing service replica address: {5}",
                    result,
                    partitionKey,
                    firstLetterOfLastName,
                    lastname,
                    partition.Info.Id,
                    primaryReplicaAddress);
        }
        catch (Exception ex) { output = ex.Message; }
    
        using (var response = context.Response)
        {
            if (output != null)
            {
                byte[] outBytes = Encoding.UTF8.GetBytes(output);
                response.OutputStream.Write(outBytes, 0, outBytes.Length);
            }
        }
    }
    

    Let's walk through it step by step. The code reads the first letter of the query string parameter lastname into a char. Then, it determines the partition key for this letter by subtracting the character value of 'A' from the uppercased character value of the last name's first letter.

    string lastname = context.Request.QueryString["lastname"];
    char firstLetterOfLastName = lastname.First();
    ServicePartitionKey partitionKey = new ServicePartitionKey(Char.ToUpper(firstLetterOfLastName) - 'A');
    

    Remember, for this example, we are using 26 partitions with one partition key per partition. Next, we obtain the service partition partition for this key by using the ResolveAsync method on the servicePartitionResolver object. servicePartitionResolver is defined as:

    private readonly ServicePartitionResolver servicePartitionResolver = ServicePartitionResolver.GetDefault();
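    // The httpClient and alphabetServiceUri fields used in ProcessInputRequest are not shown in
    // the article. Plausible definitions are below (assumed; the GitHub sample may differ slightly).
    // HttpClient is System.Net.Http.HttpClient.
    private readonly HttpClient httpClient = new HttpClient();
    private readonly Uri alphabetServiceUri = new Uri("fabric:/AlphabetPartitions/Processing");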
    

    The ResolveAsync method takes the service URI, the partition key, and a cancellation token as parameters. The service URI for the processing service is fabric:/AlphabetPartitions/Processing. Next, we get the endpoint of the partition.

    ResolvedServiceEndpoint ep = partition.GetEndpoint();
    

    Finally, we build the endpoint URL plus the query string and call the processing service.

    JObject addresses = JObject.Parse(ep.Address);
    string primaryReplicaAddress = (string)addresses["Endpoints"].First();
    
    UriBuilder primaryReplicaUriBuilder = new UriBuilder(primaryReplicaAddress);
    primaryReplicaUriBuilder.Query = "lastname=" + lastname;
    
    string result = await this.httpClient.GetStringAsync(primaryReplicaUriBuilder.Uri);
    

    Once the processing is done, we write the output back.

  15. The last step is to test the service. Visual Studio uses application parameters for local and cloud deployment. To test the service with 26 partitions locally, you need to update the Local.xml file in the ApplicationParameters folder of the AlphabetPartitions project as shown below:

    <Parameters>
      <Parameter Name="Processing_PartitionCount" Value="26" />
      <Parameter Name="WebApi_InstanceCount" Value="1" />
    </Parameters>
    
  16. Once you finish deployment, you can check the service and all of its partitions in the Service Fabric Explorer.

    Screenshot of Service Fabric Explorer

  17. In a browser, you can test the partitioning logic by entering http://localhost:8081/?lastname=somename. You will see that each last name that starts with the same letter is being stored in the same partition.

    Screenshot of browser

The entire source code of the sample is available on GitHub.

Next steps

For information on Service Fabric concepts, see the following: