Use intelligent routing and canary releases with Istio in Azure Kubernetes Service (AKS)
Istio is an open-source service mesh that provides a key set of functionality across the microservices in a Kubernetes cluster. These features include traffic management, service identity and security, policy enforcement, and observability. For more information about Istio, see the official What is Istio? documentation.
This article shows you how to use the traffic management functionality of Istio. A sample AKS voting app is used to explore intelligent routing and canary releases.
In this article, you learn how to:
- Deploy the application
- Update the application
- Roll out a canary release of the application
- Finalize the rollout
Before you begin
Note
This scenario has been tested against Istio version 1.3.2.
The steps detailed in this article assume you've created an AKS cluster (Kubernetes 1.13 and above, with Kubernetes RBAC enabled) and have established a kubectl connection with the cluster. You'll also need Istio installed in your cluster. If you need help with any of these items, see the AKS quickstart and the Install Istio in AKS guidance.
About this application scenario
The sample AKS voting app provides two voting options (Cats or Dogs) to users. There is a storage component that persists the number of votes for each option. Additionally, there is an analytics component that provides details around the votes cast for each option.
In this application scenario, you start by deploying version 1.0 of the voting app and version 1.0 of the analytics component. The analytics component provides simple counts for the number of votes. The voting app and analytics component interact with version 1.0 of the storage component, which is backed by Redis.
You upgrade the analytics component to version 1.1, which provides counts, and now totals and percentages.
A subset of users test version 2.0 of the app via a canary release. This new version uses a storage component that is backed by a MySQL database.
Once you're confident that version 2.0 works as expected on your subset of users, you roll out version 2.0 to all your users.
Deploy the application
Let's start by deploying the application into your Azure Kubernetes Service (AKS) cluster. The following diagram shows what runs by the end of this section - version 1.0 of all components, with inbound requests serviced via the Istio ingress gateway.
The artifacts you need to follow along with this article are available in the Azure-Samples/aks-voting-app GitHub repo. You can either download the artifacts or clone the repo as follows:
git clone https://github.com/Azure-Samples/aks-voting-app.git
Change to the following folder in the downloaded / cloned repo and run all subsequent steps from this folder:
cd aks-voting-app/scenarios/intelligent-routing-with-istio
First, create a namespace in your AKS cluster for the sample AKS voting app named voting, as follows:
kubectl create namespace voting
Label the namespace with istio-injection=enabled. This label instructs Istio to automatically inject the istio-proxies as sidecars into all of your pods in this namespace.
kubectl label namespace voting istio-injection=enabled
Now let's create the components for the AKS Voting app. Create these components in the voting namespace created in a previous step.
kubectl apply -f kubernetes/step-1-create-voting-app.yaml --namespace voting
The following example output shows the resources being created:
deployment.apps/voting-storage-1-0 created
service/voting-storage created
deployment.apps/voting-analytics-1-0 created
service/voting-analytics created
deployment.apps/voting-app-1-0 created
service/voting-app created
Note
Istio has some specific requirements around pods and services. For more information, see the Istio Requirements for Pods and Services documentation.
To see the pods that have been created, use the kubectl get pods command as follows:
kubectl get pods -n voting --show-labels
The following example output shows there are three instances of the voting-app pod and a single instance of both the voting-analytics and voting-storage pods. Each of the pods has two containers. One of these containers is the component, and the other is the istio-proxy:
NAME READY STATUS RESTARTS AGE LABELS
voting-analytics-1-0-57c7fccb44-ng7dl 2/2 Running 0 39s app=voting-analytics,pod-template-hash=57c7fccb44,version=1.0
voting-app-1-0-956756fd-d5w7z 2/2 Running 0 39s app=voting-app,pod-template-hash=956756fd,version=1.0
voting-app-1-0-956756fd-f6h69 2/2 Running 0 39s app=voting-app,pod-template-hash=956756fd,version=1.0
voting-app-1-0-956756fd-wsxvt 2/2 Running 0 39s app=voting-app,pod-template-hash=956756fd,version=1.0
voting-storage-1-0-5d8fcc89c4-2jhms 2/2 Running 0 39s app=voting-storage,pod-template-hash=5d8fcc89c4,version=1.0
To see information about the pod, we'll use the kubectl describe pod command with label selectors to select the voting-analytics pod. We'll filter the output to show the details of the two containers present in the pod:
kubectl describe pod -l "app=voting-analytics, version=1.0" -n voting | egrep "istio-proxy:|voting-analytics:" -A2
The istio-proxy container has automatically been injected by Istio to manage the network traffic to and from your components, as shown in the following example output:
voting-analytics:
Container ID: docker://35efa1f31d95ca737ff2e2229ab8fe7d9f2f8a39ac11366008f31287be4cea4d
Image: mcr.microsoft.com/aks/samples/voting/analytics:1.0
--
istio-proxy:
Container ID: docker://1fa4eb43e8d4f375058c23cc062084f91c0863015e58eb377276b20c809d43c6
Image: docker.io/istio/proxyv2:1.3.2
If you're using PowerShell, the equivalent command and output are:
kubectl describe pod -l "app=voting-analytics, version=1.0" -n voting | Select-String -Pattern "istio-proxy:|voting-analytics:" -Context 0,2
> voting-analytics:
   Container ID: docker://35efa1f31d95ca737ff2e2229ab8fe7d9f2f8a39ac11366008f31287be4cea4d
   Image: mcr.microsoft.com/aks/samples/voting/analytics:1.0
> istio-proxy:
   Container ID: docker://1fa4eb43e8d4f375058c23cc062084f91c0863015e58eb377276b20c809d43c6
   Image: docker.io/istio/proxyv2:1.3.2
You can't connect to the voting app until you create the Istio Gateway and Virtual Service. These Istio resources route traffic from the default Istio ingress gateway to our application.
Note
A Gateway is a component at the edge of the service mesh that receives inbound or outbound HTTP and TCP traffic.
A Virtual Service defines a set of routing rules for one or more destination services.
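The actual definitions live in istio/step-1-create-voting-app-gateway.yaml in the sample repo. As a rough sketch of the shape these two resources take (the resource names match the output below, but the port, protocol, and hosts values here are illustrative assumptions, not copied from that file):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: voting-app-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept inbound traffic for any host name
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: voting-app
spec:
  hosts:
  - "*"
  gateways:
  - voting-app-gateway      # only applies to traffic entering via the gateway
  http:
  - route:
    - destination:
        host: voting-app    # the Kubernetes service created in the previous step
```

Binding the Virtual Service to the Gateway (rather than leaving the default mesh-internal scope) is what exposes the voting-app service to traffic arriving from outside the cluster.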
Use the kubectl apply command to deploy the Gateway and Virtual Service yaml. Remember to specify the namespace that these resources are deployed into.
kubectl apply -f istio/step-1-create-voting-app-gateway.yaml --namespace voting
The following example output shows the new Gateway and Virtual Service being created:
virtualservice.networking.istio.io/voting-app created
gateway.networking.istio.io/voting-app-gateway created
Obtain the IP address of the Istio Ingress Gateway using the following command:
kubectl get service istio-ingressgateway --namespace istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
The following example output shows the IP address of the Ingress Gateway:
20.188.211.19
Open up a browser and paste in the IP address. The sample AKS voting app is displayed.
The information at the bottom of the screen shows that the app uses version 1.0 of voting-app and version 1.0 of voting-storage (Redis).
Update the application
Let's deploy a new version of the analytics component. This new version 1.1 displays totals and percentages in addition to the count for each category.
The following diagram shows what will be running at the end of this section - only version 1.1 of our voting-analytics component has traffic routed from the voting-app component. Even though version 1.0 of our voting-analytics component continues to run and is referenced by the voting-analytics service, the Istio proxies disallow traffic to and from it.
Let's deploy version 1.1 of the voting-analytics component. Create this component in the voting namespace:
kubectl apply -f kubernetes/step-2-update-voting-analytics-to-1.1.yaml --namespace voting
The following example output shows the resources being created:
deployment.apps/voting-analytics-1-1 created
Open the sample AKS voting app in a browser again, using the IP address of the Istio Ingress Gateway obtained in the previous step.
Your browser alternates between the two views shown below. Since you are using a Kubernetes Service for the voting-analytics component with only a single label selector (app: voting-analytics), Kubernetes uses the default behavior of round-robin between the pods that match that selector. In this case, that is both version 1.0 and version 1.1 of your voting-analytics pods.
You can visualize the switching between the two versions of the voting-analytics component as follows. Remember to use the IP address of your own Istio Ingress Gateway.
INGRESS_IP=20.188.211.19
for i in {1..5}; do curl -si $INGRESS_IP | grep results; done
If you're using PowerShell, the equivalent is:
$INGRESS_IP="20.188.211.19"
(1..5) |% { (Invoke-WebRequest -Uri $INGRESS_IP).Content.Split("`n") | Select-String -Pattern "results" }
The following example output shows the relevant part of the returned web site as the site switches between versions:
<div id="results"> Cats: 2 | Dogs: 4 </div>
<div id="results"> Cats: 2 | Dogs: 4 </div>
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
<div id="results"> Cats: 2 | Dogs: 4 </div>
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
Lock down traffic to version 1.1 of the application
Now let's lock down traffic to only version 1.1 of the voting-analytics component and to version 1.0 of the voting-storage component. You then define routing rules for all of the other components.
- A Virtual Service defines a set of routing rules for one or more destination services.
- A Destination Rule defines traffic policies and version-specific policies.
- A Policy defines what authentication methods can be accepted on workloads.
Use the kubectl apply command to replace the Virtual Service definition on your voting-app and add Destination Rules and Virtual Services for the other components. You'll also add a Policy to the voting namespace to ensure that all communication between services is secured using mutual TLS and client certificates.
- The Policy has peers.mtls.mode set to STRICT to ensure that mutual TLS is enforced between your services within the voting namespace.
- We also set trafficPolicy.tls.mode to ISTIO_MUTUAL in all our Destination Rules. Istio provides services with strong identities and secures communications between services using mutual TLS and client certificates that Istio transparently manages.
kubectl apply -f istio/step-2-update-and-add-routing-for-all-components.yaml --namespace voting
The following example output shows the new Policy, Destination Rules, and Virtual Services being updated/created:
virtualservice.networking.istio.io/voting-app configured
policy.authentication.istio.io/default created
destinationrule.networking.istio.io/voting-app created
destinationrule.networking.istio.io/voting-analytics created
virtualservice.networking.istio.io/voting-analytics created
destinationrule.networking.istio.io/voting-storage created
virtualservice.networking.istio.io/voting-storage created
If you open the AKS Voting app in a browser again, only the new version 1.1 of the voting-analytics component is used by the voting-app component.
You can visualize that you are now only routed to version 1.1 of your voting-analytics component as follows. Remember to use the IP address of your own Istio Ingress Gateway:
INGRESS_IP=20.188.211.19
for i in {1..5}; do curl -si $INGRESS_IP | grep results; done
If you're using PowerShell, the equivalent is:
$INGRESS_IP="20.188.211.19"
(1..5) |% { (Invoke-WebRequest -Uri $INGRESS_IP).Content.Split("`n") | Select-String -Pattern "results" }
The following example output shows the relevant part of the returned web site:
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
<div id="results"> Cats: 2/6 (33%) | Dogs: 4/6 (67%) </div>
Let's now confirm that Istio is using mutual TLS to secure communications between each of our services. For this we'll use the authn tls-check command on the istioctl client binary, which takes the following form:
istioctl authn tls-check <pod-name[.namespace]> [<service>]
This set of commands provides information about access to the specified services from all pods in a namespace that match a set of labels:
# mTLS configuration between each of the istio ingress pods and the voting-app service
kubectl get pod -n istio-system -l app=istio-ingressgateway | grep Running | cut -d ' ' -f1 | xargs -n1 -I{} istioctl authn tls-check {}.istio-system voting-app.voting.svc.cluster.local
# mTLS configuration between each of the voting-app pods and the voting-analytics service
kubectl get pod -n voting -l app=voting-app | grep Running | cut -d ' ' -f1 | xargs -n1 -I{} istioctl authn tls-check {}.voting voting-analytics.voting.svc.cluster.local
# mTLS configuration between each of the voting-app pods and the voting-storage service
kubectl get pod -n voting -l app=voting-app | grep Running | cut -d ' ' -f1 | xargs -n1 -I{} istioctl authn tls-check {}.voting voting-storage.voting.svc.cluster.local
# mTLS configuration between each of the voting-analytics version 1.1 pods and the voting-storage service
kubectl get pod -n voting -l app=voting-analytics,version=1.1 | grep Running | cut -d ' ' -f1 | xargs -n1 -I{} istioctl authn tls-check {}.voting voting-storage.voting.svc.cluster.local
If you're using PowerShell, the equivalent commands are:
# mTLS configuration between each of the istio ingress pods and the voting-app service
(kubectl get pod -n istio-system -l app=istio-ingressgateway | Select-String -Pattern "Running").Line |% { $_.Split()[0] |% { istioctl authn tls-check $($_ + ".istio-system") voting-app.voting.svc.cluster.local } }
# mTLS configuration between each of the voting-app pods and the voting-analytics service
(kubectl get pod -n voting -l app=voting-app | Select-String -Pattern "Running").Line |% { $_.Split()[0] |% { istioctl authn tls-check $($_ + ".voting") voting-analytics.voting.svc.cluster.local } }
# mTLS configuration between each of the voting-app pods and the voting-storage service
(kubectl get pod -n voting -l app=voting-app | Select-String -Pattern "Running").Line |% { $_.Split()[0] |% { istioctl authn tls-check $($_ + ".voting") voting-storage.voting.svc.cluster.local } }
# mTLS configuration between each of the voting-analytics version 1.1 pods and the voting-storage service
(kubectl get pod -n voting -l app=voting-analytics,version=1.1 | Select-String -Pattern "Running").Line |% { $_.Split()[0] |% { istioctl authn tls-check $($_ + ".voting") voting-storage.voting.svc.cluster.local } }
The following example output shows that mutual TLS is enforced for each of our queries above. The output also shows the Policy and Destination Rules that enforce the mutual TLS:
# mTLS configuration between istio ingress pods and the voting-app service
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-app.voting.svc.cluster.local:8080 OK mTLS mTLS default/voting voting-app/voting
# mTLS configuration between each of the voting-app pods and the voting-analytics service
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-analytics.voting.svc.cluster.local:8080 OK mTLS mTLS default/voting voting-analytics/voting
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-analytics.voting.svc.cluster.local:8080 OK mTLS mTLS default/voting voting-analytics/voting
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-analytics.voting.svc.cluster.local:8080 OK mTLS mTLS default/voting voting-analytics/voting
# mTLS configuration between each of the voting-app pods and the voting-storage service
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-storage.voting.svc.cluster.local:6379 OK mTLS mTLS default/voting voting-storage/voting
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-storage.voting.svc.cluster.local:6379 OK mTLS mTLS default/voting voting-storage/voting
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-storage.voting.svc.cluster.local:6379 OK mTLS mTLS default/voting voting-storage/voting
# mTLS configuration between each of the voting-analytics version 1.1 pods and the voting-storage service
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
voting-storage.voting.svc.cluster.local:6379 OK mTLS mTLS default/voting voting-storage/voting
Roll out a canary release of the application
Now let's deploy a new version 2.0 of the voting-app, voting-analytics, and voting-storage components. The new voting-storage component uses MySQL instead of Redis, and the voting-app and voting-analytics components are updated to allow them to use this new voting-storage component.
The voting-app component now supports feature flag functionality. This feature flag allows you to test the canary release capability of Istio for a subset of users.
The following diagram shows what you will have running at the end of this section.
- Version 1.0 of the voting-app component, version 1.1 of the voting-analytics component, and version 1.0 of the voting-storage component are able to communicate with each other.
- Version 2.0 of the voting-app component, version 2.0 of the voting-analytics component, and version 2.0 of the voting-storage component are able to communicate with each other.
- Version 2.0 of the voting-app component is only accessible to users that have a specific feature flag set. This change is managed using a feature flag via a cookie.
First, update the Istio Destination Rules and Virtual Services to cater for these new components. These updates ensure that you don't route traffic incorrectly to the new components and users don't get unexpected access:
kubectl apply -f istio/step-3-add-routing-for-2.0-components.yaml --namespace voting
The following example output shows the Destination Rules and Virtual Services being updated:
destinationrule.networking.istio.io/voting-app configured
virtualservice.networking.istio.io/voting-app configured
destinationrule.networking.istio.io/voting-analytics configured
virtualservice.networking.istio.io/voting-analytics configured
destinationrule.networking.istio.io/voting-storage configured
virtualservice.networking.istio.io/voting-storage configured
Next, let's add the Kubernetes objects for the new version 2.0 components. You also update the voting-storage service to include the 3306 port for MySQL:
kubectl apply -f kubernetes/step-3-update-voting-app-with-new-storage.yaml --namespace voting
The following example output shows the Kubernetes objects being successfully updated or created:
service/voting-storage configured
secret/voting-storage-secret created
deployment.apps/voting-storage-2-0 created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/voting-analytics-2-0 created
deployment.apps/voting-app-2-0 created
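The updated voting-storage Service from that manifest has roughly this shape (a sketch, not the file's exact contents; the port names are assumptions, chosen to follow Istio's requirement that service port names carry a protocol prefix):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-storage
spec:
  selector:
    app: voting-storage   # still matches both the 1.0 (Redis) and 2.0 (MySQL) pods
  ports:
  - name: tcp-redis
    port: 6379            # existing Redis port used by the 1.0 components
  - name: tcp-mysql
    port: 3306            # new MySQL port used by the 2.0 components
```

Because both versions sit behind the same service, the Istio Destination Rules and Virtual Services applied earlier are what keep each app version talking only to its matching storage version.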
Wait until all the version 2.0 pods are running. Use the kubectl get pods command with the -w watch switch to watch for changes on all pods in the voting namespace:
kubectl get pods --namespace voting -w
You should now be able to switch between version 1.0 and version 2.0 (canary) of the voting application. The feature flag toggle at the bottom of the screen sets a cookie. This cookie is used by the voting-app Virtual Service to route users to the new version 2.0.
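As an illustration of how cookie-based canary routing like this is typically expressed in Istio (the cookie name and subset names below are assumptions, not taken from the sample's manifests), the voting-app Virtual Service can match on the cookie and fall back to version 1.0 for everyone else:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: voting-app
spec:
  hosts:
  - "*"
  gateways:
  - voting-app-gateway
  http:
  - match:
    - headers:
        cookie:
          regex: ".*featureflag=on.*"   # hypothetical cookie set by the toggle
    route:
    - destination:
        host: voting-app
        subset: v2-0                    # flagged users get the canary
  - route:
    - destination:
        host: voting-app
        subset: v1-0                    # everyone else stays on 1.0
```

Match rules are evaluated in order, so the unconditional route at the end acts as the default for requests without the cookie.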
The vote counts are different between the versions of the app. This difference highlights that you are using two different storage backends.
Finalize the rollout
Once you've successfully tested the canary release, update the voting-app Virtual Service to route all traffic to version 2.0 of the voting-app component. All users then see version 2.0 of the application, regardless of whether the feature flag is set or not.
Update all the Destination Rules to remove the versions of the components you no longer want active. Then, update all the Virtual Services to stop referencing those versions.
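A finalized voting-app Virtual Service and Destination Rule then look roughly like this (a sketch; the subset name is an assumption): the cookie match and the 1.0 subset are simply gone.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: voting-app
spec:
  hosts:
  - "*"
  gateways:
  - voting-app-gateway
  http:
  - route:
    - destination:
        host: voting-app
        subset: v2-0        # no match rules remain: every request goes to 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: voting-app
spec:
  host: voting-app
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v2-0              # the 1.0 subset has been removed
    labels:
      version: "2.0"
```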
Since there's no longer any traffic to any of the older versions of the components, you can now safely delete all the deployments for those components.
You have now successfully rolled out a new version of the AKS Voting App.
Clean up
You can remove the AKS voting app we used in this scenario from your AKS cluster by deleting the voting namespace as follows:
kubectl delete namespace voting
The following example output shows that all the components of the AKS voting app have been removed from your AKS cluster.
namespace "voting" deleted
Next steps
You can explore additional scenarios using the Istio Bookinfo Application example.