Set up Secure Sockets Layer (SSL) encryption and authentication for Apache Kafka in Azure HDInsight

This article shows you how to set up SSL encryption between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way SSL).

Important

There are two clients that you can use for Kafka applications: a Java client and a console client. Only the Java client ProducerConsumer.java can use SSL for both producing and consuming. The console producer client console-producer.sh does not work with SSL.

Note

The HDInsight Kafka console producer in version 1.1 does not support SSL.

Apache Kafka broker setup

The Kafka SSL broker setup uses four HDInsight cluster VMs in the following way:

  • head node 0 - Certificate Authority (CA)
  • worker nodes 0, 1, and 2 - brokers

Note

This guide uses self-signed certificates, but the most secure solution is to use certificates issued by trusted CAs.

The summary of the broker setup process is as follows:

  1. The following steps are repeated on each of the three worker nodes:

    1. Generate a certificate.
    2. Create a certificate signing request.
    3. Send the certificate signing request to the Certificate Authority (CA).
    4. Sign in to the CA and sign the request.
    5. SCP the signed certificate back to the worker node.
    6. SCP the public certificate of the CA to the worker node.
  2. Once you have all of the certificates, put the certs into the cert store.

  3. Go to Ambari and change the configurations.

Use the following detailed instructions to complete the broker setup:

Important

In the following code snippets, wnX is an abbreviation for one of the three worker nodes and should be substituted with wn0, wn1, or wn2 as appropriate. WorkerNode0_Name and HeadNode0_Name should be substituted with the names of the respective machines.

  1. Perform initial setup on head node 0, which for HDInsight fills the role of the Certificate Authority (CA).

    # Create a new directory 'ssl' and change into it
    mkdir ssl
    cd ssl
    
  2. Perform the same initial setup on each of the brokers (worker nodes 0, 1, and 2).

    # Create a new directory 'ssl' and change into it
    mkdir ssl
    cd ssl
    
  3. On each of the worker nodes, execute the following steps using the code snippet below.

    1. Create a keystore and populate it with a new private certificate.
    2. Create a certificate signing request.
    3. SCP the certificate signing request to the CA (head node 0).
    keytool -genkey -keystore kafka.server.keystore.jks -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -storetype pkcs12
    keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass "MyServerPassword123" -keypass "MyServerPassword123"
    scp cert-file sshuser@HeadNode0_Name:~/ssl/wnX-cert-sign-request
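    # Optional sanity check (not part of the original guide, a minimal sketch assuming
    # the keystore and password from the commands above): confirm the new private key
    # entry exists in the keystore before sending the signing request.
    keytool -list -v -keystore kafka.server.keystore.jks -storepass "MyServerPassword123"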
    
  4. On the CA machine, run the following command to create the ca-cert and ca-key files:

    openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/CN=Kafka-Security-CA" -keyout ca-key -out ca-cert -nodes
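    # Optional check (not part of the original guide): inspect the CA certificate's
    # subject and validity dates to confirm it was created as expected.
    openssl x509 -in ca-cert -noout -subject -dates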
    
  5. Switch to the CA machine and sign all of the received certificate signing requests:

    openssl x509 -req -CA ca-cert -CAkey ca-key -in wn0-cert-sign-request -out wn0-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
    openssl x509 -req -CA ca-cert -CAkey ca-key -in wn1-cert-sign-request -out wn1-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
    openssl x509 -req -CA ca-cert -CAkey ca-key -in wn2-cert-sign-request -out wn2-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
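    # Optional check (not part of the original guide): confirm each signed certificate
    # chains back to the CA certificate; each file should report "OK".
    openssl verify -CAfile ca-cert wn0-cert-signed wn1-cert-signed wn2-cert-signed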
    
  6. Send the signed certificates back to the worker nodes from the CA (head node 0).

    scp wn0-cert-signed sshuser@WorkerNode0_Name:~/ssl/cert-signed
    scp wn1-cert-signed sshuser@WorkerNode1_Name:~/ssl/cert-signed
    scp wn2-cert-signed sshuser@WorkerNode2_Name:~/ssl/cert-signed
    
  7. Send the public certificate of the CA to each worker node.

    scp ca-cert sshuser@WorkerNode0_Name:~/ssl/ca-cert
    scp ca-cert sshuser@WorkerNode1_Name:~/ssl/ca-cert
    scp ca-cert sshuser@WorkerNode2_Name:~/ssl/ca-cert
    
  8. On each worker node, add the CA's public certificate to the truststore and keystore. Then add the worker node's own signed certificate to the keystore.

    keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
    keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
    keytool -keystore kafka.server.keystore.jks -import -file cert-signed -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
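    # Optional check (not part of the original guide): the truststore should now list
    # the CARoot entry, and the keystore should list both CARoot and the node's own key entry.
    keytool -list -keystore kafka.server.truststore.jks -storepass "MyServerPassword123"
    keytool -list -keystore kafka.server.keystore.jks -storepass "MyServerPassword123"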
    
    

Update Kafka configuration to use SSL and restart brokers

You have now set up each Kafka broker with a keystore and truststore, and imported the correct certificates. Next, modify the related Kafka configuration properties using Ambari, and then restart the Kafka brokers.

To complete the configuration modification, do the following steps:

  1. Sign in to the Azure portal and select your Azure HDInsight Apache Kafka cluster.

  2. Go to the Ambari UI by clicking Ambari home under Cluster dashboards.

  3. Under Kafka Broker, set the listeners property to PLAINTEXT://localhost:9092,SSL://localhost:9093

  4. Under Advanced kafka-broker, set the security.inter.broker.protocol property to SSL

    (Screenshot: editing the Kafka SSL configuration properties in Ambari)

  5. Under Custom kafka-broker, set the ssl.client.auth property to required. This step is required only if you are setting up both authentication and encryption.

    (Screenshot: editing the Kafka SSL configuration properties in Ambari)

  6. Add new configuration properties to the server.properties file.

    # Configure Kafka to advertise IP addresses instead of FQDN
    IP_ADDRESS=$(hostname -i)
    echo advertised.listeners=$IP_ADDRESS
    sed -i.bak -e '/advertised/{/advertised@/!d;}' /usr/hdp/current/kafka-broker/conf/server.properties
    echo "advertised.listeners=PLAINTEXT://$IP_ADDRESS:9092,SSL://$IP_ADDRESS:9093" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.keystore.location=/home/sshuser/ssl/kafka.server.keystore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.keystore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.key.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.truststore.location=/home/sshuser/ssl/kafka.server.truststore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.truststore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
    
  7. Go to the Ambari configuration UI and verify that the new properties show up under Advanced kafka-env and the kafka-env template property.

    For HDI version 3.6:

    (Screenshot: editing the kafka-env template property in Ambari)

    For HDI version 4.0:

    (Screenshot: editing the kafka-env template property in Ambari, HDI 4.0)

  8. Restart all Kafka brokers.

  9. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093.
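
    As a quick check of the SSL listener itself, you can also attempt a TLS handshake against a broker. The snippet below is a sketch that assumes you run it from a node that has the CA certificate in ~/ssl and that <FQDN_WORKER_NODE> is replaced with a broker's fully qualified name.

    # Attempt a TLS handshake against the broker's SSL listener on port 9093;
    # a successful handshake prints the broker certificate chain.
    echo | openssl s_client -connect <FQDN_WORKER_NODE>:9093 -CAfile ~/ssl/ca-cert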

Client setup (without authentication)

If you don't need authentication, the summary of the steps to set up only SSL encryption is:

  1. Sign in to the CA (active head node).
  2. Copy the CA certificate from the CA machine (head node 0) to the client machine.
  3. Sign in to the client machine (standby head node hn1) and navigate to the ~/ssl folder.
  4. Import the CA certificate to the truststore.
  5. Import the CA certificate to the keystore.

These steps are detailed in the following code snippets.

  1. Sign in to the CA node.

    ssh sshuser@HeadNode0_Name
    cd ssl
    
  2. Copy the ca-cert to the client machine.

    scp ca-cert sshuser@HeadNode1_Name:~/ssl/ca-cert
    
  3. Sign in to the client machine (standby head node).

    ssh sshuser@HeadNode1_Name
    cd ssl
    
  4. Import the CA certificate to the truststore.

    keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
    
  5. Import the CA certificate to the keystore.

    keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
    
  6. Create the file client-ssl-auth.properties. It should have the following lines:

    security.protocol=SSL
    ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
    ssl.truststore.password=MyClientPassword123
    

Client setup (with authentication)

Note

The following steps are required only if you are setting up both SSL encryption and authentication. If you are setting up only encryption, see Client setup (without authentication).

The following four steps summarize the tasks needed to complete the client setup:

  1. Sign in to the client machine (standby head node).
  2. Create a Java keystore and get a signed certificate for the broker. Then copy the certificate to the VM where the CA is running.
  3. Switch to the CA machine (active head node) to sign the client certificate.
  4. Go to the client machine (standby head node) and navigate to the ~/ssl folder. Copy the signed certificate back to the client machine.

The details of each step are given below.

  1. Sign in to the client machine (standby head node).

    ssh sshuser@HeadNode1_Name
    
  2. Remove any existing ssl directory.

    rm -R ~/ssl
    mkdir ssl
    cd ssl
    
  3. Create a Java keystore and create a certificate signing request.

    keytool -genkey -keystore kafka.client.keystore.jks -validity 365 -storepass "MyClientPassword123" -keypass "MyClientPassword123" -dname "CN=HEADNODE1_FQDN" -storetype pkcs12
    
    keytool -keystore kafka.client.keystore.jks -certreq -file client-cert-sign-request -storepass "MyClientPassword123" -keypass "MyClientPassword123"
    
  4. Copy the certificate signing request to the CA.

    scp client-cert-sign-request sshuser@HeadNode0_Name:~/ssl/client-cert-sign-request
    
  5. Switch to the CA machine (active head node) and sign the client certificate.

    ssh sshuser@HeadNode0_Name
    cd ssl
    openssl x509 -req -CA ca-cert -CAkey ca-key -in ~/ssl/client-cert-sign-request -out ~/ssl/client-cert-signed -days 365 -CAcreateserial -passin pass:MyClientPassword123
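    # Optional check (not part of the original guide): confirm the signed client
    # certificate chains back to the CA; the command should report "OK".
    openssl verify -CAfile ca-cert ~/ssl/client-cert-signed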
    
  6. Copy the signed client certificate from the CA (active head node) to the client machine.

    scp client-cert-signed sshuser@HeadNode1_Name:~/ssl/client-cert-signed
    
  7. Copy the ca-cert to the client machine.

    scp ca-cert sshuser@HeadNode1_Name:~/ssl/ca-cert
    
  8. Create the client store with the signed certificate, and import the CA certificate into the keystore and truststore:

    keytool -keystore kafka.client.keystore.jks -import -file client-cert-signed -storepass MyClientPassword123 -keypass MyClientPassword123 -noprompt
    
    keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass MyClientPassword123 -keypass MyClientPassword123 -noprompt
    
    keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass MyClientPassword123 -keypass MyClientPassword123 -noprompt
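    # Optional check (not part of the original guide): the client keystore should now
    # contain the client's own key entry plus CARoot, and the truststore the CARoot entry.
    keytool -list -keystore kafka.client.keystore.jks -storepass MyClientPassword123
    keytool -list -keystore kafka.client.truststore.jks -storepass MyClientPassword123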
    
  9. Create a file client-ssl-auth.properties. It should have the following lines:

    security.protocol=SSL
    ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
    ssl.truststore.password=MyClientPassword123
    ssl.keystore.location=/home/sshuser/ssl/kafka.client.keystore.jks
    ssl.keystore.password=MyClientPassword123
    ssl.key.password=MyClientPassword123

Verification

Note

If HDInsight 4.0 and Kafka 2.1 are installed, you can use the console producer and consumer to verify your setup. If not, run the Kafka producer on port 9092 and send messages to the topic, and then use the Kafka consumer on port 9093, which uses SSL.

Kafka 2.1 or above

  1. Create a topic if it doesn't exist already.

    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --create --topic topic1 --partitions 2 --replication-factor 2
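    # Optional check (not part of the original guide): describe the topic to confirm
    # it was created with the expected partitions and replicas.
    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --describe --topic topic1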
    
  2. Start the console producer, and provide the path to client-ssl-auth.properties as the configuration file for the producer.

    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9093 --topic topic1 --producer.config ~/ssl/client-ssl-auth.properties
    
  3. Open another SSH connection to the client machine, start the console consumer, and provide the path to client-ssl-auth.properties as the configuration file for the consumer.

    /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
    

Kafka 1.1

  1. Create a topic if it doesn't exist already.

    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE_0>:2181 --create --topic topic1 --partitions 2 --replication-factor 2
    
  2. Start the console producer. Because the Kafka 1.1 console producer does not support SSL, it produces on the PLAINTEXT listener on port 9092 and does not take a configuration file.

    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9092 --topic topic1 
    
  3. Open another SSH connection to the client machine, start the console consumer, and provide the path to client-ssl-auth.properties as the configuration file for the consumer.

    /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
    

Next steps