Quickstart: Detect faces in an image using the Face REST API and Python

In this quickstart, you'll use the Azure Face REST API with Python to detect human faces in an image. The script draws a frame around each detected face and superimposes gender and age information on the image.

Image of a man and a woman, each with a rectangle drawn around their face and their age and gender displayed

If you don't have an Azure subscription, create a trial account before you begin.

Prerequisites

  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
    • You will need the key and endpoint from the resource you create to connect your application to the Face API. In the code below, the key and endpoint are read from environment variables, as described later in this quickstart.
    • You can use the free pricing tier (F0) to try the service and upgrade later to a paid tier for production.

Run the Jupyter notebook

You can run this quickstart as a Jupyter notebook on MyBinder. To launch Binder, select the button below, then follow the instructions in the notebook.

Binder

Create and run the sample

Alternatively, you can run this quickstart from the command line with the following steps:

  1. Copy the following code into a text editor.
  2. Make the following changes in the code where needed:
    1. Set the FACE_SUBSCRIPTION_KEY environment variable to your subscription key, or replace the expression that reads subscription_key with your key as a string.
    2. Set the FACE_ENDPOINT environment variable to the endpoint URL for your Face API resource, or edit the value of face_api_url accordingly.
    3. Optionally, replace the value of image_url with the URL of a different image that you want to analyze.
  3. Save the code as a file with a .py extension. For example, detect-face.py.
  4. Open a command prompt window.
  5. At the prompt, use the python command to run the sample. For example, python detect-face.py.
import json, os, requests

# Read the subscription key from an environment variable; fail fast if it is missing.
subscription_key = os.environ['FACE_SUBSCRIPTION_KEY']
assert subscription_key

# Build the face detection endpoint from the resource's base endpoint URL.
face_api_url = os.environ['FACE_ENDPOINT'] + '/face/v1.0/detect'

image_url = 'https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/faces.jpg'

# The subscription key is passed in this request header.
headers = {'Ocp-Apim-Subscription-Key': subscription_key}

params = {
    'detectionModel': 'detection_02',
    'returnFaceId': 'true'
}

# Detect faces in the remote image by passing its URL in the request body.
response = requests.post(face_api_url, params=params,
                         headers=headers, json={"url": image_url})
print(json.dumps(response.json(), indent=2))
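
A successful call returns HTTP 200. If the key or endpoint is wrong, the service returns an error body instead; one way to surface such failures immediately is requests' built-in status check, shown in this minimal sketch that continues the script above:

# Raise an exception with details if the call failed, for example because of
# an invalid key (401) or a wrong endpoint URL (404).
response.raise_for_status()
faces = response.json()
print(f'Detected {len(faces)} face(s).')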

Examine the response

A successful response is returned in JSON.

[
  {
    "faceId": "e93e0db1-036e-4819-b5b6-4f39e0f73509",
    "faceRectangle": {
      "top": 621,
      "left": 616,
      "width": 195,
      "height": 195
    }
  }
]
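
Each entry pairs a faceId with a faceRectangle that locates the face in pixels, measured from the image's top-left corner. As a minimal sketch (continuing the script above), you can iterate over the parsed response like this:

# Print the bounding box of each detected face.
for face in response.json():
    rect = face['faceRectangle']
    print(f"Face {face['faceId']}: top={rect['top']}, left={rect['left']}, "
          f"width={rect['width']}, height={rect['height']}")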

Extract face attributes

To extract face attributes, use detection model 1 and add the returnFaceAttributes query parameter. (The detection_02 model detects faces but does not return face attributes.)

params = {
    'detectionModel': 'detection_01',
    'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',
    'returnFaceId': 'true'
}

The response now includes face attributes. For example:

[
  {
    "faceId": "e93e0db1-036e-4819-b5b6-4f39e0f73509",
    "faceRectangle": {
      "top": 621,
      "left": 616,
      "width": 195,
      "height": 195
    },
    "faceAttributes": {
      "smile": 0,
      "headPose": {
        "pitch": 0,
        "roll": 6.8,
        "yaw": 3.7
      },
      "gender": "male",
      "age": 37,
      "facialHair": {
        "moustache": 0.4,
        "beard": 0.4,
        "sideburns": 0.1
      },
      "glasses": "NoGlasses",
      "emotion": {
        "anger": 0,
        "contempt": 0,
        "disgust": 0,
        "fear": 0,
        "happiness": 0,
        "neutral": 0.999,
        "sadness": 0.001,
        "surprise": 0
      },
      "blur": {
        "blurLevel": "high",
        "value": 0.89
      },
      "exposure": {
        "exposureLevel": "goodExposure",
        "value": 0.51
      },
      "noise": {
        "noiseLevel": "medium",
        "value": 0.59
      },
      "makeup": {
        "eyeMakeup": true,
        "lipMakeup": false
      },
      "accessories": [],
      "occlusion": {
        "foreheadOccluded": false,
        "eyeOccluded": false,
        "mouthOccluded": false
      },
      "hair": {
        "bald": 0.04,
        "invisible": false,
        "hairColor": [
          {
            "color": "black",
            "confidence": 0.98
          },
          {
            "color": "brown",
            "confidence": 0.87
          },
          {
            "color": "gray",
            "confidence": 0.85
          },
          {
            "color": "other",
            "confidence": 0.25
          },
          {
            "color": "blond",
            "confidence": 0.07
          },
          {
            "color": "red",
            "confidence": 0.02
          }
        ]
      }
    }
  },
  {
    "faceId": "37c7c4bc-fda3-4d8d-94e8-b85b8deaf878",
    "faceRectangle": {
      "top": 693,
      "left": 1503,
      "width": 180,
      "height": 180
    },
    "faceAttributes": {
      "smile": 0.003,
      "headPose": {
        "pitch": 0,
        "roll": 2,
        "yaw": -2.2
      },
      "gender": "female",
      "age": 56,
      "facialHair": {
        "moustache": 0,
        "beard": 0,
        "sideburns": 0
      },
      "glasses": "NoGlasses",
      "emotion": {
        "anger": 0,
        "contempt": 0.001,
        "disgust": 0,
        "fear": 0,
        "happiness": 0.003,
        "neutral": 0.984,
        "sadness": 0.011,
        "surprise": 0
      },
      "blur": {
        "blurLevel": "high",
        "value": 0.83
      },
      "exposure": {
        "exposureLevel": "goodExposure",
        "value": 0.41
      },
      "noise": {
        "noiseLevel": "high",
        "value": 0.76
      },
      "makeup": {
        "eyeMakeup": false,
        "lipMakeup": false
      },
      "accessories": [],
      "occlusion": {
        "foreheadOccluded": false,
        "eyeOccluded": false,
        "mouthOccluded": false
      },
      "hair": {
        "bald": 0.06,
        "invisible": false,
        "hairColor": [
          {
            "color": "black",
            "confidence": 0.99
          },
          {
            "color": "gray",
            "confidence": 0.89
          },
          {
            "color": "other",
            "confidence": 0.64
          },
          {
            "color": "brown",
            "confidence": 0.34
          },
          {
            "color": "blond",
            "confidence": 0.07
          },
          {
            "color": "red",
            "confidence": 0.03
          }
        ]
      }
    }
  }
]
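
With the rectangles and attributes in hand, you can draw the frames and age/gender labels described at the start of this quickstart. The following is a minimal sketch of one way to do it, assuming the Pillow imaging library is installed (pip install Pillow); it continues the script above and is not part of the Face API itself:

from io import BytesIO

from PIL import Image, ImageDraw

# Download the analyzed image and prepare to draw on it.
image = Image.open(BytesIO(requests.get(image_url).content))
draw = ImageDraw.Draw(image)

for face in response.json():
    rect = face['faceRectangle']
    left, top = rect['left'], rect['top']
    right, bottom = left + rect['width'], top + rect['height']
    # Draw a frame around the face ...
    draw.rectangle([left, top, right, bottom], outline='red', width=4)
    # ... and label it with the returned gender and age.
    attrs = face['faceAttributes']
    draw.text((left, bottom + 8), f"{attrs['gender']}, {attrs['age']:.0f}", fill='red')

image.show()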

Next steps

Next, explore the Face API reference documentation to learn more about the supported scenarios.