Quickstart: Use the Face client library
Get started with facial recognition using the Face client library for .NET. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.
Prerequisites
- Azure subscription - Create a trial subscription
- The Visual Studio IDE or the current version of .NET Core.
- Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Identify faces
Create a new C# application
Create a new .NET Core application in Visual Studio.
Install the client library
Once you've created a new project, install the client library by right-clicking on the project solution in Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.Face. Select version 2.7.0-preview.1, and then select Install.
Add the following code to the Program.cs file.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
namespace FaceQuickstart
{
class Program
{
static string personGroupId = Guid.NewGuid().ToString();
// URL path for the images.
const string IMAGE_BASE_URL = "https://csdx.blob.core.chinacloudapi.cn/resources/Face/Images/";
// From your Face subscription in the Azure portal, get your subscription key and endpoint.
const string SUBSCRIPTION_KEY = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE";
const string ENDPOINT = "PASTE_YOUR_FACE_ENDPOINT_HERE";
static void Main(string[] args)
{
// Recognition model 4 was released in 2021 February.
// It is recommended since its accuracy is improved
// on faces wearing masks compared with model 3,
// and its overall accuracy is improved compared
// with models 1 and 2.
const string RECOGNITION_MODEL4 = RecognitionModel.Recognition04;
// Authenticate.
IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);
// Identify - recognize a face(s) in a person group (a person group is created in this example).
IdentifyInPersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4).Wait();
Console.WriteLine("End of quickstart.");
}
/*
* AUTHENTICATE
* Uses subscription key and endpoint to create a client.
*/
public static IFaceClient Authenticate(string endpoint, string key)
{
return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
}
// Detect faces from an image URL for recognition purposes. This is a helper method for other functions in this quickstart.
// Parameter `returnFaceId` of `DetectWithUrlAsync` must be set to `true` (the default) for recognition purposes.
// Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute.
// The recognition model must be set to recognition_03 or recognition_04 as a result.
// Result faces with insufficient quality for recognition are filtered out.
// The field `faceId` in the returned `DetectedFace`s is used in Face - Find Similar, Face - Verify, and Face - Identify.
// It will expire 24 hours after the detection call.
private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognition_model)
{
// Detect faces from the image URL, using the recognition model passed in by the caller.
// We use detection model 3 because we are retrieving the QualityForRecognition attribute.
IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognition_model, detectionModel: DetectionModel.Detection03, returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
List<DetectedFace> sufficientQualityFaces = new List<DetectedFace>();
foreach (DetectedFace detectedFace in detectedFaces){
var faceQualityForRecognition = detectedFace.FaceAttributes.QualityForRecognition;
if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value >= QualityForRecognition.Medium)){
sufficientQualityFaces.Add(detectedFace);
}
}
Console.WriteLine($"{detectedFaces.Count} face(s) with {sufficientQualityFaces.Count} having sufficient quality for recognition detected from image `{Path.GetFileName(url)}`");
return sufficientQualityFaces.ToList();
}
/*
* IDENTIFY FACES
* To identify faces, you need to create and define a person group.
* The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a PersonGroup and returns
* a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects,
* which have a prediction confidence value.
*/
public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
{
Console.WriteLine("========IDENTIFY FACES========");
Console.WriteLine();
// Create a dictionary for all your images, grouping similar ones under the same key.
Dictionary<string, string[]> personDictionary =
new Dictionary<string, string[]>
{ { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
{ "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
{ "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } },
{ "Family1-Daughter", new[] { "Family1-Daughter1.jpg", "Family1-Daughter2.jpg" } },
{ "Family2-Lady", new[] { "Family2-Lady1.jpg", "Family2-Lady2.jpg" } },
{ "Family2-Man", new[] { "Family2-Man1.jpg", "Family2-Man2.jpg" } }
};
// A group photo that includes some of the persons you seek to identify from your dictionary.
string sourceImageFileName = "identification1.jpg";
// Create a person group.
Console.WriteLine($"Create a person group ({personGroupId}).");
await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
// The similar faces will be grouped into a single person group person.
foreach (var groupedFace in personDictionary.Keys)
{
// Limit TPS
await Task.Delay(250);
Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
Console.WriteLine($"Create a person group person '{groupedFace}'.");
// Add face to the person group person.
foreach (var similarImage in personDictionary[groupedFace])
{
Console.WriteLine($"Check whether image is of sufficient quality for recognition");
IList<DetectedFace> detectedFaces = await client.Face.DetectWithUrlAsync($"{url}{similarImage}",
recognitionModel: recognitionModel,
detectionModel: DetectionModel.Detection03,
returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
bool sufficientQuality = true;
foreach (var face in detectedFaces)
{
var faceQualityForRecognition = face.FaceAttributes.QualityForRecognition;
// Only "high" quality images are recommended for person enrollment
if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.High)){
sufficientQuality = false;
break;
}
}
if (!sufficientQuality){
continue;
}
Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
$"{url}{similarImage}", similarImage);
}
}
// Start to train the person group.
Console.WriteLine();
Console.WriteLine($"Train person group {personGroupId}.");
await client.PersonGroup.TrainAsync(personGroupId);
// Wait until the training is completed.
while (true)
{
await Task.Delay(1000);
var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
Console.WriteLine($"Training status: {trainingStatus.Status}.");
if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
}
Console.WriteLine();
List<Guid> sourceFaceIds = new List<Guid>();
// Detect faces from source image url.
List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
// Add detected faceId to sourceFaceIds.
foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
// Identify the faces in a person group.
var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);
foreach (var identifyResult in identifyResults)
{
if (identifyResult.Candidates.Count==0) {
Console.WriteLine($"No person is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId},");
continue;
}
Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
Console.WriteLine($"Person '{person.Name}' is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId}," +
$" confidence: {identifyResult.Candidates[0].Confidence}.");
}
Console.WriteLine();
}
}
}
Enter your key and endpoint in the corresponding fields.
Important
Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.
Important
Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Cognitive Services security article.
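For example, a minimal sketch of loading the credentials from environment variables instead of hard-coding them (the variable names FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are illustrative, not required by the SDK):
// Hypothetical variable names; set them in your shell before running the app.
static readonly string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_SUBSCRIPTION_KEY");
static readonly string ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT");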
Run the application
Run the application by clicking the Debug button at the top of the IDE window.
Output
========IDENTIFY FACES========
Create a person group (3972c063-71b3-4328-8579-6d190ee76f99).
Create a person group person 'Family1-Dad'.
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`
Create a person group person 'Family1-Daughter'.
Create a person group person 'Family2-Lady'.
Add face to the person group person(Family2-Lady) from image `Family2-Lady1.jpg`
Add face to the person group person(Family2-Lady) from image `Family2-Lady2.jpg`
Create a person group person 'Family2-Man'.
Add face to the person group person(Family2-Man) from image `Family2-Man1.jpg`
Add face to the person group person(Family2-Man) from image `Family2-Man2.jpg`
Train person group 3972c063-71b3-4328-8579-6d190ee76f99.
Training status: Succeeded.
4 face(s) with 4 having sufficient quality for recognition detected from image `identification1.jpg`
Person 'Family1-Dad' is identified for face in: identification1.jpg - 994bfd7a-0d8f-4fae-a5a6-c524664cbee7, confidence: 0.96725.
Person 'Family1-Mom' is identified for face in: identification1.jpg - 0c9da7b9-a628-429d-97ff-cebe7c638fb5, confidence: 0.96921.
No person is identified for face in: identification1.jpg - a881259c-e811-4f7e-a35e-a453e95ca18f,
Person 'Family1-Son' is identified for face in: identification1.jpg - 53772235-8193-46eb-bdfc-1ebc25ea062e, confidence: 0.92886.
End of quickstart.
Tip
The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results the models produce might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
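A minimal sketch of such a migration, reusing the client and enrollment images from the example above (the group ID, name, and single image shown are illustrative; a PersonGroup's recognition model is fixed at creation, so you create a new group rather than update the old one):
// Create a fresh group pinned to the newer model.
string migratedGroupId = Guid.NewGuid().ToString();
await client.PersonGroup.CreateAsync(migratedGroupId, "migrated-group", recognitionModel: RecognitionModel.Recognition04);
// Re-enroll each person with the same images, then retrain.
Person dad = await client.PersonGroupPerson.CreateAsync(migratedGroupId, name: "Family1-Dad");
await client.PersonGroupPerson.AddFaceFromUrlAsync(migratedGroupId, dad.PersonId, $"{IMAGE_BASE_URL}Family1-Dad1.jpg");
await client.PersonGroup.TrainAsync(migratedGroupId);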
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
To delete the PersonGroup you created in this quickstart, run the following code in your program:
// At end, delete person groups in both regions (since testing only)
Console.WriteLine("========DELETE PERSON GROUP========");
Console.WriteLine();
DeletePersonGroup(client, personGroupId).Wait();
Define the deletion method with the following code:
/*
* DELETE PERSON GROUP
* After this entire example is executed, delete the person group in your Azure account,
* otherwise you cannot recreate one with the same name (if running example repeatedly).
*/
public static async Task DeletePersonGroup(IFaceClient client, String personGroupId)
{
await client.PersonGroup.DeleteAsync(personGroupId);
Console.WriteLine($"Deleted the person group {personGroupId}.");
}
Next steps
In this quickstart, you learned how to use the Face client library for .NET to do basic facial recognition. Next, learn about the different face detection models and how to specify the right model for your use case.
Quickstart: Face client library for JavaScript
Get started with facial recognition using the Face client library for JavaScript. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.
Prerequisites
- Azure subscription - Create a trial subscription
- The current version of Node.js
- Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Identify faces
Create a new Node.js application
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the npm init command to create a node application with a package.json file.
npm init
Install the @azure/ms-rest-js and @azure/cognitiveservices-face NPM packages, plus uuid for generating IDs:
npm install @azure/cognitiveservices-face @azure/ms-rest-js uuid
Your app's package.json file will be updated with the dependencies.
Create a file named index.js, open it in a text editor, and paste in the following code:
'use strict';
const msRest = require("@azure/ms-rest-js");
const Face = require("@azure/cognitiveservices-face");
const { v4: uuid } = require('uuid');
key = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE";
endpoint = "PASTE_YOUR_FACE_ENDPOINT_HERE";
const credentials = new msRest.ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } });
const client = new Face.FaceClient(credentials, endpoint);
const image_base_url = "https://csdx.blob.core.chinacloudapi.cn/resources/Face/Images/";
const person_group_id = uuid();
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
async function DetectFaceRecognize(url) {
// Detect faces from image URL. Since only recognizing, use the recognition model 4.
// We use detection model 3 because we are only retrieving the qualityForRecognition attribute.
// Result faces with quality for recognition lower than "medium" are filtered out.
let detected_faces = await client.face.detectWithUrl(url,
{
detectionModel: "detection_03",
recognitionModel: "recognition_04",
returnFaceAttributes: ["QualityForRecognition"]
});
return detected_faces.filter(face => face.faceAttributes.qualityForRecognition == 'high' || face.faceAttributes.qualityForRecognition == 'medium');
}
async function AddFacesToPersonGroup(person_dictionary, person_group_id) {
console.log ("Adding faces to person group...");
// The similar faces will be grouped into a single person group person.
await Promise.all (Object.keys(person_dictionary).map (async function (key) {
const value = person_dictionary[key];
// Wait briefly so we do not exceed rate limits.
await sleep (2000);
let person = await client.personGroupPerson.create(person_group_id, { name : key });
console.log("Create a persongroup person: " + key + ".");
// Add faces to the person group person.
await Promise.all (value.map (async function (similar_image) {
// Check if the image is of sufficient quality for recognition.
let sufficientQuality = true;
let detected_faces = await client.face.detectWithUrl(image_base_url + similar_image,
{
returnFaceAttributes: ["QualityForRecognition"],
detectionModel: "detection_03",
recognitionModel: "recognition_03"
});
detected_faces.forEach(detected_face => {
if (detected_face.faceAttributes.qualityForRecognition != 'high'){
sufficientQuality = false;
}
});
// Quality is sufficient, add to group.
if (sufficientQuality){
console.log("Add face to the person group person: (" + key + ") from image: " + similar_image + ".");
await client.personGroupPerson.addFaceFromUrl(person_group_id, person.personId, image_base_url + similar_image);
}
}));
}));
console.log ("Done adding faces to person group.");
}
async function WaitForPersonGroupTraining(person_group_id) {
// Wait so we do not exceed rate limits.
console.log ("Waiting 10 seconds...");
await sleep (10000);
let result = await client.personGroup.getTrainingStatus(person_group_id);
console.log("Training status: " + result.status + ".");
if (result.status !== "succeeded") {
await WaitForPersonGroupTraining(person_group_id);
}
}
/* NOTE This function might not work with the free tier of the Face service
because it might exceed the rate limits. If that happens, try inserting calls
to sleep() between calls to the Face service.
*/
async function IdentifyInPersonGroup() {
console.log("========IDENTIFY FACES========");
console.log();
// Create a dictionary for all your images, grouping similar ones under the same key.
const person_dictionary = {
"Family1-Dad" : ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
"Family1-Mom" : ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
"Family1-Son" : ["Family1-Son1.jpg", "Family1-Son2.jpg"],
"Family1-Daughter" : ["Family1-Daughter1.jpg", "Family1-Daughter2.jpg"],
"Family2-Lady" : ["Family2-Lady1.jpg", "Family2-Lady2.jpg"],
"Family2-Man" : ["Family2-Man1.jpg", "Family2-Man2.jpg"]
};
// A group photo that includes some of the persons you seek to identify from your dictionary.
let source_image_file_name = "identification1.jpg";
// Create a person group.
console.log("Creating a person group with ID: " + person_group_id);
await client.personGroup.create(person_group_id, person_group_id, {recognitionModel : "recognition_04" });
await AddFacesToPersonGroup(person_dictionary, person_group_id);
// Start to train the person group.
console.log();
console.log("Training person group: " + person_group_id + ".");
await client.personGroup.train(person_group_id);
await WaitForPersonGroupTraining(person_group_id);
console.log();
// Detect faces from source image url and only take those with sufficient quality for recognition.
let face_ids = (await DetectFaceRecognize(image_base_url + source_image_file_name)).map (face => face.faceId);
// Identify the faces in a person group.
let results = await client.face.identify(face_ids, { personGroupId : person_group_id});
await Promise.all (results.map (async function (result) {
    // Guard against faces with no candidates, as the C# example does.
    if (result.candidates.length === 0) {
        console.log("No person identified for face in: " + source_image_file_name + " with ID: " + result.faceId + ".");
        return;
    }
    let person = await client.personGroupPerson.get(person_group_id, result.candidates[0].personId);
    console.log("Person: " + person.name + " is identified for face in: " + source_image_file_name + " with ID: " + result.faceId + ". Confidence: " + result.candidates[0].confidence + ".");
}));
console.log();
}
async function main() {
await IdentifyInPersonGroup();
console.log ("Done.");
}
main();
Enter your subscription key and endpoint in the corresponding fields.
Important
Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.
Important
Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Cognitive Services security article.
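For example, a minimal sketch of reading the credentials from environment variables instead of hard-coding them (the variable names FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are illustrative):
// Hypothetical variable names; set them in your shell before running node.
const key = process.env.FACE_SUBSCRIPTION_KEY;
const endpoint = process.env.FACE_ENDPOINT;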
Run the application with the node command on your quickstart file.
node index.js
Output
========IDENTIFY FACES========
Creating a person group with ID: c08484e0-044b-4610-8b7e-c957584e5d2d
Adding faces to person group...
Create a persongroup person: Family1-Dad.
Create a persongroup person: Family1-Mom.
Create a persongroup person: Family2-Lady.
Create a persongroup person: Family1-Son.
Create a persongroup person: Family1-Daughter.
Create a persongroup person: Family2-Man.
Add face to the person group person: (Family1-Son) from image: Family1-Son2.jpg.
Add face to the person group person: (Family1-Dad) from image: Family1-Dad2.jpg.
Add face to the person group person: (Family1-Mom) from image: Family1-Mom1.jpg.
Add face to the person group person: (Family2-Man) from image: Family2-Man1.jpg.
Add face to the person group person: (Family1-Son) from image: Family1-Son1.jpg.
Add face to the person group person: (Family2-Lady) from image: Family2-Lady2.jpg.
Add face to the person group person: (Family1-Mom) from image: Family1-Mom2.jpg.
Add face to the person group person: (Family1-Dad) from image: Family1-Dad1.jpg.
Add face to the person group person: (Family2-Man) from image: Family2-Man2.jpg.
Add face to the person group person: (Family2-Lady) from image: Family2-Lady1.jpg.
Done adding faces to person group.
Training person group: c08484e0-044b-4610-8b7e-c957584e5d2d.
Waiting 10 seconds...
Training status: succeeded.
Person: Family1-Mom is identified for face in: identification1.jpg with ID: b7f7f542-c338-4a40-ad52-e61772bc6e14. Confidence: 0.96921.
Person: Family1-Son is identified for face in: identification1.jpg with ID: 600dc1b4-b2c4-4516-87de-edbbdd8d7632. Confidence: 0.92886.
Person: Family1-Dad is identified for face in: identification1.jpg with ID: e83b494f-9ad2-473f-9d86-3de79c01e345. Confidence: 0.96725.
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Next steps
In this quickstart, you learned how to use the Face client library for JavaScript to do basic facial recognition. Next, learn about the different face detection models and how to specify the right model for your use case.
Get started with facial recognition using the Face client library for Python. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.
Prerequisites
- Azure subscription - Create a trial subscription
- Python 3.x
  - Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
- Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Identify faces
Install the client library
After installing Python, you can install the client library with:
pip install --upgrade azure-cognitiveservices-vision-face
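The script below also uses the requests and Pillow packages for the optional rectangle-drawing example, so install them too if you plan to run that part (the PyPI package names are requests and pillow):
pip install requests pillow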
Create a new Python application
Create a new Python script, such as quickstart-file.py. Then open it in your preferred editor or IDE and paste in the following code.
import asyncio
import io
import glob
import os
import sys
import time
import uuid
import requests
from urllib.parse import urlparse
from io import BytesIO
# To install this module, run:
# python -m pip install Pillow
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, QualityForRecognition
# This key will serve all examples in this document.
KEY = "PASTE_YOUR_FACE_SUBSCRIPTION_KEY_HERE"
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = "PASTE_YOUR_FACE_ENDPOINT_HERE"
# Base url for the Verify and Facelist/Large Facelist operations
IMAGE_BASE_URL = 'https://csdx.blob.core.chinacloudapi.cn/resources/Face/Images/'
# Used in the Person Group Operations and Delete Person Group examples.
# You can call list_person_groups to print a list of preexisting PersonGroups.
# SOURCE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
# Used for the Delete Person Group example.
TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
# Create an authenticated FaceClient.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
'''
Detect faces
Detect faces in two images (get ID), draw rectangle around a third image.
'''
print('-----------------------------')
print()
print('DETECT FACES')
print()
# Detect a face in an image that contains a single face
single_face_image_url = 'https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg'
single_image_name = os.path.basename(single_face_image_url)
# We use detection model 3 to get better performance.
detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_03')
if not detected_faces:
raise Exception('No face detected from image {}'.format(single_image_name))
# Display the detected face ID in the first single-face image.
# Face IDs are used for comparison to faces (their IDs) detected in other images.
print('Detected face ID from', single_image_name, ':')
for face in detected_faces: print (face.face_id)
print()
# Save this ID for use in Find Similar
first_image_face_ID = detected_faces[0].face_id
# Detect the faces in an image that contains multiple faces
# Each detected face gets assigned a new ID
multi_face_image_url = "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
multi_image_name = os.path.basename(multi_face_image_url)
# We use detection model 3 to get better performance.
detected_faces2 = face_client.face.detect_with_url(url=multi_face_image_url, detection_model='detection_03')
print('Detected face IDs from', multi_image_name, ':')
if not detected_faces2:
raise Exception('No face detected from image {}.'.format(multi_image_name))
else:
for face in detected_faces2:
print(face.face_id)
print()
'''
Print image and draw rectangles around faces
'''
# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
# We use detection model 3 to get better performance.
detected_faces = face_client.face.detect_with_url(url=single_face_image_url, detection_model='detection_03')
if not detected_faces:
raise Exception('No face detected from image {}'.format(single_image_name))
# Convert width height to a point in a rectangle
def getRectangle(faceDictionary):
rect = faceDictionary.face_rectangle
left = rect.left
top = rect.top
right = left + rect.width
bottom = top + rect.height
return ((left, top), (right, bottom))
def drawFaceRectangles() :
# Download the image from the url
response = requests.get(single_face_image_url)
img = Image.open(BytesIO(response.content))
# For each face returned use the face rectangle and draw a red box.
print('Drawing rectangle around face... see popup for results.')
draw = ImageDraw.Draw(img)
for face in detected_faces:
draw.rectangle(getRectangle(face), outline='red')
# Display the image in the default image browser.
img.show()
# Uncomment this to show the face rectangles.
# drawFaceRectangles()
print()
'''
END - Detect faces
'''
'''
Create/Train/Detect/Identify Person Group
This example creates a Person Group, then trains it. It can then be used to detect and identify faces in other group images.
'''
print('-----------------------------')
print()
print('PERSON GROUP OPERATIONS')
print()
'''
Create the PersonGroup
'''
# Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
print('Person group:', PERSON_GROUP_ID)
face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID, recognition_model='recognition_04')
# Define woman friend
woman = face_client.person_group_person.create(PERSON_GROUP_ID, "Woman")
# Define man friend
man = face_client.person_group_person.create(PERSON_GROUP_ID, "Man")
# Define child friend
child = face_client.person_group_person.create(PERSON_GROUP_ID, "Child")
'''
Detect faces and register to correct person
'''
# Find all jpeg images of friends in working directory
woman_images = [file for file in glob.glob('*.jpg') if file.startswith("w")]
man_images = [file for file in glob.glob('*.jpg') if file.startswith("m")]
child_images = [file for file in glob.glob('*.jpg') if file.startswith("ch")]
# Add to a woman person
for image in woman_images:
    w = open(image, 'r+b')
    # Check if the local image is of sufficient quality for recognition.
    # (Detect on the enrollment image itself, not the unrelated URL used earlier.)
    sufficientQuality = True
    detected_faces = face_client.face.detect_with_stream(w, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
    for face in detected_faces:
        if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
            sufficientQuality = False
            break
    if not sufficientQuality: continue
    w.seek(0)  # Rewind the stream before uploading the same file.
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, woman.person_id, w)
# Add to a man person
for image in man_images:
    m = open(image, 'r+b')
    # Check if the local image is of sufficient quality for recognition.
    sufficientQuality = True
    detected_faces = face_client.face.detect_with_stream(m, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
    for face in detected_faces:
        if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
            sufficientQuality = False
            break
    if not sufficientQuality: continue
    m.seek(0)  # Rewind the stream before uploading the same file.
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, man.person_id, m)
# Add to a child person
for image in child_images:
    ch = open(image, 'r+b')
    # Check if the local image is of sufficient quality for recognition.
    sufficientQuality = True
    detected_faces = face_client.face.detect_with_stream(ch, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
    for face in detected_faces:
        if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
            sufficientQuality = False
            break
    if not sufficientQuality: continue
    ch.seek(0)  # Rewind the stream before uploading the same file.
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, child.person_id, ch)
'''
Train PersonGroup
'''
print()
print('Training the person group...')
# Train the person group
face_client.person_group.train(PERSON_GROUP_ID)
while (True):
training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
print("Training status: {}.".format(training_status.status))
print()
if (training_status.status is TrainingStatusType.succeeded):
break
elif (training_status.status is TrainingStatusType.failed):
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
sys.exit('Training the person group has failed.')
time.sleep(5)
'''
Identify a face against a defined PersonGroup
'''
# Group image for testing against
test_image_array = glob.glob('test-image-person-group.jpg')
image = open(test_image_array[0], 'r+b')
print('Pausing for 60 seconds to avoid triggering rate limit on Trial...')
time.sleep (60)
# Detect faces
face_ids = []
# We use detection model 3 to get better performance, recognition model 4 to support quality for recognition attribute.
faces = face_client.face.detect_with_stream(image, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
for face in faces:
# Only take the face if it is of sufficient quality.
if face.face_attributes.quality_for_recognition == QualityForRecognition.high or face.face_attributes.quality_for_recognition == QualityForRecognition.medium:
face_ids.append(face.face_id)
# Identify faces
results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
print('Identifying faces in {}'.format(os.path.basename(image.name)))
if not results:
print('No person identified in the person group for faces from {}.'.format(os.path.basename(image.name)))
for person in results:
if len(person.candidates) > 0:
print('Person for face ID {} is identified in {} with a confidence of {}.'.format(person.face_id, os.path.basename(image.name), person.candidates[0].confidence)) # Get topmost confidence score
else:
print('No person identified for face ID {} in {}.'.format(person.face_id, os.path.basename(image.name)))
print()
'''
END - Create/Train/Detect/Identify Person Group
'''
print()
print('-----------------------------')
print('End of quickstart.')
Enter your subscription key and endpoint in the corresponding fields.
Important
Go to the Azure portal. If the Face resource you created in the Prerequisites section deployed successfully, click the Go to resource button under Next Steps. You can find your key and endpoint on the resource's Keys and Endpoint page, under Resource Management.
Important
Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Cognitive Services security article.
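For example, a minimal sketch of reading the credentials from environment variables instead of hard-coding them (the variable names FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT are illustrative; os is already imported by the script):
# Hypothetical variable names; set them in your shell before running the script.
KEY = os.environ['FACE_SUBSCRIPTION_KEY']
ENDPOINT = os.environ['FACE_ENDPOINT']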
Run the face recognition app from the application directory with the python command.
python quickstart-file.py
Tip
The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results the models produce might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
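A minimal sketch of that migration in this script's terms (a group's recognition model is fixed at creation, so you create a new group, re-enroll the same images, and retrain; the IDs shown are illustrative):
# Create a fresh group pinned to the newer model, then re-enroll and retrain.
new_group_id = str(uuid.uuid4())
face_client.person_group.create(person_group_id=new_group_id, name=new_group_id, recognition_model='recognition_04')
# ... re-add each person and their faces exactly as above ...
face_client.person_group.train(new_group_id)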
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
To delete the PersonGroup you created in this quickstart, run the following code in your script:
# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))
print()
Next steps
In this quickstart, you learned how to use the Face client library for Python to do basic facial recognition. Next, learn about the different face detection models and how to specify the right model for your use case.
Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.
Note
This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. Complex scenarios like face identification are easier to implement using a language SDK. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.
Prerequisites
- Azure subscription - Create a trial subscription
- Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
- Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, click Go to resource.
- You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
- You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
- PowerShell version 6.0+, or a similar command-line application.
Identify faces
- First, call the Detect API on the source face. This is the face that we'll try to identify from the larger group. Copy the following command to a text editor, insert your own subscription key, and then copy it into a shell window and run it.
curl -v -X POST "https://api.cognitive.azure.cn/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes={string}&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://csdx.blob.core.chinacloudapi.cn/resources/Face/Images/identification1.jpg\"}"
Save the returned face ID string to a temporary location. You'll use it again at the end.
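For reference, the Detect call returns a JSON array with one entry per detected face, roughly of this shape (the ID and coordinates shown are illustrative):
[{"faceId": "c5c24a82-6845-4031-9d5d-978df9175426", "faceRectangle": {"top": 100, "left": 100, "width": 100, "height": 100}}]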
- Next, you'll need to create a LargePersonGroup. This object will store the aggregated face data of several persons. Run the following command, inserting your own subscription key. Optionally, change the group's name and metadata in the request body.
curl -v -X PUT "https://api.cognitive.azure.cn/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
\"name\": \"large-person-group-name\",
\"userData\": \"User-provided data attached to the large person group.\",
\"recognitionModel\": \"recognition_03\"
}"
Save the ID you specified for the created group ({largePersonGroupId}) to a temporary location.
- Next, you'll create the Person objects that belong to the group. Run the following command, inserting your own subscription key and the ID of the LargePersonGroup from the previous step. This command creates a Person named "Family1-Dad".
curl -v -X POST "https://api.cognitive.azure.cn/face/v1.0/largepersongroups/{largePersonGroupId}/persons" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
\"name\": \"Family1-Dad\",
\"userData\": \"User-provided data attached to the person.\"
}"
After you run this command, run it again with different input data to create more **Person** objects: "Family1-Mom", "Family1-Son", "Family1-Daughter", "Family2-Lady", and "Family2-Man".
Save the IDs of each **Person** created; it's important to keep track of which person name has which ID.
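Each Person - Create call returns a small JSON body containing the new person's ID, roughly of this shape (the value shown is illustrative):
{ "personId": "25985303-c537-4467-b41d-bdb45cd95ca1" }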
- Next, you need to detect new faces and associate them with the Person objects that exist. The following command detects a face from the image Family1-Dad.jpg and adds it to the corresponding person. You need to specify the personId as the ID that was returned when you created the "Family1-Dad" Person object. The image name corresponds to the name of the created Person. Also enter the LargePersonGroup ID and your subscription key in the appropriate fields.
curl -v -X POST "https://api.cognitive.azure.cn/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://csdx.blob.core.chinacloudapi.cn/resources/Face/Images/Family1-Dad.jpg\"}"
Then, run the above command again with a different source image and target **Person**. The images available are: *Family1-Dad1.jpg*, *Family1-Dad2.jpg*, *Family1-Mom1.jpg*, *Family1-Mom2.jpg*, *Family1-Son1.jpg*, *Family1-Son2.jpg*, *Family1-Daughter1.jpg*, *Family1-Daughter2.jpg*, *Family2-Lady1.jpg*, *Family2-Lady2.jpg*, *Family2-Man1.jpg*, and *Family2-Man2.jpg*. Be sure that the **Person** whose ID you specify in the API call matches the name of the image file in the request body.
At the end of this step, you should have multiple **Person** objects that each have one or more corresponding faces, detected directly from the provided images.
- Next, train the LargePersonGroup with the current face data. The training operation teaches the model how to associate facial features, sometimes aggregated from multiple source images, with each person. Insert the LargePersonGroup ID and your subscription key before running the command.
curl -v -X POST "https://api.cognitive.azure.cn/face/v1.0/largepersongroups/{largePersonGroupId}/train" -H "Ocp-Apim-Subscription-Key: {subscription key}"
- Now you're ready to call the Identify API, using the source face ID from the first step and the LargePersonGroup ID. Insert these values into the appropriate fields in the request body, and insert your subscription key.
curl -v -X POST "https://api.cognitive.azure.cn/face/v1.0/identify" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
\"largePersonGroupId\": \"INSERT_PERSONGROUP_NAME\",
\"faceIds\": [
\"INSERT_SOURCE_FACE_ID\"
],
\"maxNumOfCandidatesReturned\": 1,
\"confidenceThreshold\": 0.5
}"
The response should give you a **Person** ID indicating the person identified with the source face. It should be the ID that corresponds to the "Family1-Dad" person, because the source face is of that person.
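For reference, the Identify response pairs each queried face ID with its candidate persons, roughly of this shape (the IDs and confidence shown are illustrative):
[{"faceId": "INSERT_SOURCE_FACE_ID", "candidates": [{"personId": "25985303-c537-4467-b41d-bdb45cd95ca1", "confidence": 0.92}]}]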
Clean up resources
To delete the LargePersonGroup you created in this exercise, run the LargePersonGroup - Delete call.
curl -v -X DELETE "https://api.cognitive.azure.cn/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Next steps
In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, learn about the different face detection models and how to specify the right model for your use case.