Quickstart: Use the Face client library

Get started with the Face client library for .NET. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Use the Face client library for .NET to:

  • Detect faces in an image
  • Find similar faces
  • Create and train a person group
  • Identify a face
  • Take a snapshot for data migration

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

Setting up

Create a Face Azure resource

Azure Cognitive Services are represented by Azure resources that you subscribe to. Create a resource for Face using the Azure portal or Azure CLI on your local machine.
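For example, a Face resource in the free tier can be created with the Azure CLI along these lines; the resource name, resource group, and region below are placeholders, so substitute your own values:

az cognitiveservices account create --name my-face-resource --resource-group my-resource-group --kind Face --sku F0 --location westus --yes

The command output includes the resource's endpoint, and you can list its keys afterward with az cognitiveservices account keys list.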

After you get a key from your trial subscription or resource, create environment variables for the key and endpoint URL, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT, respectively.
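For example, on Windows you can set both variables from a command prompt with setx (the values shown are placeholders); restart any open console windows or IDEs afterward so they pick up the new values:

setx FACE_SUBSCRIPTION_KEY "<your-face-key>"
setx FACE_ENDPOINT "<your-face-endpoint>"

On Linux or macOS, use export FACE_SUBSCRIPTION_KEY=<your-face-key> and export FACE_ENDPOINT=<your-face-endpoint> in your shell profile instead.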

Create a new C# application

Create a new .NET Core application in your preferred editor or IDE.

In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console app with the name face-quickstart. This command creates a simple "Hello World" C# project with a single source file: Program.cs.

dotnet new console -n face-quickstart

Change your directory to the newly created app folder. You can build the application with:

dotnet build

The build output should contain no warnings or errors.

...
Build succeeded.
 0 Warning(s)
 0 Error(s)
...

From the project directory, open the Program.cs file in your preferred editor or IDE. Then add the following using directives:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

In the application's Main method, create variables for your resource's Azure endpoint and key.

// From your Face subscription in the Azure portal, get your subscription key and endpoint.
// Set your environment variables using the names below. Close and reopen your project for changes to take effect.
string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_SUBSCRIPTION_KEY");
string ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT");

Install the client library

Within the application directory, install the Face client library for .NET with the following command:

dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --version 2.5.0-preview.1

If you're using the Visual Studio IDE, the client library is available as a downloadable NuGet package.

Object model

The following classes and interfaces handle some of the major features of the Face .NET client library:

Name | Description
FaceClient | This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
FaceOperations | This class handles the basic detection and recognition tasks that you can do with human faces.
DetectedFace | This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face.
FaceListOperations | This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
PersonGroupPersonExtensions | This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
PersonGroupOperations | This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
SnapshotOperations | This class manages the Snapshot functionality. You can use it to temporarily save all of your cloud-based Face data and migrate that data to a new Azure subscription.

Code examples

The code snippets below show you how to do the following tasks with the Face client library for .NET:

Authenticate the client

Note

This quickstart assumes you've created environment variables for your Face key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT.

In a new method, instantiate a client with your endpoint and key. Create an ApiKeyServiceClientCredentials object with your key, and use it with your endpoint to create a FaceClient object.

/*
 *  AUTHENTICATE
 *  Uses subscription key and region to create a client.
 */
public static IFaceClient Authenticate(string endpoint, string key)
{
    return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
}

You'll likely want to call this method in the Main method.

// Authenticate.
IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);

Declare helper fields

The following fields are needed for several of the Face operations you'll add later. At the root of your class, define the following URL string. This URL points to a folder of sample images.

// Used for all examples.
// URL for the images.
const string IMAGE_BASE_URL = "https://csdx.blob.core.windows.net/resources/Face/Images/";

Define strings to point to the different recognition model types. Later on, you'll be able to specify which recognition model you want to use for face detection. See Specify a recognition model for information on these options.

// Used in the Detect Faces and Verify examples.
// Recognition model 2 is used for feature extraction; use model 1 to simply recognize or detect a face.
// However, the API calls to Detection that are used with Verify, Find Similar, or Identify must share the same recognition model.
const string RECOGNITION_MODEL2 = RecognitionModel.Recognition02;
const string RECOGNITION_MODEL1 = RecognitionModel.Recognition01;

Detect faces in an image

Add the following method call to your Main method. You'll define the method next. The final Detect operation will take a FaceClient object, an image URL, and a recognition model.

// Detect - get features from faces.
DetectFaceExtract(client, IMAGE_BASE_URL, RECOGNITION_MODEL2).Wait();

Get detected face objects

In the next block of code, the DetectFaceExtract method detects faces in three of the images at the given URL and creates a list of DetectedFace objects in program memory. The list of FaceAttributeType values specifies which features to extract.

/* 
 * DETECT FACES
 * Detects features from faces and IDs them.
 */
public static async Task DetectFaceExtract(IFaceClient client, string url, string recognitionModel)
{
    Console.WriteLine("========DETECT FACES========");
    Console.WriteLine();

    // Create a list of images
    List<string> imageFileNames = new List<string>
                    {
                        "detection1.jpg",    // single female with glasses
                        // "detection2.jpg", // (optional: single man)
                        // "detection3.jpg", // (optional: single male construction worker)
                        // "detection4.jpg", // (optional: 3 people at cafe, 1 is blurred)
                        "detection5.jpg",    // family, woman child man
                        "detection6.jpg"     // elderly couple, male female
                    };

    foreach (var imageFileName in imageFileNames)
    {
        IList<DetectedFace> detectedFaces;

        // Detect faces with all attributes from image url.
        detectedFaces = await client.Face.DetectWithUrlAsync($"{url}{imageFileName}",
                returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.Accessories, FaceAttributeType.Age,
                FaceAttributeType.Blur, FaceAttributeType.Emotion, FaceAttributeType.Exposure, FaceAttributeType.FacialHair,
                FaceAttributeType.Gender, FaceAttributeType.Glasses, FaceAttributeType.Hair, FaceAttributeType.HeadPose,
                FaceAttributeType.Makeup, FaceAttributeType.Noise, FaceAttributeType.Occlusion, FaceAttributeType.Smile },
                recognitionModel: recognitionModel);

        Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{imageFileName}`.");

Display detected face data

The rest of the DetectFaceExtract method parses and prints the attribute data for each detected face. Each attribute must be specified separately in the original face detection API call (in the FaceAttributeType list). The following code processes every attribute, but you will likely only need to use one or a few.

// Parse and print all attributes of each detected face.
        foreach (var face in detectedFaces)
        {
            Console.WriteLine($"Face attributes for {imageFileName}:");

            // Get bounding box of the faces
            Console.WriteLine($"Rectangle(Left/Top/Width/Height) : {face.FaceRectangle.Left} {face.FaceRectangle.Top} {face.FaceRectangle.Width} {face.FaceRectangle.Height}");

            // Get accessories of the faces
            List<Accessory> accessoriesList = (List<Accessory>)face.FaceAttributes.Accessories;
            int count = face.FaceAttributes.Accessories.Count;
            string accessory; string[] accessoryArray = new string[count];
            if (count == 0) { accessory = "NoAccessories"; }
            else
            {
                for (int i = 0; i < count; ++i) { accessoryArray[i] = accessoriesList[i].Type.ToString(); }
                accessory = string.Join(",", accessoryArray);
            }
            Console.WriteLine($"Accessories : {accessory}");

            // Get face other attributes
            Console.WriteLine($"Age : {face.FaceAttributes.Age}");
            Console.WriteLine($"Blur : {face.FaceAttributes.Blur.BlurLevel}");

            // Get emotion on the face
            string emotionType = string.Empty;
            double emotionValue = 0.0;
            Emotion emotion = face.FaceAttributes.Emotion;
            if (emotion.Anger > emotionValue) { emotionValue = emotion.Anger; emotionType = "Anger"; }
            if (emotion.Contempt > emotionValue) { emotionValue = emotion.Contempt; emotionType = "Contempt"; }
            if (emotion.Disgust > emotionValue) { emotionValue = emotion.Disgust; emotionType = "Disgust"; }
            if (emotion.Fear > emotionValue) { emotionValue = emotion.Fear; emotionType = "Fear"; }
            if (emotion.Happiness > emotionValue) { emotionValue = emotion.Happiness; emotionType = "Happiness"; }
            if (emotion.Neutral > emotionValue) { emotionValue = emotion.Neutral; emotionType = "Neutral"; }
            if (emotion.Sadness > emotionValue) { emotionValue = emotion.Sadness; emotionType = "Sadness"; }
            if (emotion.Surprise > emotionValue) { emotionType = "Surprise"; }
            Console.WriteLine($"Emotion : {emotionType}");

            // Get more face attributes
            Console.WriteLine($"Exposure : {face.FaceAttributes.Exposure.ExposureLevel}");
            Console.WriteLine($"FacialHair : {string.Format("{0}", face.FaceAttributes.FacialHair.Moustache + face.FaceAttributes.FacialHair.Beard + face.FaceAttributes.FacialHair.Sideburns > 0 ? "Yes" : "No")}");
            Console.WriteLine($"Gender : {face.FaceAttributes.Gender}");
            Console.WriteLine($"Glasses : {face.FaceAttributes.Glasses}");

            // Get hair color
            Hair hair = face.FaceAttributes.Hair;
            string color = null;
            if (hair.HairColor.Count == 0) { if (hair.Invisible) { color = "Invisible"; } else { color = "Bald"; } }
            HairColorType returnColor = HairColorType.Unknown;
            double maxConfidence = 0.0f;
            foreach (HairColor hairColor in hair.HairColor)
            {
                if (hairColor.Confidence <= maxConfidence) { continue; }
                maxConfidence = hairColor.Confidence; returnColor = hairColor.Color; color = returnColor.ToString();
            }
            Console.WriteLine($"Hair : {color}");

            // Get more attributes
            Console.WriteLine($"HeadPose : {string.Format("Pitch: {0}, Roll: {1}, Yaw: {2}", Math.Round(face.FaceAttributes.HeadPose.Pitch, 2), Math.Round(face.FaceAttributes.HeadPose.Roll, 2), Math.Round(face.FaceAttributes.HeadPose.Yaw, 2))}");
            Console.WriteLine($"Makeup : {string.Format("{0}", (face.FaceAttributes.Makeup.EyeMakeup || face.FaceAttributes.Makeup.LipMakeup) ? "Yes" : "No")}");
            Console.WriteLine($"Noise : {face.FaceAttributes.Noise.NoiseLevel}");
            Console.WriteLine($"Occlusion : {string.Format("EyeOccluded: {0}", face.FaceAttributes.Occlusion.EyeOccluded ? "Yes" : "No")} " +
                $" {string.Format("ForeheadOccluded: {0}", face.FaceAttributes.Occlusion.ForeheadOccluded ? "Yes" : "No")}   {string.Format("MouthOccluded: {0}", face.FaceAttributes.Occlusion.MouthOccluded ? "Yes" : "No")}");
            Console.WriteLine($"Smile : {face.FaceAttributes.Smile}");
            Console.WriteLine();
        }
    }
}

Find similar faces

The following code takes a single detected face (source) and searches a set of other faces (target) to find matches. When it finds a match, it prints the ID of the matched face to the console.

Detect faces for comparison

First, define a second face detection method. You need to detect faces in images before you can compare them, and this detection method is optimized for comparison operations. It doesn't extract detailed face attributes like in the section above, and it uses a different recognition model.

private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognitionModel)
{
    // Detect faces from the image URL. Because this call is used for recognition only, recognition model 1 is sufficient.
    IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognitionModel);
    Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}`");
    return detectedFaces.ToList();
}

Find matches

The following method detects faces in a set of target images and in a single source image. Then, it compares them and finds all the target images that are similar to the source image.

/*
 * FIND SIMILAR
 * This example will take an image and find a similar one to it in another image.
 */
public static async Task FindSimilar(IFaceClient client, string url, string RECOGNITION_MODEL1)
{
    Console.WriteLine("========FIND SIMILAR========");
    Console.WriteLine();

    List<string> targetImageFileNames = new List<string>
                        {
                            "Family1-Dad1.jpg",
                            "Family1-Daughter1.jpg",
                            "Family1-Mom1.jpg",
                            "Family1-Son1.jpg",
                            "Family2-Lady1.jpg",
                            "Family2-Man1.jpg",
                            "Family3-Lady1.jpg",
                            "Family3-Man1.jpg"
                        };

    string sourceImageFileName = "findsimilar.jpg";
    IList<Guid?> targetFaceIds = new List<Guid?>();
    foreach (var targetImageFileName in targetImageFileNames)
    {
        // Detect faces from target image url.
        var faces = await DetectFaceRecognize(client, $"{url}{targetImageFileName}", RECOGNITION_MODEL1);
        // Add detected faceId to list of GUIDs.
        targetFaceIds.Add(faces[0].FaceId.Value);
    }

    // Detect faces from source image url.
    IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", RECOGNITION_MODEL1);
    Console.WriteLine();

    // Find similar faces in the list of IDs. Comparing only the first face in the list for testing purposes.
    IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);

The following code prints the match details to the console:

foreach (var similarResult in similarResults)
{
    Console.WriteLine($"Faces from {sourceImageFileName} & ID:{similarResult.FaceId} are similar with confidence: {similarResult.Confidence}.");
}
Console.WriteLine();

Identify a face

The Identify operation takes an image of a person (or multiple people) and looks to find the identity of each face in the image. It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known. In order to do the Identify operation, you first need to create and train a PersonGroup.

Create and train a person group

The following code creates a PersonGroup with six different Person objects. It associates each Person with a set of example images, and then it trains to recognize each person by their facial characteristics. Person and PersonGroup objects are used in the Verify, Identify, and Group operations.

Create PersonGroup

Declare a string variable at the root of your class to represent the ID of the PersonGroup you'll create.

static string sourcePersonGroup = null;

In a new method, add the following code. This method will carry out the Identify operation. The first block of code associates the names of persons with their example images.
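Before that first block, here is a sketch of what the enclosing method might look like; the method name IdentifyInPersonGroup and its parameter list are illustrative rather than taken from the original sample:

// Sketch of a wrapper method; the snippets in this section go inside it.
public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
{
    // Code from the following snippets goes here.
}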

// Create a dictionary for all your images, grouping similar ones under the same key.
Dictionary<string, string[]> personDictionary =
    new Dictionary<string, string[]>
        { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
          { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
          { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } },
          { "Family1-Daughter", new[] { "Family1-Daughter1.jpg", "Family1-Daughter2.jpg" } },
          { "Family2-Lady", new[] { "Family2-Lady1.jpg", "Family2-Lady2.jpg" } },
          { "Family2-Man", new[] { "Family2-Man1.jpg", "Family2-Man2.jpg" } }
        };
// A group photo that includes some of the persons you seek to identify from your dictionary.
string sourceImageFileName = "identification1.jpg";

Next, add the following code to create a Person object for each person in the Dictionary and add the face data from the appropriate images. Each Person object is associated with the same PersonGroup through its unique ID string. Remember to pass the variables client, url, and RECOGNITION_MODEL1 into this method.

// Create a person group. 
string personGroupId = Guid.NewGuid().ToString();
sourcePersonGroup = personGroupId; // This is solely for the snapshot operations example
Console.WriteLine($"Create a person group ({personGroupId}).");
await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
// The similar faces will be grouped into a single person group person.
foreach (var groupedFace in personDictionary.Keys)
{
    // Limit TPS
    await Task.Delay(250);
    Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
    Console.WriteLine($"Create a person group person '{groupedFace}'.");

    // Add face to the person group person.
    foreach (var similarImage in personDictionary[groupedFace])
    {
        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
        PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
            $"{url}{similarImage}", similarImage);
    }
}

Train PersonGroup

Once you've extracted face data from your images and sorted it into different Person objects, you must train the PersonGroup to identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the results, printing the status to the console.

// Start to train the person group.
Console.WriteLine();
Console.WriteLine($"Train person group {personGroupId}.");
await client.PersonGroup.TrainAsync(personGroupId);

// Wait until the training is completed.
while (true)
{
    await Task.Delay(1000);
    var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
    Console.WriteLine($"Training status: {trainingStatus.Status}.");
    if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
}

This PersonGroup and its associated Person objects are now ready to be used in the Verify, Identify, or Group operations.

Get a test image

Notice that the code for Create and train a person group defines a variable sourceImageFileName. This variable corresponds to the source image, which contains the people to identify.

Identify faces

The following code takes the source image and creates a list of all the faces detected in the image. These are the faces that will be identified against the PersonGroup.

List<Guid> sourceFaceIds = new List<Guid>();
// Detect faces from source image url.
List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel: recognitionModel);

// Add detected faceId to sourceFaceIds.
foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }

The next code snippet calls the IdentifyAsync operation and prints the results to the console. Here, the service attempts to match each face from the source image to a Person in the given PersonGroup. This closes out your Identify method.

// Identify the faces in a person group. 
var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);

foreach (var identifyResult in identifyResults)
{
    Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
    Console.WriteLine($"Person '{person.Name}' is identified for face in: {sourceImageFileName} - {identifyResult.FaceId}," +
        $" confidence: {identifyResult.Candidates[0].Confidence}.");
}
Console.WriteLine();

Take a snapshot for data migration

The Snapshots feature lets you move your saved Face data, such as a trained PersonGroup, to a different Azure Cognitive Services Face subscription. You may want to use this feature if, for example, you've created a PersonGroup object using a free trial subscription and want to migrate it to a paid subscription. See Migrate your face data for an overview of the Snapshots feature.

In this example, you will migrate the PersonGroup you created in Create and train a person group. You can either complete that section first, or create your own Face data construct(s) to migrate.

Set up target subscription

First, you must have a second Azure subscription with a Face resource; you can do this by following the steps in the Setting up section.

Then, define the following variables in the Main method of your program. You'll need to create new environment variables for the subscription ID of your Azure account, as well as the key, endpoint, and subscription ID of your new (target) account.

// The Snapshot example needs its own 2nd client, since it uses two different regions.
string TARGET_SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_SUBSCRIPTION_KEY2");
string TARGET_ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT2");
// Grab your subscription ID, from any resource in Azure, from the Overview page (all resources have the same subscription ID). 
Guid AZURE_SUBSCRIPTION_ID = new Guid(Environment.GetEnvironmentVariable("AZURE_SUBSCRIPTION_ID"));
// Target subscription ID. It will be the same as the source ID if you created your Face resources from the same
// subscription (but are moving from region to region). If they are different subscriptions, add the other
// target ID here.
Guid TARGET_AZURE_SUBSCRIPTION_ID = new Guid(Environment.GetEnvironmentVariable("AZURE_SUBSCRIPTION_ID"));

For this example, declare a variable for the ID of the target PersonGroup, the object that belongs to the new subscription and that you will copy your data to.

static string targetPersonGroup = null;

Authenticate target client

Next, add the code to authenticate your secondary Face subscription.

// Authenticate for another region or subscription (used in Snapshot only).
IFaceClient clientTarget = Authenticate(TARGET_ENDPOINT, TARGET_SUBSCRIPTION_KEY);

Use a snapshot

The rest of the snapshot operations must take place within an asynchronous method.
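Once that method is assembled from the numbered steps below, you might call it from Main along these lines; this is a sketch that assumes the client, clientTarget, and subscription ID variables defined earlier, plus the sourcePersonGroup ID set by the Identify example:

// Snapshot operations: copy the person group to the target subscription (sketch).
Snapshot(client, clientTarget, sourcePersonGroup, AZURE_SUBSCRIPTION_ID, TARGET_AZURE_SUBSCRIPTION_ID).Wait();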

  1. The first step is to take the snapshot, which saves your original subscription's face data to a temporary cloud location. This method returns an ID that you use to query the status of the operation.

        /*
     * SNAPSHOT OPERATIONS
     * Copies a person group from one Azure region (or subscription) to another. For example: from the ChinaEast2 region to the WestUS.
     * The same process can be used for face lists. 
     * NOTE: the person group in the target region has a new person group ID, so it no longer associates with the source person group.
     */
    public static async Task Snapshot(IFaceClient clientSource, IFaceClient clientTarget, string personGroupId, Guid azureId, Guid targetAzureId)
    {
        Console.WriteLine("========SNAPSHOT OPERATIONS========");
        Console.WriteLine();
    
        // Take a snapshot for the person group that was previously created in your source region.
        var takeSnapshotResult = await clientSource.Snapshot.TakeAsync(SnapshotObjectType.PersonGroup, personGroupId, new[] { azureId }); // add targetAzureId to this array if your target ID is different from your source ID.
    
        // Get operation id from response for tracking the progress of snapshot taking.
        var operationId = Guid.Parse(takeSnapshotResult.OperationLocation.Split('/')[2]);
        Console.WriteLine($"Taking snapshot(operation ID: {operationId})... Started");
    
  2. Next, query the ID until the operation has completed.

        // Wait for taking the snapshot to complete.
    OperationStatus operationStatus = null;
    do
    {
        Thread.Sleep(TimeSpan.FromMilliseconds(1000));
        // Get the status of the operation.
        operationStatus = await clientSource.Snapshot.GetOperationStatusAsync(operationId);
        Console.WriteLine($"Operation Status: {operationStatus.Status}");
    }
    while (operationStatus.Status != OperationStatusType.Succeeded && operationStatus.Status != OperationStatusType.Failed);
    // Confirm the location of the resource where the snapshot is taken and its snapshot ID
    var snapshotId = Guid.Parse(operationStatus.ResourceLocation.Split('/')[2]);
    Console.WriteLine($"Source region snapshot ID: {snapshotId}");
    Console.WriteLine($"Taking snapshot of person group: {personGroupId}... Done\n");
    
  3. Then use the apply operation to write your face data to your target subscription. This method also returns an ID value.

        // Apply the snapshot in target region, with a new ID.
    var newPersonGroupId = Guid.NewGuid().ToString();
    targetPersonGroup = newPersonGroupId;
    
    try
    {
        var applySnapshotResult = await clientTarget.Snapshot.ApplyAsync(snapshotId, newPersonGroupId);
    
        // Get operation id from response for tracking the progress of snapshot applying.
        var applyOperationId = Guid.Parse(applySnapshotResult.OperationLocation.Split('/')[2]);
        Console.WriteLine($"Applying snapshot(operation ID: {applyOperationId})... Started");
    
  4. Again, query the new ID until the operation has completed.

        // Wait for the snapshot apply operation to complete.
        OperationStatus applyOperationStatus = null;
        do
        {
            Thread.Sleep(TimeSpan.FromMilliseconds(1000));
            // Get the status of the operation.
            applyOperationStatus = await clientTarget.Snapshot.GetOperationStatusAsync(applyOperationId);
            Console.WriteLine($"Operation Status: {applyOperationStatus.Status}");
        }
        while (applyOperationStatus.Status != OperationStatusType.Succeeded && applyOperationStatus.Status != OperationStatusType.Failed);
        // The snapshot has been applied; the target region now contains the person group under the new ID.
        Console.WriteLine($"Applying snapshot to new person group ID {newPersonGroupId} in target region... Done\n");
    }
    
  5. Finally, complete the try/catch block and finish the method.

        catch (Exception e)
        {
            throw new ApplicationException("Do you have a second Face resource in Azure? " +
                "It's needed to transfer the person group to it for the Snapshot example.", e);
        }
    }
    

At this point, your new PersonGroup object should have the same data as the original one and should be accessible from your new (target) Azure Face subscription.
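If you'd like to confirm the migration, one optional check (not part of the original sample) is to fetch the new group with the target client from inside the same asynchronous method:

// Optional check: confirm the migrated person group is visible in the target subscription.
PersonGroup migratedGroup = await clientTarget.PersonGroup.GetAsync(newPersonGroupId);
Console.WriteLine($"Person group '{migratedGroup.Name}' is available in the target subscription.");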

Run the application

Run the application from your application directory with the dotnet run command.

dotnet run

Clean up resources

If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your program:

// At end, delete person groups in both regions (since testing only)
Console.WriteLine("========DELETE PERSON GROUP========");
Console.WriteLine();
DeletePersonGroup(client, sourcePersonGroup).Wait();

Define the deletion method with the following code:

/*
 * DELETE PERSON GROUP
 * After this entire example is executed, delete the person group in your Azure account,
 * otherwise you cannot recreate one with the same name (if running example repeatedly).
 */
public static async Task DeletePersonGroup(IFaceClient client, String personGroupId)
{
    await client.PersonGroup.DeleteAsync(personGroupId);
    Console.WriteLine($"Deleted the person group {personGroupId}.");
}

Additionally, if you migrated data using the Snapshot feature in this quickstart, you'll also need to delete the PersonGroup saved to the target subscription.

DeletePersonGroup(clientTarget, targetPersonGroup).Wait();
Console.WriteLine();

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic tasks. Next, explore the reference documentation to learn more about the library.

Get started with the Face client library for Python. Follow these steps to install the package and try out the example code for basic tasks. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Use the Face client library for Python to:

  • Detect faces in an image
  • Find similar faces
  • Create and train a person group
  • Identify a face
  • Verify faces
  • Take a snapshot for data migration

Reference documentation | Library source code | Samples

Prerequisites

Setting up

Create a Face Azure resource

Azure Cognitive Services are represented by Azure resources that you subscribe to. Create a resource for Face using the Azure portal or Azure CLI on your local machine.

After you get a key from your trial subscription or resource, create environment variables for the key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT, respectively.

Create a new Python application

Create a new Python script, for example quickstart-file.py. Then open it in your preferred editor or IDE and import the following libraries.

import asyncio
import io
import glob
import os
import sys
import time
import uuid
import requests
from urllib.parse import urlparse
from io import BytesIO
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, SnapshotObjectType, OperationStatusType

Then, create variables for your resource's Azure endpoint and key.

# Set the FACE_SUBSCRIPTION_KEY environment variable with your key as the value.
# This key will serve all examples in this document.
KEY = os.environ['FACE_SUBSCRIPTION_KEY']

# Set the FACE_ENDPOINT environment variable with the endpoint from your Face service in Azure.
# This endpoint will be used in all examples in this quickstart.
ENDPOINT = os.environ['FACE_ENDPOINT']

Note

If you created the environment variable after you launched the application, you will need to close and reopen the editor, IDE, or shell running it to access the variable.

Install the client library

You can install the client library with:

pip install --upgrade azure-cognitiveservices-vision-face

Object model

The following classes and interfaces handle some of the major features of the Face Python client library.

Name | Description
FaceClient | This class represents your authorization to use the Face service, and you need it for all Face functionality. You instantiate it with your subscription information, and you use it to produce instances of other classes.
FaceOperations | This class handles the basic detection and recognition tasks that you can do with human faces.
DetectedFace | This class represents all of the data that was detected from a single face in an image. You can use it to retrieve detailed information about the face.
FaceListOperations | This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
PersonGroupPersonOperations | This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
PersonGroupOperations | This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
SnapshotOperations | This class manages the Snapshot functionality; you can use it to temporarily save all of your cloud-based face data and migrate that data to a new Azure subscription.

Code examples

These code snippets show you how to do the following tasks with the Face client library for Python:

Authenticate the client

Note

This quickstart assumes you've created an environment variable for your Face key, named FACE_SUBSCRIPTION_KEY.

Instantiate a client with your endpoint and key. Create a CognitiveServicesCredentials object with your key, and use it with your endpoint to create a FaceClient object.

# Create an authenticated FaceClient.
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

Detect faces in an image

The following code detects a face in a remote image. It prints the detected face's ID to the console and also stores it in program memory. Then, it detects the faces in an image with multiple people and prints their IDs to the console as well. By changing the parameters in the detect_with_url method, you can return different information with each DetectedFace object.

# Detect a face in an image that contains a single face
single_face_image_url = 'https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg'
single_image_name = os.path.basename(single_face_image_url)
detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
if not detected_faces:
    raise Exception('No face detected from image {}'.format(single_image_name))

# Display the detected face ID in the first single-face image.
# Face IDs are used for comparison to faces (their IDs) detected in other images.
print('Detected face ID from', single_image_name, ':')
for face in detected_faces: print (face.face_id)
print()

# Save this ID for use in Find Similar
first_image_face_ID = detected_faces[0].face_id

See the sample code on GitHub for more detection scenarios.
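As noted above, changing the parameters of detect_with_url returns different information with each DetectedFace object. For instance, to request extra attributes along with each detection, pass the return_face_attributes parameter; the attribute names below are examples, and the full list is in the SDK reference:

# Example: request age and emotion attributes with the detection results.
detected_faces_attr = face_client.face.detect_with_url(
    url=single_face_image_url,
    return_face_attributes=['age', 'emotion'])
for face in detected_faces_attr:
    print('Age of face {}: {}'.format(face.face_id, face.face_attributes.age))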

Display and frame faces

The following code outputs the given image to the display and draws rectangles around the faces, using the DetectedFace.faceRectangle property.

# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
if not detected_faces:
    raise Exception('No face detected from image {}'.format(single_image_name))

# Convert width height to a point in a rectangle
def getRectangle(faceDictionary):
    rect = faceDictionary.face_rectangle
    left = rect.left
    top = rect.top
    right = left + rect.width
    bottom = top + rect.height
    
    return ((left, top), (right, bottom))


# Download the image from the url
response = requests.get(single_face_image_url)
img = Image.open(BytesIO(response.content))

# For each face returned use the face rectangle and draw a red box.
print('Drawing rectangle around face... see popup for results.')
draw = ImageDraw.Draw(img)
for face in detected_faces:
    draw.rectangle(getRectangle(face), outline='red')

# Display the image in the users default image browser.
img.show()

A young woman with a red rectangle drawn around her face

Find similar faces

The following code takes a single detected face and searches a set of other faces to find matches. When it finds a match, it prints the rectangle coordinates of the matched face to the console.

Find matches

First, run the code in the above section (Detect faces in an image) to save a reference to a single face. Then run the following code to get references to several faces in a group image.

# Detect the faces in an image that contains multiple faces
# Each detected face gets assigned a new ID
multi_face_image_url = "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
multi_image_name = os.path.basename(multi_face_image_url)
detected_faces2 = face_client.face.detect_with_url(url=multi_face_image_url)

Then add the following code block to find instances of the first face in the group. See the find_similar method to learn how to modify this behavior.

# Search through faces detected in group image for the single face from first image.
# First, create a list of the face IDs found in the second image.
second_image_face_IDs = list(map(lambda x: x.face_id, detected_faces2))
# Next, find similar face IDs like the one detected in the first image.
similar_faces = face_client.face.find_similar(face_id=first_image_face_ID, face_ids=second_image_face_IDs)
if not similar_faces:
    print('No similar faces found in', multi_image_name, '.')

Use the following code to print the match details to the console.

# Print the details of the similar faces detected
print('Similar faces found in', multi_image_name + ':')
for face in similar_faces:
    first_image_face_ID = face.face_id
    # The similar face IDs of the single face image and the group image do not need to match, 
    # they are only used for identification purposes in each image.
    # The similar faces are matched using the Cognitive Services algorithm in find_similar().
    face_info = next(x for x in detected_faces2 if x.face_id == first_image_face_ID)
    if face_info:
        print('  Face ID: ', first_image_face_ID)
        print('  Face rectangle:')
        print('    Left: ', str(face_info.face_rectangle.left))
        print('    Top: ', str(face_info.face_rectangle.top))
        print('    Width: ', str(face_info.face_rectangle.width))
        print('    Height: ', str(face_info.face_rectangle.height))

Create and train a person group

The following code creates a PersonGroup with three different Person objects. It associates each Person with a set of example images, and then it trains to be able to recognize each person.

Create PersonGroup

To step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

This group of images contains three sets of face images corresponding to three different people. The code will define three Person objects and associate them with image files that start with woman, man, and child.

Once you've set up your images, define a label at the top of your script for the PersonGroup object you'll create.

# Used in the Person Group Operations,  Snapshot Operations, and Delete Person Group examples.
# You can call list_person_groups to print a list of preexisting PersonGroups.
# SOURCE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
PERSON_GROUP_ID = 'my-unique-person-group'

# Used for the Snapshot and Delete Person Group examples.
TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)

Then add the following code to the bottom of your script. This code creates a PersonGroup and three Person objects.

'''
Create the PersonGroup
'''
# Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
print('Person group:', PERSON_GROUP_ID)
face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID)

# Define woman friend
woman = face_client.person_group_person.create(PERSON_GROUP_ID, "Woman")
# Define man friend
man = face_client.person_group_person.create(PERSON_GROUP_ID, "Man")
# Define child friend
child = face_client.person_group_person.create(PERSON_GROUP_ID, "Child")

Assign faces to Persons

The following code sorts your images by their prefix, detects faces, and assigns the faces to each Person object.

'''
Detect faces and register to correct person
'''
# Find all jpeg images of friends in working directory
woman_images = [file for file in glob.glob('*.jpg') if file.startswith("woman")]
man_images = [file for file in glob.glob('*.jpg') if file.startswith("man")]
child_images = [file for file in glob.glob('*.jpg') if file.startswith("child")]

# Add to a woman person
for image in woman_images:
    w = open(image, 'r+b')
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, woman.person_id, w)

# Add to a man person
for image in man_images:
    m = open(image, 'r+b')
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, man.person_id, m)

# Add to a child person
for image in child_images:
    ch = open(image, 'r+b')
    face_client.person_group_person.add_face_from_stream(PERSON_GROUP_ID, child.person_id, ch)

Train PersonGroup

Once you've assigned faces, you must train the PersonGroup so that it can identify the visual features associated with each of its Person objects. The following code calls the asynchronous train method and polls the result, printing the status to the console.

'''
Train PersonGroup
'''
print()
print('Training the person group...')
# Train the person group
face_client.person_group.train(PERSON_GROUP_ID)

while (True):
    training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
    print("Training status: {}.".format(training_status.status))
    print()
    if (training_status.status is TrainingStatusType.succeeded):
        break
    elif (training_status.status is TrainingStatusType.failed):
        sys.exit('Training the person group has failed.')
    time.sleep(5)

Identify a face

The following code takes an image with multiple faces and looks to find the identity of each person in the image. It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.

Important

In order to run this example, you must first run the code in Create and train a person group.

Get a test image

The following code looks in the root of your project for an image test-image-person-group.jpg and detects the faces in the image. You can find this image with the images used for PersonGroup management: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

'''
Identify a face against a defined PersonGroup
'''
# Group image for testing against
group_photo = 'test-image-person-group.jpg'
IMAGES_FOLDER = os.path.join(os.path.dirname(os.path.realpath(__file__)))
# Get test image
test_image_array = glob.glob(os.path.join(IMAGES_FOLDER, group_photo))
image = open(test_image_array[0], 'r+b')

# Detect faces
face_ids = []
faces = face_client.face.detect_with_stream(image)
for face in faces:
    face_ids.append(face.face_id)

Identify faces

The identify method takes an array of detected faces and compares them to a PersonGroup. If it can match a detected face to a Person, it saves the result. This code prints detailed match results to the console.

# Identify faces
results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
print('Identifying faces in {}'.format(os.path.basename(image.name)))
if not results:
    print('No person identified in the person group for faces from {}.'.format(os.path.basename(image.name)))
for person in results:
    print('Person for face ID {} is identified in {} with a confidence of {}.'.format(person.face_id, os.path.basename(image.name), person.candidates[0].confidence)) # Get topmost confidence score

Verify faces

The Verify operation takes a face ID and either another face ID or a Person object, and it determines whether they belong to the same person.

The following code detects faces in two source images and then verifies them against a face detected from a target image.

Get test images

The following code blocks declare variables that will point to the source and target images for the verification operation.

# Base url for the Verify and Facelist/Large Facelist operations
IMAGE_BASE_URL = 'https://csdx.blob.core.windows.net/resources/Face/Images/'
# Create a list to hold the target photos of the same person
target_image_file_names = ['Family1-Dad1.jpg', 'Family1-Dad2.jpg']
# The source photos contain this person
source_image_file_name1 = 'Family1-Dad3.jpg'
source_image_file_name2 = 'Family1-Son1.jpg'

Detect faces for verification

The following code detects faces in the source and target images and saves them to variables.

# Detect face(s) from source image 1, returns a list[DetectedFaces]
detected_faces1 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name1)
# Add the returned face's face ID
source_image1_id = detected_faces1[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces1), source_image_file_name1))

# Detect face(s) from source image 2, returns a list[DetectedFaces]
detected_faces2 = face_client.face.detect_with_url(IMAGE_BASE_URL + source_image_file_name2)
# Add the returned face's face ID
source_image2_id = detected_faces2[0].face_id
print('{} face(s) detected from image {}.'.format(len(detected_faces2), source_image_file_name2))

# List for the target face IDs (uuids)
detected_faces_ids = []
# Detect faces from target image url list, returns a list[DetectedFaces]
for image_file_name in target_image_file_names:
    detected_faces = face_client.face.detect_with_url(IMAGE_BASE_URL + image_file_name)
    # Add the returned face's face ID
    detected_faces_ids.append(detected_faces[0].face_id)
    print('{} face(s) detected from image {}.'.format(len(detected_faces), image_file_name))

Get verification results

The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.

# Verification example for faces of the same person. The higher the confidence, the more identical the faces in the images are.
# Since target faces are the same person, in this example, we can use the 1st ID in the detected_faces_ids list to compare.
verify_result_same = face_client.face.verify_face_to_face(source_image1_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
    .format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence)
    if verify_result_same.is_identical
    else 'Faces from {} & {} are of a different person, with confidence: {}'
        .format(source_image_file_name1, target_image_file_names[0], verify_result_same.confidence))

# Verification example for faces of different persons.
# Since target faces are same person, in this example, we can use the 1st ID in the detected_faces_ids list to compare.
verify_result_diff = face_client.face.verify_face_to_face(source_image2_id, detected_faces_ids[0])
print('Faces from {} & {} are of the same person, with confidence: {}'
    .format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence)
    if verify_result_diff.is_identical
    else 'Faces from {} & {} are of a different person, with confidence: {}'
        .format(source_image_file_name2, target_image_file_names[0], verify_result_diff.confidence))
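As noted at the start of this section, Verify can also compare a face to a Person object rather than to another face ID. A minimal sketch, assuming you've already run the Create and train a person group section so that PERSON_GROUP_ID and the man Person exist:

# Sketch: verify a detected face against a trained Person in the person group.
verify_to_person_result = face_client.face.verify_face_to_person(
    face_id=source_image1_id,
    person_group_id=PERSON_GROUP_ID,
    person_id=man.person_id)
print('Face and person are identical: {}, confidence: {}'.format(
    verify_to_person_result.is_identical, verify_to_person_result.confidence))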

Take a snapshot for data migration

The Snapshots feature lets you move your saved face data, such as a trained PersonGroup, to a different Azure Cognitive Services Face subscription. You may want to use this feature if, for example, you've created a PersonGroup object using a free trial subscription and now want to migrate it to a paid subscription. See Migrate your face data for a broad overview of the Snapshots feature.

In this example, you will migrate the PersonGroup you created in Create and train a person group. You can either complete that section first, or use your own Face data construct(s).

Set up target subscription

First, you must have a second Azure subscription with a Face resource; you can do this by following the steps in the Setting up section.

Then, create the following variables near the top of your script. You'll also need to create new environment variables for the subscription ID of your Azure account, as well as the key, endpoint, and subscription ID of your new (target) account.

'''
Snapshot operations variables
These are only used for the snapshot example. Set your environment variables accordingly.
'''
# Source endpoint, the location/subscription where the original person group is located.
SOURCE_ENDPOINT = ENDPOINT
# Source subscription key. Must match the source endpoint region.
SOURCE_KEY = os.environ['FACE_SUBSCRIPTION_KEY']
# Source subscription ID. Found in the Azure portal in the Overview page of your Face (or any) resource.
SOURCE_ID = os.environ['AZURE_SUBSCRIPTION_ID']
# Person group name that will get created in this quickstart's Person Group Operations example.
SOURCE_PERSON_GROUP_ID = PERSON_GROUP_ID
# Target endpoint. This is your 2nd Face subscription.
TARGET_ENDPOINT = os.environ['FACE_ENDPOINT2']
# Target subscription key. Must match the target endpoint region.
TARGET_KEY = os.environ['FACE_SUBSCRIPTION_KEY2']
# Target subscription ID. It will be the same as the source ID if you created the Face resources in the same
# subscription (but are moving them from region to region). If they are different subscriptions, add the target subscription ID here.
TARGET_ID = os.environ['AZURE_SUBSCRIPTION_ID']
# NOTE: We do not need to specify the target PersonGroup ID here because we generate it with this example.
# Each new location you transfer a person group to will have a generated, new person group ID for that region.

对目标客户端进行身份验证Authenticate target client

稍后需要在脚本中将当前客户端对象保存为源客户端,然后对目标订阅的新客户端对象进行身份验证。Later in your script, save your current client object as the source client, and then authenticate a new client object for your target subscription.

'''
Authenticate
'''
# Use your source client already created (it has the person group ID you need in it).
face_client_source = face_client
# Create a new FaceClient instance for your target with authentication.
face_client_target = FaceClient(TARGET_ENDPOINT, CognitiveServicesCredentials(TARGET_KEY))

使用快照Use a snapshot

剩余的快照操作将在异步函数中进行。The rest of the snapshot operations take place within an asynchronous function.

  1. 第一步是创建快照,以将原始订阅的人脸数据保存到临时云位置。The first step is to take the snapshot, which saves your original subscription's face data to a temporary cloud location. 此方法返回用于查询操作状态的 ID。This method returns an ID that you use to query the status of the operation.

    '''
    Snapshot operations in 4 steps
    '''
    async def run():
        # STEP 1, take a snapshot of your person group, then track status.
        # This list must include all subscription IDs from which you want to access the snapshot.
        source_list = [SOURCE_ID, TARGET_ID]
        # You may have many sources, if transferring from many regions
        # remove any duplicates from the list. Passing the same subscription ID more than once causes
        # the Snapshot.take operation to fail.
        source_list = list(dict.fromkeys(source_list))
    
        # Note Snapshot.take is not asynchronous.
        # For information about Snapshot.take see:
        # https://github.com/Azure/azure-sdk-for-python/blob/master/azure-cognitiveservices-vision-face/azure/cognitiveservices/vision/face/operations/snapshot_operations.py#L36
        take_snapshot_result = face_client_source.snapshot.take(
            type=SnapshotObjectType.person_group,
            object_id=PERSON_GROUP_ID,
            apply_scope=source_list,
            # Set this to tell Snapshot.take to return the response; otherwise it returns None.
            raw=True
            )
        # Get operation ID from response for tracking
        # The Snapshot.take return value (with raw=True) is of type msrest.pipeline.ClientRawResponse. See:
        # https://docs.microsoft.com/en-us/python/api/msrest/msrest.pipeline.clientrawresponse?view=azure-python
        take_operation_id = take_snapshot_result.response.headers['Operation-Location'].replace('/operations/', '')
    
        print('Taking snapshot( operation ID:', take_operation_id, ')...')
    
  2. 接下来,不断查询该 ID,直到操作完成。Next, query the ID until the operation has completed.

    # STEP 2, wait for snapshot taking to complete.
    take_status = await wait_for_operation(face_client_source, take_operation_id)
    
    # Get snapshot id from response.
    snapshot_id = take_status.resource_location.replace ('/snapshots/', '')
    
    print('Snapshot ID:', snapshot_id)
    print('Taking snapshot... Done\n')
    

    此代码使用应单独定义的 wait_for_operation 函数:This code makes use of the wait_for_operation function, which you should define separately:

    # Helper function that waits and checks status of API call processing.
    async def wait_for_operation(client, operation_id):
        # Track progress of taking the snapshot.
        # Note Snapshot.get_operation_status is not asynchronous.
        # For information about Snapshot.get_operation_status see:
        # https://github.com/Azure/azure-sdk-for-python/blob/master/azure-cognitiveservices-vision-face/azure/cognitiveservices/vision/face/operations/snapshot_operations.py#L466
        result = client.snapshot.get_operation_status(operation_id=operation_id)
    
        status = result.status.lower()
        print('Operation status:', status)
        if ('notstarted' == status or 'running' == status):
            print("Waiting 10 seconds...")
            await asyncio.sleep(10)
            result = await wait_for_operation(client, operation_id)
        elif ('failed' == status):
            raise Exception("Operation failed. Reason:" + result.message)
        return result
    
  3. 返回到异步函数。Go back to your asynchronous function. 使用 apply 操作将人脸数据写入目标订阅。Use the apply operation to write your face data to your target subscription. 此方法也会返回一个 ID。This method also returns an ID.

    # STEP 3, apply the snapshot to target region(s)
    # Snapshot.apply is not asynchronous.
    # For information about Snapshot.apply see:
    # https://github.com/Azure/azure-sdk-for-python/blob/master/azure-cognitiveservices-vision-face/azure/cognitiveservices/vision/face/operations/snapshot_operations.py#L366
    apply_snapshot_result = face_client_target.snapshot.apply(
        snapshot_id=snapshot_id,
        # Generate a new UUID for the target person group ID.
        object_id=TARGET_PERSON_GROUP_ID,
        # Set this to tell Snapshot.apply to return the response; otherwise it returns None.
        raw=True
        )
    apply_operation_id = apply_snapshot_result.response.headers['Operation-Location'].replace('/operations/', '')
    print('Applying snapshot( operation ID:', apply_operation_id, ')...')
    
  4. 再次使用 wait_for_operation 函数查询该 ID,直到操作完成。Again, use the wait_for_operation function to query the ID until the operation has completed.

    # STEP 4, wait for applying snapshot process to complete.
    await wait_for_operation(face_client_target, apply_operation_id)
    print('Applying snapshot... Done\n')
    print('End of transfer.')
    print()
    

完成这些步骤后,即可从新的(目标)订阅访问你的人脸数据构造。Once you've completed these steps, you'll be able to access your face data constructs from your new (target) subscription.

运行应用程序Run the application

在快速入门文件中使用 python 命令运行应用程序。Run the application with the python command on your quickstart file.

python quickstart-file.py

清理资源Clean up resources

如果想要清理并删除认知服务订阅,可以删除资源或资源组。If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. 删除资源组同时也会删除与之相关联的任何其他资源。Deleting the resource group also deletes any other resources associated with it.

如果你在本快速入门中创建了 PersonGroup 并想要删除它,请在脚本中运行以下代码:If you created a PersonGroup in this quickstart and you want to delete it, run the following code in your script:

# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))
print()

如果你在本快速入门中使用快照功能迁移了数据,则还需要删除保存到目标订阅的 PersonGroupIf you migrated data using the Snapshot feature in this quickstart, you'll also need to delete the PersonGroup saved to the target subscription.

# Delete the person group in the target region.
face_client_target.person_group.delete(TARGET_PERSON_GROUP_ID)
print("Deleted the person group {} from the target location.".format(TARGET_PERSON_GROUP_ID))

后续步骤Next steps

本快速入门介绍了如何使用适用于 Python 的人脸库来执行基本任务。In this quickstart, you learned how to use the Face library for Python to do basic tasks. 接下来,请在参考文档中详细了解该库。Next, explore the reference documentation to learn more about the library.

适用于 Go 的人脸客户端库入门。Get started with the Face client library for Go. 请按照以下步骤安装库并试用基本任务的示例。Follow these steps to install the library and try out our examples for basic tasks. 通过人脸服务,可以访问用于检测和识别图像中的人脸的高级算法。The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

使用适用于 Go 的人脸服务客户端库可以:Use the Face service client library for Go to:

参考文档 | 库源代码 | SDK 下载Reference documentation | Library source code | SDK download

先决条件Prerequisites

设置Set up

创建人脸 Azure 资源Create a Face Azure resource

通过创建 Azure 资源开始使用人脸服务。Begin using the Face service by creating an Azure resource. 选择适合你的资源类型:Choose the resource type that's right for you:

  • 一个人脸服务资源A Face service resource:
    • 在删除资源前,可通过 Azure 门户使用。Available through the Azure portal until you delete the resource.
    • 使用免费定价层试用该服务,稍后升级到用于生产的付费层。Use the free pricing tier to try the service, and upgrade later to a paid tier for production.
  • 一个多服务资源A Multi-Service resource:
    • 在删除资源前,可通过 Azure 门户使用。Available through the Azure portal until you delete the resource.
    • 在多个认知服务中对应用程序使用相同的密钥和终结点。Use the same key and endpoint for your applications, across multiple Cognitive Services.

创建环境变量Create an environment variable

备注

在 2019 年 7 月 1 日之后创建的非试用资源的终结点使用如下所示的自定义子域格式。The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. 有关详细信息和区域终结点的完整列表,请参阅认知服务的自定义子域名For more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.

从创建的资源使用密钥和终结点,创建两个用于身份验证的环境变量:Using your key and endpoint from the resource you created, create two environment variables for authentication:

  • FACE_SUBSCRIPTION_KEY - 用于验证请求的资源密钥。FACE_SUBSCRIPTION_KEY - The resource key for authenticating your requests.
  • FACE_ENDPOINT - 用于发送 API 请求的资源终结点。FACE_ENDPOINT - The resource endpoint for sending API requests. 它将如下所示:It will look like this:
    • https://<your-custom-subdomain>.api.cognitive.microsoft.com

使用操作系统的说明。Use the instructions for your operating system.

setx FACE_SUBSCRIPTION_KEY <replace-with-your-product-name-key>
setx FACE_ENDPOINT <replace-with-your-product-name-endpoint>

添加环境变量后,请重启控制台窗口。After you add the environment variables, restart the console window.

创建 Go 项目目录Create a Go project directory

在控制台窗口(cmd、PowerShell、终端、Bash)中,为 Go 项目创建一个名为 my-app 的新工作区并导航到该工作区。In a console window (cmd, PowerShell, Terminal, Bash), create a new workspace for your Go project, named my-app, and navigate to it.

mkdir -p my-app/{src,bin,pkg}
cd my-app

工作区包含三个文件夹:Your workspace will contain three folders:

  • src - 此目录包含源代码和包。src - This directory will contain source code and packages. 使用 go get 命令安装的任何包都驻留在此文件夹中。Any packages installed with the go get command will be in this folder.
  • pkg - 此目录包含编译的 Go 包对象。pkg - This directory will contain the compiled Go package objects. 这些文件的扩展名为 .aThese files all have a .a extension.
  • bin - 此目录包含运行 go install 时创建的二进制可执行文件。bin - This directory will contain the binary executable files that are created when you run go install.

安装适用于 Go 的客户端库Install the client library for Go

接下来,安装适用于 Go 的客户端库:Next, install the client library for Go:

go get -u github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face

或者,如果使用 dep,则在存储库中运行:or if you use dep, within your repo run:

dep ensure -add github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face

创建 Go 应用程序Create a Go application

接下来,在 src 目录中创建名为 sample-app.go 的文件:Next, create a file in the src directory named sample-app.go:

cd src
touch sample-app.go

在首选 IDE 或文本编辑器中打开 sample-app.goOpen sample-app.go in your preferred IDE or text editor. 然后添加包名称并导入以下库:Then add the package name and import the following libraries:

package main

import (
    "encoding/json"
    "container/list"
    "context"
    "fmt"
    "github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v1.0/face"
    "github.com/Azure/go-autorest/autorest"
    "github.com/satori/go.uuid"
    "io"
    "io/ioutil"
    "log"
    "os"
    "path"
    "strconv"
    "strings"
    "time"
)

接下来,开始添加代码以执行不同的人脸服务操作。Next, you'll begin adding code to carry out different Face service operations.

对象模型Object model

以下类和接口用于处理人脸服务 Go 客户端库的某些主要功能。The following classes and interfaces handle some of the major features of the Face service Go client library.

名称Name 说明Description
BaseClientBaseClient 此类代表使用人脸服务的授权,使用所有人脸功能时都需要用到它。This class represents your authorization to use the Face service, and you need it for all Face functionality. 请使用你的订阅信息实例化此类,然后使用它来生成其他类的实例。You instantiate it with your subscription information, and you use it to produce instances of other classes.
客户端Client 此类处理可对人脸执行的基本检测和识别任务。This class handles the basic detection and recognition tasks that you can do with human faces.
DetectedFaceDetectedFace 此类代表已从图像中的单个人脸检测到的所有数据。This class represents all of the data that was detected from a single face in an image. 可以使用它来检索有关人脸的详细信息。You can use it to retrieve detailed information about the face.
ListClientListClient 此类管理云中存储的 FaceList 构造,这些构造存储各种不同的人脸。This class manages the cloud-stored FaceList constructs, which store an assorted set of faces.
PersonGroupPersonClientPersonGroupPersonClient 此类管理云中存储的 Person 构造,这些构造存储属于单个人员的一组人脸。This class manages the cloud-stored Person constructs, which store a set of faces that belong to a single person.
PersonGroupClientPersonGroupClient 此类管理云中存储的 PersonGroup 构造,这些构造存储各种不同的 Person 对象。This class manages the cloud-stored PersonGroup constructs, which store a set of assorted Person objects.
SnapshotClientSnapshotClient 此类管理快照功能。This class manages the Snapshot functionality. 可以使用它来暂时保存所有基于云的人脸数据,并将这些数据迁移到新的 Azure 订阅。You can use it to temporarily save all of your cloud-based Face data and migrate that data to a new Azure subscription.

代码示例Code examples

这些代码示例演示如何使用适用于 Go 的人脸服务客户端库来完成基本任务:These code samples show you how to complete basic tasks using the Face service client library for Go:

验证客户端Authenticate the client

备注

本快速入门假设已经为人脸密钥和终结点(分别名为 FACE_SUBSCRIPTION_KEYFACE_ENDPOINT创建了环境变量This quickstart assumes you've created environment variables for your Face key and endpoint, named FACE_SUBSCRIPTION_KEY and FACE_ENDPOINT respectively.

创建 main 函数,并在其中添加以下代码,以使用终结点和密钥实例化客户端。Create a main function and add the following code to it to instantiate a client with your endpoint and key. 使用密钥创建 CognitiveServicesAuthorizer 对象,然后在终结点上使用该对象创建 Client 对象。You create a CognitiveServicesAuthorizer object with your key, and use it with your endpoint to create a Client object. 此代码还将实例化一个上下文对象,创建客户端对象时需要该上下文对象。This code also instantiates a context object, which is needed for the creation of client objects. 它还会定义一个远程位置,可在其中找到本快速入门中的一些示例图像。It also defines a remote location where some of the sample images in this quickstart are found.

func main() {

    // A global context for use in all samples
    faceContext := context.Background()

    // Base url for the Verify and Large Face List examples
    const imageBaseURL = "https://csdx.blob.core.windows.net/resources/Face/Images/"

    /*
    Authenticate
    */
    // Add FACE_SUBSCRIPTION_KEY, FACE_ENDPOINT, and AZURE_SUBSCRIPTION_ID to your environment variables.
    subscriptionKey := os.Getenv("FACE_SUBSCRIPTION_KEY")
    
    // This is also known as the 'source' endpoint for the Snapshot example
    endpoint := os.Getenv("FACE_ENDPOINT")

    // Client used for Detect Faces, Find Similar, and Verify examples.
    client := face.NewClient(endpoint)
    client.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)
    /*
    END - Authenticate
    */
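
os.Getenv returns an empty string when a variable isn't set, which only surfaces later as an authentication error. If you want to fail fast instead, a minimal sketch of an optional guard is shown below; getRequiredEnv is a hypothetical helper that is not part of the original sample and would be defined at package level next to main.

// Optional helper: stop immediately with a clear message if a required
// environment variable is missing, instead of calling the service with an empty key.
func getRequiredEnv(name string) string {
    value := os.Getenv(name)
    if value == "" {
        log.Fatalf("environment variable %s is not set", name)
    }
    return value
}

Inside main you would then read the values with, for example, subscriptionKey := getRequiredEnv("FACE_SUBSCRIPTION_KEY").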

在图像中检测人脸Detect faces in an image

main 方法中添加以下代码。Add the following code in your main method. 此代码定义一个远程示例图像,并指定要从该图像中提取哪些人脸特征。This code defines a remote sample image and specifies which face features to extract from the image. 它还会指定要使用哪个 AI 模型从检测到的人脸中提取数据。It also specifies which AI model to use to extract data from the detected face(s). 有关这些选项的信息,请参阅指定识别模型See Specify a recognition model for information on these options. 最后, DetectWithURL 方法针对图像执行人脸检测操作,并将结果保存到程序内存中。Finally, the DetectWithURL method does the face detection operation on the image and saves the results in program memory.

// Detect a face in an image that contains a single face
singleFaceImageURL := "https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg" 
singleImageURL := face.ImageURL { URL: &singleFaceImageURL } 
singleImageName := path.Base(singleFaceImageURL)
// Use recognition model 2 for feature extraction. Recognition model 1 is used to simply recognize faces.
recognitionModel02 := face.Recognition02
// Array types chosen for the attributes of Face
attributes := []face.AttributeType {"age", "emotion", "gender"}
returnFaceID := true
returnRecognitionModel := false
returnFaceLandmarks := false

// API call to detect faces in single-faced image, using recognition model 2
detectSingleFaces, dErr := client.DetectWithURL(faceContext, singleImageURL, &returnFaceID, &returnFaceLandmarks, attributes, recognitionModel02, &returnRecognitionModel)
if dErr != nil { log.Fatal(dErr) }

// Dereference *[]DetectedFace, in order to loop through it.
dFaces := *detectSingleFaces.Value

显示检测到的人脸数据Display detected face data

下一个代码块采用 DetectedFace 对象数组中的第一个元素,并将其特性输出到控制台。The next block of code takes the first element in the array of DetectedFace objects and prints its attributes to the console. 如果使用了包含多个人脸的图像,则应改为迭代该数组。If you used an image with multiple faces, you should iterate through the array instead.

fmt.Println("Detected face in (" + singleImageName + ") with ID(s): ")
fmt.Println(dFaces[0].FaceID)
fmt.Println()
// Find/display the age and gender attributes
for _, dFace := range dFaces { 
    fmt.Println("Face attributes:")
    fmt.Printf("  Age: %.0f", *dFace.FaceAttributes.Age) 
    fmt.Println("\n  Gender: " + dFace.FaceAttributes.Gender) 
} 
// Get/display the emotion attribute
emotionStruct := *dFaces[0].FaceAttributes.Emotion
// Convert struct to a map
var emotionMap map[string]float64
result, _ := json.Marshal(emotionStruct)
json.Unmarshal(result, &emotionMap)
// Find the emotion with the highest score (confidence level). Range is 0.0 - 1.0.
var highest float64 
emotion := ""
dScore := -1.0
for name, value := range emotionMap{
    if (value > highest) {
        emotion, dScore = name, value
        highest = value
    }
}
fmt.Println("  Emotion: " + emotion + " (score: " + strconv.FormatFloat(dScore, 'f', 3, 64) + ")")

查找相似人脸Find similar faces

以下代码采用检测到的单个人脸(源),并搜索其他一组人脸(目标),以找到匹配项。The following code takes a single detected face (source) and searches a set of other faces (target) to find matches. 找到匹配项后,它会将匹配的人脸的 ID 输出到控制台。When it finds a match, it prints the ID of the matched face to the console.

检测人脸以进行比较Detect faces for comparison

首先,保存对检测图像中的人脸部分中检测到的人脸的引用。First, save a reference to the face you detected in the Detect faces in an image section. 此人脸将是源。This face will be the source.

// Select an ID in single-faced image for comparison to faces detected in group image. Used in Find Similar.
firstImageFaceID := dFaces[0].FaceID

然后输入以下代码,以检测不同图像中的一组人脸。Then enter the following code to detect a set of faces in a different image. 这些人脸将是目标。These faces will be the target.

// Detect the faces in an image that contains multiple faces
groupImageURL := "http://www.historyplace.com/kennedy/president-family-portrait-closeup.jpg"
groupImageName := path.Base(groupImageURL)
groupImage := face.ImageURL { URL: &groupImageURL } 

// API call to detect faces in group image, using recognition model 2. This returns a ListDetectedFace struct.
detectedGroupFaces, dgErr := client.DetectWithURL(faceContext, groupImage, &returnFaceID, &returnFaceLandmarks, nil, recognitionModel02, &returnRecognitionModel)
if dgErr != nil { log.Fatal(dgErr) }
fmt.Println()

// Detect faces in the group image.
// Dereference *[]DetectedFace, in order to loop through it.
dFaces2 := *detectedGroupFaces.Value
// Make slice list of UUIDs
faceIDs := make([]uuid.UUID, len(dFaces2))
fmt.Print("Detected faces in (" + groupImageName + ") with ID(s):\n")
for i, face := range dFaces2 {
    faceIDs[i] = *face.FaceID // Dereference DetectedFace.FaceID
    fmt.Println(*face.FaceID)
}

查找匹配项Find matches

以下代码使用 FindSimilar 方法来查找与源人脸匹配的所有目标人脸。The following code uses the FindSimilar method to find all of the target faces that match the source face.

// Add single-faced image ID to struct
findSimilarBody := face.FindSimilarRequest { FaceID: firstImageFaceID, FaceIds: &faceIDs }
// Get the list of similar faces found in the group image of previously detected faces
listSimilarFaces, sErr := client.FindSimilar(faceContext, findSimilarBody)
if sErr != nil { log.Fatal(sErr) }

// The *[]SimilarFace 
simFaces := *listSimilarFaces.Value

以下代码将匹配详细信息输出到控制台。The following code prints the match details to the console.

// Print the details of the similar faces detected 
fmt.Print("Similar faces found in (" + groupImageName + ") with ID(s):\n")
var sScore float64
for _, face := range simFaces {
    fmt.Println(face.FaceID)
    // Confidence of the found face with range 0.0 to 1.0.
    sScore = *face.Confidence
    fmt.Println("The similarity confidence: ", strconv.FormatFloat(sScore, 'f', 3, 64))
}

创建和训练人员组Create and train a person group

若要逐步完成此方案,需将以下图像保存到项目的根目录: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/imagesTo step through this scenario, you need to save the following images to the root directory of your project: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

此图像组包含三组单一人脸图像(对应于三个不同的人)。This group of images contains three sets of single-face images that correspond to three different people. 该代码定义三个 PersonGroup Person 对象,并将其关联到以 womanmanchild 开头的图像文件。The code will define three PersonGroup Person objects and associate them with image files that start with woman, man, and child.

创建 PersonGroupCreate PersonGroup

下载图像后,请将以下代码添加到 main 方法的底部。Once you've downloaded your images, add the following code to the bottom of your main method. 此代码对 PersonGroupClient 对象进行身份验证,然后使用它来定义新的 PersonGroupThis code authenticates a PersonGroupClient object and then uses it to define a new PersonGroup.

// Get working directory
root, rootErr := os.Getwd()
if rootErr != nil { log.Fatal(rootErr) }

// Full path to images folder
imagePathRoot := path.Join(root+"\\images\\")

// Authenticate - Need a special person group client for your person group
personGroupClient := face.NewPersonGroupClient(endpoint)
personGroupClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)

// Create the Person Group
// Create an empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
personGroupID := "unique-person-group"
fmt.Println("Person group ID: " + personGroupID)
metadata := face.MetaDataContract { Name: &personGroupID }

// Create the person group
personGroupClient.Create(faceContext, personGroupID, metadata)

创建 PersonGroup PersonCreate PersonGroup Persons

下一个代码块对 PersonGroupPersonClient 进行身份验证,并使用它来定义三个新的 PersonGroup Person 对象。The next block of code authenticates a PersonGroupPersonClient and uses it to define three new PersonGroup Person objects. 其中的每个对象表示图像集中的一个人。These objects each represent a single person in the set of images.

// Authenticate - Need a special person group person client for your person group person
personGroupPersonClient := face.NewPersonGroupPersonClient(endpoint)
personGroupPersonClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)

// Create each person group person for each group of images (woman, man, child)
// Define woman friend
w := "Woman"
nameWoman := face.NameAndUserDataContract { Name: &w }
// Returns a Person type
womanPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameWoman)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Woman person ID: ")
fmt.Println(womanPerson.PersonID)
// Define man friend
m := "Man"
nameMan := face.NameAndUserDataContract { Name: &m }
// Returns a Person type
manPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameMan)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Man person ID: ")
fmt.Println(manPerson.PersonID)
// Define child friend
ch := "Child"
nameChild := face.NameAndUserDataContract { Name: &ch }
// Returns a Person type
childPerson, wErr := personGroupPersonClient.Create(faceContext, personGroupID, nameChild)
if wErr != nil { log.Fatal(wErr) }
fmt.Print("Child person ID: ")
fmt.Println(childPerson.PersonID)

将人脸添加到 PersonAssign faces to Persons

以下代码按图像前缀排序图像,检测人脸,并根据图像文件名将人脸分配到每个相关的 PersonGroup Person 对象。The following code sorts the images by their prefix, detects faces, and assigns the faces to each respective PersonGroup Person object, based on the image file name.

// Detect faces and register to correct person
// Lists to hold all their person images
womanImages := list.New()
manImages := list.New()
childImages := list.New()

// Collect the local images for each person, add them to their own person group person
images, fErr := ioutil.ReadDir(imagePathRoot)
if fErr != nil { log.Fatal(fErr)}
for _, f := range images {
    path:= (imagePathRoot+f.Name())
    if strings.HasPrefix(f.Name(), "w") {
        var wfile io.ReadCloser
        wfile, err:= os.Open(path)
        if err != nil { log.Fatal(err) }
        womanImages.PushBack(wfile)
        personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *womanPerson.PersonID, wfile, "", nil)
    }
    if strings.HasPrefix(f.Name(), "m") {
        var mfile io.ReadCloser
        mfile, err:= os.Open(path)
        if err != nil { log.Fatal(err) }
        manImages.PushBack(mfile)
        personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *manPerson.PersonID, mfile, "", nil)
    }
    if strings.HasPrefix(f.Name(), "ch") {
        var chfile io.ReadCloser
        chfile, err:= os.Open(path)
        if err != nil { log.Fatal(err) }
        childImages.PushBack(chfile)
        personGroupPersonClient.AddFaceFromStream(faceContext, personGroupID, *childPerson.PersonID, chfile, "", nil)
    }
}

训练 PersonGroupTrain PersonGroup

分配人脸后,请训练 PersonGroup,使其能够识别与其每个 Person 对象关联的视觉特征。Once you've assigned faces, you train the PersonGroup so it can identify the visual features associated with each of its Person objects. 以下代码调用异步 train 方法并轮询结果,然后将状态输出到控制台。The following code calls the asynchronous train method and polls the result, printing the status to the console.

// Train the person group
personGroupClient.Train(faceContext, personGroupID)

// Wait for it to succeed in training
for {
    trainingStatus, tErr := personGroupClient.GetTrainingStatus(faceContext, personGroupID)
    if tErr != nil { log.Fatal(tErr) }
    
    if trainingStatus.Status == "succeeded" {
        fmt.Println("Training status:", trainingStatus.Status)
        break
    }
    time.Sleep(2 * time.Second)
}
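
The polling loop above only exits when training succeeds, so a "failed" status would make it spin forever. A minimal variation that also stops on failure is sketched below; it assumes the returned training status carries a Message string describing the failure, as the REST API's training status does.

// Alternative polling loop that also handles a failed training run.
for {
    trainingStatus, tErr := personGroupClient.GetTrainingStatus(faceContext, personGroupID)
    if tErr != nil { log.Fatal(tErr) }

    if trainingStatus.Status == "succeeded" {
        fmt.Println("Training status:", trainingStatus.Status)
        break
    }
    if trainingStatus.Status == "failed" {
        // Message explains why training failed, for example if no faces were added.
        log.Fatal("Training failed: " + *trainingStatus.Message)
    }
    time.Sleep(2 * time.Second)
}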

识别人脸Identify a face

以下代码采用包含多个人脸的图像,并尝试在该图像中查找每个人的标识。The following code takes an image with multiple faces and looks to find the identity of each person in the image. 它将每个检测到的人脸与某个 PersonGroup(面部特征已知的不同 Person 对象的数据库)进行比较。It compares each detected face to a PersonGroup, a database of different Person objects whose facial features are known.

重要

若要运行此示例,必须先运行创建和训练人员组中的代码。In order to run this example, you must first run the code in Create and train a person group.

获取测试图像Get a test image

以下代码在项目根目录中查找图像 test-image-person-group.jpg,并将其载入程序内存。The following code looks in the root of your project for an image test-image-person-group.jpg and loads it into program memory. 可以在创建和训练人员组中使用的图像所在的同一个存储库中找到此图像: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/imagesYou can find this image in the same repo as the images used in Create and train a person group: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/Face/images.

personGroupTestImageName := "test-image-person-group.jpg"
// Use image path root from the one created in person group
personGroupTestImagePath := imagePathRoot
var personGroupTestImage io.ReadCloser
// Returns a ReaderCloser
personGroupTestImage, identErr:= os.Open(personGroupTestImagePath+personGroupTestImageName)
if identErr != nil { log.Fatal(identErr) }

检测测试图像中的源人脸Detect source faces in test image

下一个代码块针对测试图像执行普通的人脸检测,以检索所有人脸并将其保存到数组中。The next code block does ordinary face detection on the test image to retrieve all of the faces and save them to an array.

// Detect faces in group test image, using recognition model 1 (default)
returnIdentifyFaceID := true
// Returns a ListDetectedFaces
detectedTestImageFaces, dErr := client.DetectWithStream(faceContext, personGroupTestImage, &returnIdentifyFaceID, nil, nil, face.Recognition01, nil)
if dErr != nil { log.Fatal(dErr) }

// Make list of face IDs from the detection. 
length := len(*detectedTestImageFaces.Value)
testImageFaceIDs := make([]uuid.UUID, length)
// ListDetectedFace is a struct with a Value property that returns a *[]DetectedFace
for i, f := range *detectedTestImageFaces.Value {
    testImageFaceIDs[i] = *f.FaceID
}

标识人脸Identify faces

Identify 方法采用检测到的人脸的数组,并将其与给定的 PersonGroup(已在前一部分定义并训练)进行比较。The Identify method takes the array of detected faces and compares them to the given PersonGroup (defined and trained in the earlier section). 如果检测到的某个人脸与组中的某个人相匹配,则它会保存结果。If it can match a detected face to a Person in the group, it saves the result.

// Identify the faces in the test image with everyone in the person group as a query
identifyRequestBody := face.IdentifyRequest { FaceIds: &testImageFaceIDs, PersonGroupID: &personGroupID }
identifiedFaces, err := client.Identify(faceContext, identifyRequestBody)
if err != nil { log.Fatal(err) }

然后,此代码将详细的匹配结果输出到控制台。This code then prints detailed match results to the console.

// Get the result which person(s) were identified
iFaces := *identifiedFaces.Value
for _, person := range iFaces {
    fmt.Println("Person for face ID: " )
    fmt.Print(person.FaceID)
    fmt.Println(" is identified in " + personGroupTestImageName + ".")
}
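
Each IdentifyResult also carries the candidate matches and their confidence scores, so you can report which registered person a face was matched to. A rough sketch follows; it assumes each result's Candidates slice is ordered by confidence and uses PersonGroupPersonClient.Get to look up the person's name, mirroring the REST API's identify and person-get responses.

// For each identified face, resolve the best candidate to a Person and print its name.
for _, identifyResult := range iFaces {
    candidates := *identifyResult.Candidates
    if len(candidates) == 0 {
        // No match above the confidence threshold for this face.
        fmt.Println("No person identified for face ID:", *identifyResult.FaceID)
        continue
    }
    bestCandidate := candidates[0]
    matchedPerson, pErr := personGroupPersonClient.Get(faceContext, personGroupID, *bestCandidate.PersonID)
    if pErr != nil { log.Fatal(pErr) }
    fmt.Printf("Face %v identified as %v (confidence %.3f)\n",
        *identifyResult.FaceID, *matchedPerson.Name, *bestCandidate.Confidence)
}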

验证人脸Verify faces

验证操作采用某个人脸 ID 和其他人脸 ID 或 Person 对象,并确定它们是否属于同一个人。The Verify operation takes a face ID and either another face ID or a Person object and determines whether they belong to the same person.

以下代码检测两个源图像中的人脸,然后根据目标图像中检测到的人脸来验证源图像中的每个人脸。The following code detects faces in two source images and then verifies each of them against a face detected from a target image.

获取测试图像Get test images

以下代码块声明将指向验证操作的目标和源图像的变量。The following code blocks declare variables that will point to the target and source images for the verification operation.

// Create a slice list to hold the target photos of the same person
targetImageFileNames :=  make([]string, 2)
targetImageFileNames[0] = "Family1-Dad1.jpg"
targetImageFileNames[1] = "Family1-Dad2.jpg"

// The source photos contain this person, maybe
sourceImageFileName1 := "Family1-Dad3.jpg"
sourceImageFileName2 := "Family1-Son1.jpg"

检测人脸进行验证Detect faces for verification

以下代码检测源和目标图像中的人脸并将其保存到变量中。The following code detects faces in the source and target images and saves them to variables.

// DetectWithURL parameters
urlSource1 := imageBaseURL + sourceImageFileName1
urlSource2 := imageBaseURL + sourceImageFileName2
url1 :=  face.ImageURL { URL: &urlSource1 }
url2 := face.ImageURL { URL: &urlSource2 }
returnFaceIDVerify := true
returnFaceLandmarksVerify := false
returnRecognitionModelVerify := false
// Recognition model 1 is used to recognize a face, not extract features from it.
recognitionModel01 := face.Recognition01

// Detect face(s) from source image 1, returns a ListDetectedFace struct
detectedVerifyFaces1, dErrV1 := client.DetectWithURL(faceContext, url1 , &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, recognitionModel01, &returnRecognitionModelVerify)
if dErrV1 != nil { log.Fatal(dErrV1) }
// Dereference the result, before getting the ID
dVFaceIds1 := *detectedVerifyFaces1.Value 
// Get ID of the detected face
imageSource1Id := dVFaceIds1[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaceIds1), sourceImageFileName1))

// Detect face(s) from source image 2, returns a ListDetectedFace struct
detectedVerifyFaces2, dErrV2 := client.DetectWithURL(faceContext, url2 , &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, recognitionModel01, &returnRecognitionModelVerify)
if dErrV2 != nil { log.Fatal(dErrV2) }
// Dereference the result, before getting the ID
dVFaceIds2 := *detectedVerifyFaces2.Value 
// Get ID of the detected face
imageSource2Id := dVFaceIds2[0].FaceID
fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaceIds2), sourceImageFileName2))
// Detect faces from each target image url in list. DetectWithURL returns a VerifyResult with Value of list[DetectedFaces]
// Empty slice list for the target face IDs (UUIDs)
var detectedVerifyFacesIds [2]uuid.UUID
for i, imageFileName := range targetImageFileNames {
    urlSource := imageBaseURL + imageFileName 
    url :=  face.ImageURL { URL: &urlSource}
    detectedVerifyFaces, dErrV := client.DetectWithURL(faceContext, url, &returnFaceIDVerify, &returnFaceLandmarksVerify, nil, recognitionModel01, &returnRecognitionModelVerify)
    if dErrV != nil { log.Fatal(dErrV) }
    // Dereference *[]DetectedFace from Value in order to loop through it.
    dVFaces := *detectedVerifyFaces.Value
    // Add the returned face's face ID
    detectedVerifyFacesIds[i] = *dVFaces[0].FaceID
    fmt.Println(fmt.Sprintf("%v face(s) detected from image: %v", len(dVFaces), imageFileName))
}

获取验证结果Get verification results

以下代码将每个源图像与目标图像进行比较并打印出一条消息,指示它们是否属于同一个人。The following code compares each of the source images to the target image and prints a message indicating whether they belong to the same person.

// Verification example for faces of the same person. The higher the confidence, the more identical the faces in the images are.
// Since target faces are the same person, in this example, we can use the 1st ID in the detectedVerifyFacesIds list to compare.
verifyRequestBody1 := face.VerifyFaceToFaceRequest{ FaceID1: imageSource1Id, FaceID2: &detectedVerifyFacesIds[0] }
verifyResultSame, vErrSame := client.VerifyFaceToFace(faceContext, verifyRequestBody1)
if vErrSame != nil { log.Fatal(vErrSame) }

fmt.Println()

// Check if the faces are from the same person.
if (*verifyResultSame.IsIdentical) {
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of the same person, with confidence %v", 
    sourceImageFileName1, targetImageFileNames[0], strconv.FormatFloat(*verifyResultSame.Confidence, 'f', 3, 64)))
} else {
    // Low confidence means the faces are more different than alike.
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of a different person, with confidence %v", 
    sourceImageFileName1, targetImageFileNames[0], strconv.FormatFloat(*verifyResultSame.Confidence, 'f', 3, 64)))
}

// Verification example for faces of different persons. 
// Since target faces are same person, in this example, we can use the 1st ID in the detectedVerifyFacesIds list to compare.
verifyRequestBody2 := face.VerifyFaceToFaceRequest{ FaceID1: imageSource2Id, FaceID2: &detectedVerifyFacesIds[0] }
verifyResultDiff, vErrDiff := client.VerifyFaceToFace(faceContext, verifyRequestBody2)
if vErrDiff != nil { log.Fatal(vErrDiff) }
// Check if the faces are from the same person.
if (*verifyResultDiff.IsIdentical) {
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of the same person, with confidence %v", 
    sourceImageFileName2, targetImageFileNames[0], strconv.FormatFloat(*verifyResultDiff.Confidence, 'f', 3, 64)))
} else {
    // Low confidence means the faces are more different than alike.
    fmt.Println(fmt.Sprintf("Faces from %v & %v are of a different person, with confidence %v", 
    sourceImageFileName2, targetImageFileNames[0], strconv.FormatFloat(*verifyResultDiff.Confidence, 'f', 3, 64)))
}
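
Verify can also compare a detected face directly against a Person in a trained PersonGroup rather than against another face ID. The sketch below assumes the PersonGroup and the womanPerson object from Create and train a person group are still in scope, and that the SDK exposes the REST API's verify-to-person call as VerifyFaceToPerson; the pairing of face and person here is only illustrative, so IsIdentical is simply false when they don't match.

// Verify a face ID against a specific Person in a trained person group.
verifyToPersonBody := face.VerifyFaceToPersonRequest {
    FaceID: imageSource1Id,
    PersonGroupID: &personGroupID,
    PersonID: womanPerson.PersonID,
}
verifyToPersonResult, vpErr := client.VerifyFaceToPerson(faceContext, verifyToPersonBody)
if vpErr != nil { log.Fatal(vpErr) }
fmt.Printf("Face from %v and the Woman person are the same: %v (confidence %.3f)\n",
    sourceImageFileName1, *verifyToPersonResult.IsIdentical, *verifyToPersonResult.Confidence)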

创建用于数据迁移的快照Take a snapshot for data migration

利用快照功能,可将已保存的人脸数据(例如训练的 PersonGroup)移到不同的 Azure 认知服务人脸订阅。The Snapshots feature lets you move your saved face data, such as a trained PersonGroup, to a different Azure Cognitive Services Face subscription. 例如,如果你使用免费试用订阅创建了一个 PersonGroup 对象,现在想要将其迁移到付费订阅,则可以使用此功能。You might use this feature if, for example, you've created a PersonGroup object using a free trial subscription and now want to migrate it to a paid subscription. 有关快照功能的大致概述,请参阅迁移人脸数据See the Migrate your face data for a broad overview of the Snapshots feature.

此示例将迁移你在创建和训练人员组中创建的 PersonGroupIn this example, you'll migrate the PersonGroup you created in Create and train a person group. 可以先完成该部分,或者使用你自己的人脸数据构造。You can either complete that section first, or use your own Face data construct(s).

设置目标订阅Set up target subscription

首先,必须有另一个已包含人脸资源的 Azure 订阅;为此,可以重复设置部分中的步骤。First, you must have a second Azure subscription with a Face resource; you can do this by repeating the steps in the Set up section.

然后在 main 方法的顶部附近创建以下变量。Then, create the following variables near the top of your main method. 还需要为 Azure 帐户的订阅 ID 以及新(目标)帐户的密钥、终结点和订阅 ID 创建新的环境变量。You'll also need to create new environment variables for the subscription ID of your Azure account, as well as the key, endpoint, and subscription ID of your new (target) account.

// This key should be from another Face resource with a different region. 
// Used for the Snapshot example only.
targetSubscriptionKey := os.Getenv("FACE_SUBSCRIPTION_KEY2")

// This should have a different region than your source endpoint. Used only in the Snapshot example.
targetEndpoint := os.Getenv("FACE_ENDPOINT2")

// Get your subscription ID (different than the key) from any Face resource in Azure.
azureSubscriptionID, uuidErr := uuid.FromString(os.Getenv("AZURE_SUBSCRIPTION_ID"))

然后,将订阅 ID 值放入某个数组,以便在后续步骤中使用。Then, put your subscription ID value into an array for the next steps.

// Add your Azure subscription ID(s) to a UUID array.
numberOfSubKeys := 1 
targetUUIDArray := make([]uuid.UUID, numberOfSubKeys)
for i := range targetUUIDArray {
    targetUUIDArray[i] = azureSubscriptionID
}

对目标客户端进行身份验证Authenticate target client

稍后需要在脚本中将原始客户端对象保存为源客户端,并对目标订阅的新客户端对象进行身份验证。Later in your script, save your original client object as the source client, and then authenticate a new client object for your target subscription.

// Create a client from your source region, where your person group exists. Use for taking the snapshot.
snapshotSourceClient := face.NewSnapshotClient(endpoint)
snapshotSourceClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(subscriptionKey)
// Create a client for your target region. Use for applying the snapshot.
snapshotTargetClient := face.NewSnapshotClient(targetEndpoint)
snapshotTargetClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(targetSubscriptionKey)

生成快照Take a snapshot

下一步是使用 Take 创建快照,以将原始订阅的人脸数据保存到临时云位置。The next step is to take the snapshot with Take, which saves your original subscription's face data to a temporary cloud location. 此方法返回用于查询操作状态的 ID。This method returns an ID that you use to query the status of the operation.

// Take snapshot
takeBody := face.TakeSnapshotRequest { Type: face.SnapshotObjectTypePersonGroup, ObjectID: &personGroupID, ApplyScope: &targetUUIDArray }
takeSnapshotResult, takeErr := snapshotSourceClient.Take(faceContext, takeBody)
if takeErr != nil { log.Fatal(takeErr) }
// Get the operations ID
strTakeOperation := strings.ReplaceAll(takeSnapshotResult.Header.Get("Operation-Location"), "/operations/", "")
fmt.Println("Taking snapshot (operations ID: " + strTakeOperation + ")... started")
// Convert string operation ID to UUID
takeOperationID, uuidErr := uuid.FromString(strTakeOperation)
if uuidErr != nil { log.Fatal(uuidErr) }

接下来,不断查询该 ID,直到操作完成。Next, query the ID until the operation has completed.

// Wait for the snapshot taking to finish
var strSnapshotID string
for {
    takeSnapshotStatus, tErr := snapshotSourceClient.GetOperationStatus(faceContext, takeOperationID)
    if tErr != nil { log.Fatal(tErr) }
    
    if takeSnapshotStatus.Status == "succeeded" {
        fmt.Println("Taking snapshot operation status: ", takeSnapshotStatus.Status)
        strSnapshotID = strings.ReplaceAll(*takeSnapshotStatus.ResourceLocation, "/snapshots/", "")
        break
    }
    time.Sleep(2 * time.Second)
}

// Convert string snapshot to UUID
snapshotID, uuidErr := uuid.FromString(strSnapshotID)
if uuidErr != nil { log.Fatal(uuidErr) }

应用快照Apply the snapshot

使用 Apply 操作将新上传的人脸数据写入目标订阅。Use the Apply operation to write your newly uploaded face data to your target subscription. 此方法也会返回一个 ID。This method also returns an ID.

// Creates a new snapshot instance in your target region. 
// Make sure not to create a new snapshot in your target region with the same name as another one.
applyBody := face.ApplySnapshotRequest { ObjectID: &personGroupID }
applySnapshotResult, applyErr := snapshotTargetClient.Apply(faceContext, snapshotID, applyBody)
if applyErr != nil { log.Fatal(applyErr) }

// Get operation ID from response to track the progress of applying a snapshot.
strApplyOperation := strings.ReplaceAll(applySnapshotResult.Header.Get("Operation-Location"), "/operations/", "")
fmt.Println("Applying snapshot (operations ID: " + strApplyOperation + ")... started")
// Convert operation ID to GUID
applyOperationID, guidErr := uuid.FromString(strApplyOperation)
if guidErr != nil { log.Fatal(guidErr) }

同样,请不断查询该 ID,直到操作完成。Again, query the ID until the operation has completed.

// Wait for the snapshot applying to finish
for {
    applySnapshotStatus, aErr := snapshotTargetClient.GetOperationStatus(faceContext, applyOperationID)
    if aErr != nil { log.Fatal(aErr) }
    
    if applySnapshotStatus.Status == "succeeded" {
        fmt.Println("Taking snapshot operation status: ", applySnapshotStatus.Status)
        break
    }
    time.Sleep(2 * time.Second)
}

完成这些步骤后,即可从新的(目标)订阅访问你的人脸数据构造。Once you've completed these steps, you can access your face data constructs from your new (target) subscription.

运行应用程序Run the application

从应用程序目录使用 go run [arguments] 命令运行 Go 应用程序。Run your Go application with the go run [arguments] command from your application directory.

go run sample-app.go

清理资源Clean up resources

如果想要清理并删除认知服务订阅,可以删除资源或资源组。If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. 删除资源组同时也会删除与之相关联的任何其他资源。Deleting the resource group also deletes any other resources associated with it.

如果你在本快速入门中创建了一个 PersonGroup 但想要删除它,请调用 Delete 方法。If you created a PersonGroup in this quickstart and you want to delete it, call the Delete method. 如果你在本快速入门中使用快照功能迁移了数据,则还需要删除保存到目标订阅的 PersonGroupIf you migrated data using the Snapshot feature in this quickstart, you'll also need to delete the PersonGroup saved to the target subscription.
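
For reference, a minimal sketch of those delete calls is shown below. The snapshot clients only manage snapshots, so a separate PersonGroupClient bound to the target endpoint is assumed for the target-side delete; in this walkthrough the applied person group reuses the same personGroupID.

// Delete the person group from the source (original) subscription.
if _, err := personGroupClient.Delete(faceContext, personGroupID); err != nil { log.Fatal(err) }
fmt.Println("Deleted the source person group:", personGroupID)

// If you applied a snapshot, also delete the copy in the target subscription.
targetPersonGroupClient := face.NewPersonGroupClient(targetEndpoint)
targetPersonGroupClient.Authorizer = autorest.NewCognitiveServicesAuthorizer(targetSubscriptionKey)
if _, err := targetPersonGroupClient.Delete(faceContext, personGroupID); err != nil { log.Fatal(err) }
fmt.Println("Deleted the target person group:", personGroupID)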

后续步骤Next steps

在本快速入门中,你已了解如何使用适用于 Go 的人脸库来执行基本任务。In this quickstart, you learned how to use the Face library for Go to do basic tasks. 接下来,请在参考文档中详细了解该库。Next, explore the reference documentation to learn more about the library.