Tutorial: Create an Android app to detect and frame faces in an image

In this tutorial, you will create a simple Android application that uses the Azure Face API, through the Java SDK, to detect human faces in an image. The application displays a selected image and draws a frame around each detected face.

This tutorial shows you how to:

  • Create an Android application
  • Install the Face API client library
  • Use the client library to detect faces in an image
  • Draw a frame around each detected face

Android screenshot of a photo in which the faces are framed with red rectangles

The complete sample code is available in the Cognitive Services Face Android repository on GitHub.

If you don't have an Azure subscription, create a Trial Account before you begin.

Prerequisites

  • A Face API subscription key. You can follow the instructions in Create a Cognitive Services account to subscribe to the Face API service and get your key.
  • Android Studio with API level 22 or later (required by the Face client library).

Create the Android Studio project

Follow these steps to create a new Android application project.

  1. In Android Studio, select Start a new Android Studio project.
  2. On the Create Android Project screen, modify the default fields, if necessary, then click Next.
  3. On the Target Android Devices screen, use the dropdown selector to choose API 22 or later, then click Next.
  4. Select Empty Activity, then click Next.
  5. Uncheck Backwards Compatibility, then click Finish.

Add the initial code

Create the UI

Open activity_main.xml. In the Layout Editor, select the Text tab, then replace the contents with the following code.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools" tools:context=".MainActivity"
    android:layout_width="match_parent" android:layout_height="match_parent">

    <ImageView
        android:layout_width="match_parent"
        android:layout_height="fill_parent"
        android:id="@+id/imageView1"
        android:layout_above="@+id/button1"
        android:contentDescription="Image with faces to analyze"/>

    <Button
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="Browse for a face image"
        android:id="@+id/button1"
        android:layout_alignParentBottom="true"/>
</RelativeLayout>

Create the main class

Open MainActivity.java and replace the existing import statements with the following code.

import java.io.*;
import android.app.*;
import android.content.*;
import android.net.*;
import android.os.*;
import android.view.*;
import android.graphics.*;
import android.widget.*;
import android.provider.*;

Then, replace the contents of the MainActivity class with the following code. This creates an event handler on the Button that starts a new activity to allow the user to select a picture. It displays the picture in the ImageView.

private final int PICK_IMAGE = 1;
private ProgressDialog detectionProgressDialog;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    Button button1 = findViewById(R.id.button1);
    button1.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
            intent.setType("image/*");
            startActivityForResult(Intent.createChooser(
                    intent, "Select Picture"), PICK_IMAGE);
        }
    });

    detectionProgressDialog = new ProgressDialog(this);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_IMAGE && resultCode == RESULT_OK &&
            data != null && data.getData() != null) {
        Uri uri = data.getData();
        try {
            Bitmap bitmap = MediaStore.Images.Media.getBitmap(
                    getContentResolver(), uri);
            ImageView imageView = findViewById(R.id.imageView1);
            imageView.setImageBitmap(bitmap);

            // Comment out this call for the "Try the app" step below;
            // uncomment it after you add the Face SDK.
            detectAndFrame(bitmap);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Try the app

Comment out the call to detectAndFrame in the onActivityResult method, as shown in the excerpt that follows. Then press Run on the menu to test your app. When the app opens, either in an emulator or on a connected device, click the Browse button at the bottom. The device's file selection dialog should appear. Choose an image and verify that it displays in the window. Then, close the app and advance to the next step.
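
For reference, the tail of onActivityResult should look like this with the call commented out; everything else in the method stays unchanged.

            Bitmap bitmap = MediaStore.Images.Media.getBitmap(
                    getContentResolver(), uri);
            ImageView imageView = findViewById(R.id.imageView1);
            imageView.setImageBitmap(bitmap);

            // Commented out until the Face SDK is added in the next section.
            // detectAndFrame(bitmap);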

Android screenshot of a photo that contains faces

Add the Face SDK

Add the Gradle dependency

In the Project pane, use the dropdown selector to select Android. Expand Gradle Scripts, then open build.gradle (Module: app). Add a dependency for the Face client library, com.microsoft.projectoxford:face:1.4.3, as shown in the screenshot below, then click Sync Now.

Android Studio screenshot of the app's build.gradle file
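
As a minimal sketch, the dependencies block of build.gradle (Module: app) should end up containing a line like the one below. The implementation keyword assumes a recent Android Gradle plugin; older projects use compile instead, and the other entries are whatever Android Studio already generated.

dependencies {
    // Dependencies generated by Android Studio stay as they are.
    // ...

    // Face API client library
    implementation 'com.microsoft.projectoxford:face:1.4.3'
}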

Go back to MainActivity.java and add the following import statements:

import com.microsoft.projectoxford.face.*;
import com.microsoft.projectoxford.face.contract.*;

Then, insert the following code in the MainActivity class, above the onCreate method:

private final String apiEndpoint = "https://api.cognitive.azure.cn/face/v1.0";

// Replace `<Subscription Key>` with your subscription key.
// For example, subscriptionKey = "0123456789abcdef0123456789ABCDEF"
private final String subscriptionKey = "<Subscription Key>";

private final FaceServiceClient faceServiceClient =
        new FaceServiceRestClient(apiEndpoint, subscriptionKey);

You will need to replace <Subscription Key> with your subscription key.

In the Project pane, expand app, then manifests, and open AndroidManifest.xml. Insert the following element as a direct child of the manifest element:

<uses-permission android:name="android.permission.INTERNET" />
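
As a minimal sketch of the placement, the manifest should look roughly like this after the edit; the package name com.example.facetutorial is only a placeholder for your own, and the elements marked with ... are whatever Android Studio generated.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.facetutorial">

    <!-- Required so the app can call the Face API over the network. -->
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- The application element generated by Android Studio stays unchanged. -->
    <application ... >
        ...
    </application>
</manifest>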

Upload the image and detect faces

Your app will detect faces by calling the FaceServiceClient.detect method, which wraps the Detect REST API and returns a list of Face instances.

Each returned Face includes a rectangle to indicate its location, combined with a series of optional face attributes. In this example, only the face rectangles are requested.

Insert the following two methods into the MainActivity class. Note that when face detection completes, the app calls the drawFaceRectanglesOnBitmap method to modify the ImageView. You will define this method next.

// Detect faces by uploading a face image.
// Frame faces after detection.
private void detectAndFrame(final Bitmap imageBitmap) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    imageBitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);
    ByteArrayInputStream inputStream =
            new ByteArrayInputStream(outputStream.toByteArray());

    AsyncTask<InputStream, String, Face[]> detectTask =
            new AsyncTask<InputStream, String, Face[]>() {
                String exceptionMessage = "";

                @Override
                protected Face[] doInBackground(InputStream... params) {
                    try {
                        publishProgress("Detecting...");
                        Face[] result = faceServiceClient.detect(
                                params[0],
                                true,         // returnFaceId
                                false,        // returnFaceLandmarks
                                null          // returnFaceAttributes:
                                /* new FaceServiceClient.FaceAttributeType[] {
                                    FaceServiceClient.FaceAttributeType.Age,
                                    FaceServiceClient.FaceAttributeType.Gender }
                                */
                        );
                        if (result == null){
                            publishProgress(
                                    "Detection Finished. Nothing detected");
                            return null;
                        }
                        publishProgress(String.format(
                                "Detection Finished. %d face(s) detected",
                                result.length));
                        return result;
                    } catch (Exception e) {
                        exceptionMessage = String.format(
                                "Detection failed: %s", e.getMessage());
                        return null;
                    }
                }

                @Override
                protected void onPreExecute() {
                    // Show the progress dialog before detection starts.
                    detectionProgressDialog.show();
                }
                @Override
                protected void onProgressUpdate(String... progress) {
                    // Update the progress dialog with the latest status message.
                    detectionProgressDialog.setMessage(progress[0]);
                }
                @Override
                protected void onPostExecute(Face[] result) {
                    // Dismiss the dialog, then draw a frame around each detected face.
                    detectionProgressDialog.dismiss();

                    if(!exceptionMessage.equals("")){
                        showError(exceptionMessage);
                    }
                    if (result == null) return;

                    ImageView imageView = findViewById(R.id.imageView1);
                    imageView.setImageBitmap(
                            drawFaceRectanglesOnBitmap(imageBitmap, result));
                    imageBitmap.recycle();
                }
            };

    detectTask.execute(inputStream);
}

private void showError(String message) {
    new AlertDialog.Builder(this)
            .setTitle("Error")
            .setMessage(message)
            .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int id) {
                }})
            .create().show();
}

Draw face rectangles

Insert the following helper method into the MainActivity class. This method draws a rectangle around each detected face, using the rectangle coordinates of each Face instance.

private static Bitmap drawFaceRectanglesOnBitmap(
        Bitmap originalBitmap, Face[] faces) {
    Bitmap bitmap = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint();
    paint.setAntiAlias(true);
    paint.setStyle(Paint.Style.STROKE);
    paint.setColor(Color.RED);
    paint.setStrokeWidth(10);
    if (faces != null) {
        for (Face face : faces) {
            FaceRectangle faceRectangle = face.faceRectangle;
            canvas.drawRect(
                    faceRectangle.left,
                    faceRectangle.top,
                    faceRectangle.left + faceRectangle.width,
                    faceRectangle.top + faceRectangle.height,
                    paint);
        }
    }
    return bitmap;
}

Finally, uncomment the call to the detectAndFrame method in onActivityResult.

Run the app

Run the application and browse for an image with a face. Wait a few seconds to allow the Face service to respond. You should see a red rectangle on each of the faces in the image.

Android screenshot of faces with red rectangles drawn around them

Next steps

In this tutorial, you learned the basic process for using the Face API Java SDK and created an application to detect and frame faces in an image. Next, learn more about the details of face detection.