Quickstart: Image Analysis

Get started with the Image Analysis REST API or client libraries to set up a basic image tagging script. The Analyze Image service provides AI algorithms for processing images and returning information about their visual features. Follow these steps to install a package in your application and try out the sample code.

Use the Image Analysis client library for C# to analyze an image for content tags. This quickstart defines a method, AnalyzeImageUrl, which uses the client object to analyze a remote image and print the results.

Reference documentation | Library source code | Package (NuGet) | Samples

Tip

You can also analyze a local image. See the ComputerVisionClient methods, such as AnalyzeImageInStreamAsync. Or, see the sample code on GitHub for scenarios involving local images.
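
For example, a local-image variant of the tagging call might look like the following sketch. It assumes the client object and features list created later in this quickstart; the file path is a placeholder.

using (Stream imageStream = File.OpenRead("sample.jpg"))
{
    ImageAnalysis results = await client.AnalyzeImageInStreamAsync(
        imageStream,
        visualFeatures: new List<VisualFeatureTypes?> { VisualFeatureTypes.Tags });
}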

Tip

The Analyze API can do many different operations other than generate image tags. See the Image Analysis how-to guide for examples that showcase all of the available features.
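
For instance, you can request several features in one call by extending the features list shown later in this quickstart (a sketch; Description and Objects are other members of the same VisualFeatureTypes enum):

List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
{
    VisualFeatureTypes.Tags, VisualFeatureTypes.Description, VisualFeatureTypes.Objects
};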

Prerequisites

  • An Azure subscription - Create one for trial
  • The Visual Studio IDE or current version of .NET Core.
  • Once you have your Azure subscription, create a Computer Vision resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options like Azure Key Vault.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace your-key with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace your-endpoint with the endpoint for your resource.
setx VISION_KEY your-key
setx VISION_ENDPOINT your-endpoint
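
The setx command applies to Windows. On Linux or macOS, a rough equivalent (assuming a bash-like shell) is to add export statements to your shell profile and reload the shell:

export VISION_KEY=your-key
export VISION_ENDPOINT=your-endpoint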

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Analyze image

  1. Create a new C# application.

    Using Visual Studio, create a new .NET Core application.

    Install the client library

    Once you've created a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.ComputerVision. Select version 7.0.0, and then select Install.
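
    If you prefer the command line, a rough equivalent (a sketch, assuming the .NET CLI is installed) is:

    dotnet new console -o ComputerVisionQuickstart
    cd ComputerVisionQuickstart
    dotnet add package Microsoft.Azure.CognitiveServices.Vision.ComputerVision --version 7.0.0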

  2. From the project directory, open the Program.cs file in your preferred editor or IDE. Paste in the following code:

using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using System.Threading.Tasks;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System.Threading;
using System.Linq;

namespace ComputerVisionQuickstart
{
    class Program
    {
        // Add your Computer Vision key and endpoint
        static string key = Environment.GetEnvironmentVariable("VISION_KEY");
        static string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");

        // URL of the image to analyze (image of a puppy)
        private const string ANALYZE_URL_IMAGE = "https://moderatorsampleimages.blob.core.chinacloudapi.cn/samples/sample16.png";

        static void Main(string[] args)
        {
            Console.WriteLine("Azure Cognitive Services Computer Vision - .NET quickstart example");
            Console.WriteLine();

            // Create a client
            ComputerVisionClient client = Authenticate(endpoint, key);

            // Analyze an image to get features and other properties.
            AnalyzeImageUrl(client, ANALYZE_URL_IMAGE).Wait();
        }

        /*
         * AUTHENTICATE
         * Creates a Computer Vision client used by each example.
         */
        public static ComputerVisionClient Authenticate(string endpoint, string key)
        {
            ComputerVisionClient client =
              new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
              { Endpoint = endpoint };
            return client;
        }
       
        public static async Task AnalyzeImageUrl(ComputerVisionClient client, string imageUrl)
        {
            Console.WriteLine("----------------------------------------------------------");
            Console.WriteLine("ANALYZE IMAGE - URL");
            Console.WriteLine();

            // Creating a list that defines the features to be extracted from the image. 

            List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
            {
                VisualFeatureTypes.Tags
            };

            Console.WriteLine($"Analyzing the image {Path.GetFileName(imageUrl)}...");
            Console.WriteLine();
            // Analyze the URL image 
            ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features);

            // Image tags and their confidence score
            Console.WriteLine("Tags:");
            foreach (var tag in results.Tags)
            {
                Console.WriteLine($"{tag.Name} {tag.Confidence}");
            }
            Console.WriteLine();
        }
    }
}

Important

Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like Azure Key Vault. See the Azure AI services security article for more information.

  3. Run the application

    Run the application by clicking the Debug button at the top of the IDE window.


Output

----------------------------------------------------------
ANALYZE IMAGE - URL

Analyzing the image sample16.png...

Tags:
grass 0.9957543611526489
dog 0.9939157962799072
mammal 0.9928356409072876
animal 0.9918001890182495
dog breed 0.9890419244766235
pet 0.974603533744812
outdoor 0.969241738319397
companion dog 0.906731367111206
small greek domestic dog 0.8965123891830444
golden retriever 0.8877675533294678
labrador retriever 0.8746421337127686
puppy 0.872604250907898
ancient dog breeds 0.8508287668228149
field 0.8017748594284058
retriever 0.6837497353553772
brown 0.6581960916519165

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the Image Analysis client library and make basic image analysis calls. Next, learn more about the Analyze API features.

Use the Image Analysis client library for Python to analyze a remote image for content tags.

Tip

You can also analyze a local image. See the ComputerVisionClientOperationsMixin methods, such as analyze_image_in_stream. Or, see the sample code on GitHub for scenarios involving local images.
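
For example, a local-image variant of the analyze call might look like the following sketch. It assumes the computervision_client object created later in this quickstart; the file path is a placeholder.

with open("sample.jpg", "rb") as image_stream:
    local_analysis = computervision_client.analyze_image_in_stream(
        image_stream, visual_features=[VisualFeatureTypes.tags])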

Tip

The Analyze API can do many different operations other than generate image tags. See the Image Analysis how-to guide for examples that showcase all of the available features.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • An Azure subscription - Create one for trial

  • Python 3.x

    • Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Once you have your Azure subscription, create a Vision resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.

    • You need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options like Azure Key Vault.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace your-key with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace your-endpoint with the endpoint for your resource.
setx VISION_KEY your-key
setx VISION_ENDPOINT your-endpoint

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Analyze image

  1. Install the client library.

    You can install the client library with:

    pip install --upgrade azure-cognitiveservices-vision-computervision
    

    Also install the Pillow library.

    pip install pillow
    
  2. Create a new Python application.

    Create a new Python file—quickstart-file.py, for example.

  3. Open quickstart-file.py in a text editor or IDE and paste in the following code.

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

from array import array
import os
from PIL import Image
import sys
import time

'''
Authenticate
Authenticates your credentials and creates a client.
'''
subscription_key = os.environ["VISION_KEY"]
endpoint = os.environ["VISION_ENDPOINT"]

computervision_client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(subscription_key))
'''
END - Authenticate
'''

'''
Quickstart variables
These variables are shared by several examples
'''
# Images used for the examples: Describe an image, Categorize an image, Tag an image, 
# Detect faces, Detect adult or racy content, Detect the color scheme, 
# Detect domain-specific content, Detect image types, Detect objects
images_folder = os.path.join(os.path.dirname(os.path.abspath(__file__)), "images")
remote_image_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/landmark.jpg"
'''
END - Quickstart variables
'''


'''
Tag an Image - remote
This example returns a tag (keyword) for each thing in the image.
'''
print("===== Tag an image - remote =====")
# Call API with remote image
tags_result_remote = computervision_client.tag_image(remote_image_url)

# Print results with confidence score
print("Tags in the remote image: ")
if (len(tags_result_remote.tags) == 0):
    print("No tags detected.")
else:
    for tag in tags_result_remote.tags:
        print("'{}' with confidence {:.2f}%".format(tag.name, tag.confidence * 100))
print()
'''
END - Tag an Image - remote
'''
print("End of Computer Vision quickstart.")

  4. Run the application with the python command on your quickstart file.

    python quickstart-file.py
    

Output

===== Tag an image - remote =====
Tags in the remote image:
'outdoor' with confidence 99.00%
'building' with confidence 98.81%
'sky' with confidence 98.21%
'stadium' with confidence 98.17%
'ancient rome' with confidence 96.16%
'ruins' with confidence 95.04%
'amphitheatre' with confidence 93.99%
'ancient roman architecture' with confidence 92.65%
'historic site' with confidence 89.55%
'ancient history' with confidence 89.54%
'history' with confidence 86.72%
'archaeological site' with confidence 84.41%
'travel' with confidence 65.85%
'large' with confidence 61.02%
'city' with confidence 56.57%

End of Computer Vision quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the Image Analysis client library and make basic image analysis calls. Next, learn more about the Analyze API features.

Use the Image Analysis client library for Java to analyze a remote image for tags, text description, faces, adult content, and more.

Tip

You can also analyze a local image. See the ComputerVision methods, such as AnalyzeImage. Or, see the sample code on GitHub for scenarios involving local images.
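
For example, a local-image variant using the analyzeImageInStream method might look like the following sketch. It assumes the compVisClient object and features list created later in this quickstart; the file path is a placeholder.

byte[] imgBytes = Files.readAllBytes(new File("sample.jpg").toPath());
ImageAnalysis analysis = compVisClient.computerVision().analyzeImageInStream()
        .withImage(imgBytes)
        .withVisualFeatures(featuresToExtractFromRemoteImage)
        .execute();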

Tip

The Analyze API can do many different operations other than generate image tags. See the Image Analysis how-to guide for examples that showcase all of the available features.

Reference documentation | Library source code | Artifact (Maven) | Samples

Prerequisites

  • An Azure subscription - Create one for trial
  • The current version of the Java Development Kit (JDK)
  • The Gradle build tool, or another dependency manager.
  • Once you have your Azure subscription, create a Vision resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options like Azure Key Vault.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace your-key with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace your-endpoint with the endpoint for your resource.
setx VISION_KEY your-key
setx VISION_ENDPOINT your-endpoint

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Analyze image

  1. Create a new Gradle project.

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

    mkdir myapp && cd myapp
    

    Run the gradle init command from your working directory. This command creates essential build files for Gradle, including build.gradle.kts, which Gradle uses to build and configure your application.

    gradle init --type basic
    

    When prompted to choose a DSL, select Kotlin.

  2. Install the client library.

    This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the Maven Central Repository.

    Locate build.gradle.kts and open it with your preferred IDE or text editor. Then copy in the following build configuration. This configuration defines the project as a Java application whose entry point is the class ImageAnalysisQuickstart. It imports the Azure AI Vision library.

    plugins {
        java
        application
    }
    application { 
        mainClass.set("ImageAnalysisQuickstart")
    }
    repositories {
        mavenCentral()
    }
    dependencies {
        implementation(group = "com.microsoft.azure.cognitiveservices", name = "azure-cognitiveservices-computervision", version = "1.0.9-beta")
    }
    
  3. Create a Java file.

    From your working directory, run the following command to create a project source folder:

    mkdir -p src/main/java
    

    Navigate to the new folder and create a file called ImageAnalysisQuickstart.java.

  4. Open ImageAnalysisQuickstart.java in your preferred editor or IDE and paste in the following code.

import com.microsoft.azure.cognitiveservices.vision.computervision.*;
import com.microsoft.azure.cognitiveservices.vision.computervision.implementation.ComputerVisionImpl;
import com.microsoft.azure.cognitiveservices.vision.computervision.models.*;

import java.io.*;
import java.nio.file.Files;

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class ImageAnalysisQuickstart {

    // Use environment variables
    static String key = System.getenv("VISION_KEY");
    static String endpoint = System.getenv("VISION_ENDPOINT");

    public static void main(String[] args) {
        
        System.out.println("\nAzure Cognitive Services Computer Vision - Java Quickstart Sample");

        // Create an authenticated Computer Vision client.
        ComputerVisionClient compVisClient = Authenticate(key, endpoint); 

        // Analyze a remote image
        AnalyzeRemoteImage(compVisClient);

    }

    public static ComputerVisionClient Authenticate(String key, String endpoint){
        return ComputerVisionManager.authenticate(key).withEndpoint(endpoint);
    }


    public static void AnalyzeRemoteImage(ComputerVisionClient compVisClient) {
        /*
         * Analyze an image from a URL:
         *
         * Set a string variable equal to the path of a remote image.
         */
        String pathToRemoteImage = "https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/ComputerVision/Images/faces.jpg";

        // This list defines the features to be extracted from the image.
        List<VisualFeatureTypes> featuresToExtractFromRemoteImage = new ArrayList<>();
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.TAGS);

        System.out.println("\n\nAnalyzing an image from a URL ...");

        try {
            // Call the Computer Vision service and tell it to analyze the loaded image.
            ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
                    .withVisualFeatures(featuresToExtractFromRemoteImage).execute();


            // Display image tags and confidence values.
            System.out.println("\nTags: ");
            for (ImageTag tag : analysis.tags()) {
                System.out.printf("\'%s\' with confidence %f\n", tag.name(), tag.confidence());
            }
        }

        catch (Exception e) {
            System.out.println(e.getMessage());
            e.printStackTrace();
        }
    }
    // END - Analyze an image from a URL.

}

  5. Navigate back to the project root folder, and build the app with:

    gradle build
    

    Then, run it with the gradle run command:

    gradle run
    

Output

Azure Cognitive Services Computer Vision - Java Quickstart Sample

Analyzing an image from a URL ...

Tags:
'person' with confidence 0.998895
'human face' with confidence 0.997437
'smile' with confidence 0.991973
'outdoor' with confidence 0.985962
'happy' with confidence 0.969785
'clothing' with confidence 0.961570
'friendship' with confidence 0.946441
'tree' with confidence 0.917331
'female person' with confidence 0.890976
'girl' with confidence 0.888741
'social group' with confidence 0.872044
'posing' with confidence 0.865493
'adolescent' with confidence 0.857371
'love' with confidence 0.852553
'laugh' with confidence 0.850097
'people' with confidence 0.849922
'lady' with confidence 0.844540
'woman' with confidence 0.818172
'group' with confidence 0.792975
'wedding' with confidence 0.615252
'dress' with confidence 0.517169

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the Image Analysis client library and make basic image analysis calls. Next, learn more about the Analyze API features.

Use the Image Analysis client library for JavaScript to analyze a remote image for content tags.

Tip

You can also analyze a local image. See the ComputerVisionClient methods, such as describeImageInStream. Or, see the sample code on GitHub for scenarios involving local images.
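
For example, a local-image variant of the tagging call using analyzeImageInStream might look like the following sketch. It assumes the computerVisionClient object created later in this quickstart; the file path is a placeholder.

const { createReadStream } = require('fs');
const localTags = (await computerVisionClient.analyzeImageInStream(
  () => createReadStream('sample.jpg'), { visualFeatures: ['Tags'] })).tags;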

Tip

The Analyze API can do many different operations other than generate image tags. See the Image Analysis how-to guide for examples that showcase all of the available features.

Reference documentation | Library source code | Package (npm) | Samples

Prerequisites

  • An Azure subscription - Create one for trial
  • The current version of Node.js
  • Once you have your Azure subscription, create a Vision resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options like Azure Key Vault.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace your-key with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace your-endpoint with the endpoint for your resource.
setx VISION_KEY your-key
setx VISION_ENDPOINT your-endpoint

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Analyze image

  1. Create a new Node.js application

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

    mkdir myapp && cd myapp
    

    Run the npm init command to create a node application with a package.json file.

    npm init
    

    Install the client library

    Install the @azure/cognitiveservices-computervision npm package (it also pulls in the @azure/ms-rest-js dependency used below for authentication):

    npm install @azure/cognitiveservices-computervision
    

    Also install the async module:

    npm install async
    

    Your app's package.json file will be updated with the dependencies.

    Create a new file, index.js.

  2. Open index.js in a text editor and paste in the following code.

'use strict';

const async = require('async');
const ComputerVisionClient = require('@azure/cognitiveservices-computervision').ComputerVisionClient;
const ApiKeyCredentials = require('@azure/ms-rest-js').ApiKeyCredentials;

/**
 * AUTHENTICATE
 * This single client is used for all examples.
 */
const key = process.env.VISION_KEY;
const endpoint = process.env.VISION_ENDPOINT;


const computerVisionClient = new ComputerVisionClient(
  new ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } }), endpoint);
/**
 * END - Authenticate
 */


function computerVision() {
  async.series([
    async function () {

      /**
       * DETECT TAGS
       * Detects tags for an image, returning each detected
       * object or concept along with a confidence score.
       */
      console.log('-------------------------------------------------');
      console.log('DETECT TAGS');
      console.log();

      // Image of a dog.
      const tagsURL = 'https://moderatorsampleimages.blob.core.chinacloudapi.cn/samples/sample16.png';

      // Analyze URL image
      console.log('Analyzing tags in image...', tagsURL.split('/').pop());
      const tags = (await computerVisionClient.analyzeImage(tagsURL, { visualFeatures: ['Tags'] })).tags;
      console.log(`Tags: ${formatTags(tags)}`);

      // Format tags for display
      function formatTags(tags) {
        return tags.map(tag => (`${tag.name} (${tag.confidence.toFixed(2)})`)).join(', ');
      }
      /**
       * END - Detect Tags
       */
      console.log();
      console.log('-------------------------------------------------');
      console.log('End of quickstart.');

    },
    function () {
      return new Promise((resolve) => {
        resolve();
      })
    }
  ], (err) => {
    if (err) throw err;
  });
}

computerVision();

  3. Run the application with the node command on your quickstart file.

    node index.js
    

Output

-------------------------------------------------
DETECT TAGS

Analyzing tags in image... sample16.png
Tags: grass (1.00), dog (0.99), mammal (0.99), animal (0.99), dog breed (0.99), pet (0.97), outdoor (0.97), companion dog (0.91), small greek domestic dog (0.90), golden retriever (0.89), labrador retriever (0.87), puppy (0.87), ancient dog breeds (0.85), field (0.80), retriever (0.68), brown (0.66)

-------------------------------------------------
End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the Image Analysis client library and make basic image analysis calls. Next, learn more about the Analyze API features.

Use the Image Analysis REST API to analyze an image for tags.

Tip

The Analyze API can do many different operations other than generate image tags. See the Image Analysis how-to guide for examples that showcase all of the available features.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. See the GitHub samples for examples in C#, Python, Java, and JavaScript.
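
For instance, a rough Python equivalent of the cURL command below (a sketch using the requests library; the key and endpoint placeholders work the same way as in the cURL command) looks like this:

import requests

# Replace with your own key and endpoint, as in the cURL command.
key = "<subscriptionKey>"
endpoint = "https://chinaeast2.api.cognitive.azure.cn"

response = requests.post(
    endpoint + "/vision/v3.2/analyze",
    params={"visualFeatures": "Tags"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png"},
)
response.raise_for_status()
for tag in response.json()["tags"]:
    print(tag["name"], tag["confidence"])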

Prerequisites

  • An Azure subscription - Create one for trial
  • Once you have your Azure subscription, create a Vision resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Azure AI Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • cURL installed

Analyze an image

To analyze an image for various visual features, follow these steps:

  1. Copy the following command into a text editor.

    curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://chinaeast2.api.cognitive.azure.cn/vision/v3.2/analyze?visualFeatures=Tags" -d "{'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'}"
    
  2. Make the following changes in the command where needed:

    1. Replace the value of <subscriptionKey> with your key.
    2. Replace the first part of the request URL (chinaeast2) with the text in your own endpoint URL.

      Note

      New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of regional endpoints, see Custom subdomain names for Azure AI services.

    3. Optionally, change the image URL in the request body (https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png) to the URL of a different image to be analyzed.
  3. Open a command prompt window.

  4. Paste your edited curl command from the text editor into the command prompt window, and then run the command.

Examine the response

A successful response is returned in JSON and is displayed in the command prompt window, similar to the following example:

{
   "tags":[
      {
         "name":"text",
         "confidence":0.9992657899856567
      },
      {
         "name":"post-it note",
         "confidence":0.9879657626152039
      },
      {
         "name":"handwriting",
         "confidence":0.9730165004730225
      },
      {
         "name":"rectangle",
         "confidence":0.8658561706542969
      },
      {
         "name":"paper product",
         "confidence":0.8561884760856628
      },
      {
         "name":"purple",
         "confidence":0.5961999297142029
      }
   ],
   "requestId":"2788adfc-8cfb-43a5-8fd6-b3a9ced35db2",
   "metadata":{
      "height":945,
      "width":1000,
      "format":"Jpeg"
   },
   "modelVersion":"2021-05-01"
}

Next steps

In this quickstart, you learned how to make basic image analysis calls using the REST API. Next, learn more about the Analyze API features.