This article demonstrates how to call the Image Analysis version 3.2 API to return information about an image's visual features. It also demonstrates how to parse the returned information by using the client SDKs or REST API.
This guide assumes you've already created a Vision resource and obtained a key and an endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. For details on these steps, see the Image Analysis quickstart.
Submit data to the service
The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
When analyzing a remote image, you specify the image's URL by formatting the request body like this: {"url":"http://example.com/images/test.jpg"}.
To analyze a local image, put the binary image data in the HTTP request body.
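For example, here's a minimal REST sketch in Python using the requests library that submits an image both ways. The endpoint, key, and local file name are placeholder assumptions, not values from this guide:

```python
import requests

# Placeholders: substitute your own resource's endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
analyze_url = endpoint + "/vision/v3.2/analyze"
headers = {"Ocp-Apim-Subscription-Key": key}
params = {"visualFeatures": "Tags"}

# Remote image: the URL goes in a JSON request body.
response = requests.post(
    analyze_url,
    headers=headers,
    params=params,
    json={"url": "http://example.com/images/test.jpg"},
)
print(response.json())

# Local image: the binary image data goes in the request body instead.
with open("test.jpg", "rb") as image_file:  # hypothetical local file
    response = requests.post(
        analyze_url,
        headers={**headers, "Content-Type": "application/octet-stream"},
        params=params,
        data=image_file.read(),
    )
print(response.json())
```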
In your main class, save a reference to the URL of the image you want to analyze.
        // URL image used for analyzing an image (image of puppy)
        private const string ANALYZE_URL_IMAGE = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/refs/heads/master/ComputerVision/Images/dog.jpg";
To analyze a local image, see the ComputerVisionClient methods, such as AnalyzeImageInStreamAsync. Or, see the sample code on GitHub for scenarios involving local images.
In your main class, save a reference to the URL of the image you want to analyze.
        String pathToRemoteImage = "https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/ComputerVision/Images/faces.jpg";
To analyze a local image, see the ComputerVision methods, such as AnalyzeImage. Or, see the sample code on GitHub for scenarios involving local images.
In your main function, save a reference to the URL of the image you want to analyze.
      const describeURL = 'https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/celebrities.jpg';
To analyze a local image, see the ComputerVisionClient methods, such as describeImageInStream. Or, see the sample code on GitHub for scenarios involving local images.
Save a reference to the URL of the image you want to analyze.
remote_image_url = "https://moderatorsampleimages.blob.core.chinacloudapi.cn/samples/sample16.png"
To analyze a local image, see the ComputerVisionClientOperationsMixin methods, such as analyze_image_in_stream. Or, see the sample code on GitHub for scenarios involving local images.
Determine how to process the data
Select visual features
The Analyze API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the Azure AI Vision overview for a description of each feature. The examples in the following sections add all of the available visual features, but for practical usage you'll likely only need one or two.
You can specify which features you want to use by setting the URL query parameters of the Analyze API. A parameter can have multiple values, separated by commas. Each feature you specify requires more computation time, so only specify what you need.

| URL parameter | Value | Description |
|---|---|---|
| features | Read | Reads the visible text in the image and outputs it as structured JSON data |
| features | Description | Describes the image content with a complete sentence in supported languages |
| features | SmartCrops | Finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest |
| features | Objects | Detects various objects within an image, including the approximate location. The Objects argument is only available in English |
| features | Tags | Tags the image with a detailed list of words related to the image content |

A populated URL might look like this:
<endpoint>/vision/v3.2/analyze?visualFeatures=Tags
Define a new method for image analysis. Add the following code, which specifies the visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
        /* 
         * ANALYZE IMAGE - URL IMAGE
         * Analyze URL image. Extracts captions, categories, tags, objects, faces, racy/adult/gory content,
         * brands, celebrities, landmarks, color scheme, and image types.
         */
        public static async Task AnalyzeImageUrl(ComputerVisionClient client, string imageUrl)
        {
            Console.WriteLine("----------------------------------------------------------");
            Console.WriteLine("ANALYZE IMAGE - URL");
            Console.WriteLine();
            // Creating a list that defines the features to be extracted from the image. 
            List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
            {
                VisualFeatureTypes.Categories, VisualFeatureTypes.Description,
                VisualFeatureTypes.Faces, VisualFeatureTypes.ImageType,
                VisualFeatureTypes.Tags, VisualFeatureTypes.Adult,
                VisualFeatureTypes.Color, VisualFeatureTypes.Brands,
                VisualFeatureTypes.Objects
            };
Specify which visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
        // This list defines the features to be extracted from the image.
        List<VisualFeatureTypes> featuresToExtractFromRemoteImage = new ArrayList<>();
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.DESCRIPTION);
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.CATEGORIES);
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.TAGS);
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.FACES);
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.ADULT);
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.COLOR);
        featuresToExtractFromRemoteImage.add(VisualFeatureTypes.IMAGE_TYPE);
Specify which visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
      // Get the visual feature for analysis
      const features = ['Categories','Brands','Adult','Color','Description','Faces','ImageType','Objects','Tags'];
      const domainDetails = ['Celebrities','Landmarks'];
Specify which visual features you'd like to extract in your analysis. See the VisualFeatureTypes enum for a complete list.
print("===== Analyze an image - remote =====")
# Select the visual feature(s) you want.
remote_image_features = [VisualFeatureTypes.categories,VisualFeatureTypes.brands,VisualFeatureTypes.adult,VisualFeatureTypes.color,VisualFeatureTypes.description,VisualFeatureTypes.faces,VisualFeatureTypes.image_type,VisualFeatureTypes.objects,VisualFeatureTypes.tags]
remote_image_details = [Details.celebrities,Details.landmarks]
Specify languages
You can also specify the language of the returned data.
The following URL query parameter specifies the language. The default value is en.

| URL parameter | Value | Description |
|---|---|---|
| language | en | English |
| language | es | Spanish |
| language | ja | Japanese |
| language | pt | Portuguese |
| language | zh | Simplified Chinese |

A populated URL might look like this:
<endpoint>/vision/v3.2/analyze?visualFeatures=Tags&language=en
Use the language parameter of your AnalyzeImageAsync call to specify a language.

| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |

A method call that specifies a language might look like the following.
ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features, language: "en");
Use the AnalyzeImageOptionalParameter input in your Analyze call to specify a language.

| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |

A method call that specifies a language might look like the following.
ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
    .withVisualFeatures(featuresToExtractFromRemoteImage)
    .language("en")
    .execute();
Use the language property of the ComputerVisionClientAnalyzeImageOptionalParams input in your Analyze call to specify a language.

| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |

A method call that specifies a language might look like the following.
const result = (await computerVisionClient.analyzeImage(imageURL,{visualFeatures: features, language: 'en'}));
Use the language parameter of your analyze_image call to specify a language.

| Language | Value |
|---|---|
| English | en |
| Spanish | es |
| Japanese | ja |
| Portuguese | pt |
| Simplified Chinese | zh |

A method call that specifies a language might look like the following.
results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features, remote_image_details, 'en')
Get results from the service
This section shows you how to parse the results of the API call. It includes the API call itself.
Note
Scoped API calls
Some features in Image Analysis can be called either directly or through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to <endpoint>/vision/v3.2/tag (or to the corresponding method in the SDK). See the reference documentation for other features that can be called separately.
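For illustration, a scoped tag-only request might look like this minimal sketch (Python with the requests library; the endpoint and key are placeholders):

```python
import requests

# Placeholders: substitute your own resource's endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

# Scoped call: only image tagging runs, rather than the full Analyze pipeline.
response = requests.post(
    endpoint + "/vision/v3.2/tag",
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "http://example.com/images/test.jpg"},
)
print(response.json())
```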
The service returns a 200 HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
{
    "metadata":
    {
        "width": 300,
        "height": 200
    },
    "tagsResult":
    {
        "values":
        [
            {
                "name": "grass",
                "confidence": 0.9960499405860901
            },
            {
                "name": "outdoor",
                "confidence": 0.9956876635551453
            },
            {
                "name": "building",
                "confidence": 0.9893627166748047
            },
            {
                "name": "property",
                "confidence": 0.9853052496910095
            },
            {
                "name": "plant",
                "confidence": 0.9791355729103088
            }
        ]
    }
}
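A minimal sketch of pulling the tag names and confidence scores out of a response shaped like the example above (the response_body variable is assumed to hold the JSON string returned by the service):

```python
import json

data = json.loads(response_body)  # response_body: JSON string from the service
for tag in data["tagsResult"]["values"]:
    print("{}: {:.2f}".format(tag["name"], tag["confidence"]))
```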
Error codes
See the following list of possible errors and their causes:
- 400
  - InvalidImageUrl - Image URL is badly formatted or not accessible
  - InvalidImageFormat - Input data is not a valid image
  - InvalidImageSize - Input image is too large
  - NotSupportedVisualFeature - Specified feature type isn't valid
  - NotSupportedImage - Unsupported image, for example child pornography
  - InvalidDetails - Unsupported detail parameter value
  - NotSupportedLanguage - The requested operation isn't supported in the language specified
  - BadArgument - More details are provided in the error message
- 415 - Unsupported media type. The Content-Type isn't in the allowed types:
  - For an image URL, Content-Type should be application/json
  - For binary image data, Content-Type should be application/octet-stream or multipart/form-data
- 500
  - FailedToProcess
  - Timeout - Image processing timed out
  - InternalServerError
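A minimal sketch of surfacing these errors from a REST call follows. The error body shape used here, an error object with code and message fields, is an assumption based on the codes listed above:

```python
import requests

def check_analyze_response(response: requests.Response) -> dict:
    """Return the parsed body on success; raise a readable error otherwise."""
    if response.status_code == 200:
        return response.json()
    # Assumed error body shape: {"error": {"code": "...", "message": "..."}}
    error = response.json().get("error", {})
    raise RuntimeError(
        "HTTP {}: {} - {}".format(
            response.status_code, error.get("code"), error.get("message")
        )
    )
```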
The following code calls the Image Analysis API and prints the results to the console.
            // Analyze the URL image 
            ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features);
            // Summarizes the image content.
            Console.WriteLine("Summary:");
            foreach (var caption in results.Description.Captions)
            {
                Console.WriteLine($"{caption.Text} with confidence {caption.Confidence}");
            }
            Console.WriteLine();
            // Display categories the image is divided into.
            Console.WriteLine("Categories:");
            foreach (var category in results.Categories)
            {
                Console.WriteLine($"{category.Name} with confidence {category.Score}");
            }
            Console.WriteLine();
            // Image tags and their confidence score
            Console.WriteLine("Tags:");
            foreach (var tag in results.Tags)
            {
                Console.WriteLine($"{tag.Name} {tag.Confidence}");
            }
            Console.WriteLine();
            // Objects
            Console.WriteLine("Objects:");
            foreach (var obj in results.Objects)
            {
                Console.WriteLine($"{obj.ObjectProperty} with confidence {obj.Confidence} at location {obj.Rectangle.X}, " +
                  $"{obj.Rectangle.X + obj.Rectangle.W}, {obj.Rectangle.Y}, {obj.Rectangle.Y + obj.Rectangle.H}");
            }
            Console.WriteLine();
            // Faces
            Console.WriteLine("Faces:");
            foreach (var face in results.Faces)
            {
                Console.WriteLine($"A {face.Gender} of age {face.Age} at location {face.FaceRectangle.Left}, " +
                  $"{face.FaceRectangle.Left}, {face.FaceRectangle.Top + face.FaceRectangle.Width}, " +
                  $"{face.FaceRectangle.Top + face.FaceRectangle.Height}");
            }
            Console.WriteLine();
            // Adult or racy content, if any.
            Console.WriteLine("Adult:");
            Console.WriteLine($"Has adult content: {results.Adult.IsAdultContent} with confidence {results.Adult.AdultScore}");
            Console.WriteLine($"Has racy content: {results.Adult.IsRacyContent} with confidence {results.Adult.RacyScore}");
            Console.WriteLine($"Has gory content: {results.Adult.IsGoryContent} with confidence {results.Adult.GoreScore}");
            Console.WriteLine();
            // Well-known (or custom, if set) brands.
            Console.WriteLine("Brands:");
            foreach (var brand in results.Brands)
            {
                Console.WriteLine($"Logo of {brand.Name} with confidence {brand.Confidence} at location {brand.Rectangle.X}, " +
                  $"{brand.Rectangle.X + brand.Rectangle.W}, {brand.Rectangle.Y}, {brand.Rectangle.Y + brand.Rectangle.H}");
            }
            Console.WriteLine();
            // Celebrities in image, if any.
            Console.WriteLine("Celebrities:");
            foreach (var category in results.Categories)
            {
                if (category.Detail?.Celebrities != null)
                {
                    foreach (var celeb in category.Detail.Celebrities)
                    {
                        Console.WriteLine($"{celeb.Name} with confidence {celeb.Confidence} at location {celeb.FaceRectangle.Left}, " +
                          $"{celeb.FaceRectangle.Top}, {celeb.FaceRectangle.Height}, {celeb.FaceRectangle.Width}");
                    }
                }
            }
            Console.WriteLine();
            // Popular landmarks in image, if any.
            Console.WriteLine("Landmarks:");
            foreach (var category in results.Categories)
            {
                if (category.Detail?.Landmarks != null)
                {
                    foreach (var landmark in category.Detail.Landmarks)
                    {
                        Console.WriteLine($"{landmark.Name} with confidence {landmark.Confidence}");
                    }
                }
            }
            Console.WriteLine();
            // Identifies the color scheme.
            Console.WriteLine("Color Scheme:");
            Console.WriteLine("Is black and white?: " + results.Color.IsBWImg);
            Console.WriteLine("Accent color: " + results.Color.AccentColor);
            Console.WriteLine("Dominant background color: " + results.Color.DominantColorBackground);
            Console.WriteLine("Dominant foreground color: " + results.Color.DominantColorForeground);
            Console.WriteLine("Dominant colors: " + string.Join(",", results.Color.DominantColors));
            Console.WriteLine();
            // Detects the image types.
            Console.WriteLine("Image Type:");
            Console.WriteLine("Clip Art Type: " + results.ImageType.ClipArtType);
            Console.WriteLine("Line Drawing Type: " + results.ImageType.LineDrawingType);
            Console.WriteLine();
The following code calls the Image Analysis API and prints the results to the console.
            // Call the Computer Vision service and tell it to analyze the loaded image.
            ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
                    .withVisualFeatures(featuresToExtractFromRemoteImage).execute();
            // Display image captions and confidence values.
            System.out.println("\nCaptions: ");
            for (ImageCaption caption : analysis.description().captions()) {
                System.out.printf("\'%s\' with confidence %f\n", caption.text(), caption.confidence());
            }
            // Display image category names and confidence values.
            System.out.println("\nCategories: ");
            for (Category category : analysis.categories()) {
                System.out.printf("\'%s\' with confidence %f\n", category.name(), category.score());
            }
            // Display image tags and confidence values.
            System.out.println("\nTags: ");
            for (ImageTag tag : analysis.tags()) {
                System.out.printf("\'%s\' with confidence %f\n", tag.name(), tag.confidence());
            }
            // Display any faces found in the image and their location.
            System.out.println("\nFaces: ");
            for (FaceDescription face : analysis.faces()) {
                System.out.printf("\'%s\' of age %d at location (%d, %d), (%d, %d)\n", face.gender(), face.age(),
                        face.faceRectangle().left(), face.faceRectangle().top(),
                        face.faceRectangle().left() + face.faceRectangle().width(),
                        face.faceRectangle().top() + face.faceRectangle().height());
            }
            // Display whether any adult or racy content was detected and the confidence
            // values.
            System.out.println("\nAdult: ");
            System.out.printf("Is adult content: %b with confidence %f\n", analysis.adult().isAdultContent(),
                    analysis.adult().adultScore());
            System.out.printf("Has racy content: %b with confidence %f\n", analysis.adult().isRacyContent(),
                    analysis.adult().racyScore());
            // Display the image color scheme.
            System.out.println("\nColor scheme: ");
            System.out.println("Is black and white: " + analysis.color().isBWImg());
            System.out.println("Accent color: " + analysis.color().accentColor());
            System.out.println("Dominant background color: " + analysis.color().dominantColorBackground());
            System.out.println("Dominant foreground color: " + analysis.color().dominantColorForeground());
            System.out.println("Dominant colors: " + String.join(", ", analysis.color().dominantColors()));
            // Display any celebrities detected in the image and their locations.
            System.out.println("\nCelebrities: ");
            for (Category category : analysis.categories()) {
                if (category.detail() != null && category.detail().celebrities() != null) {
                    for (CelebritiesModel celeb : category.detail().celebrities()) {
                        System.out.printf("\'%s\' with confidence %f at location (%d, %d), (%d, %d)\n", celeb.name(),
                                celeb.confidence(), celeb.faceRectangle().left(), celeb.faceRectangle().top(),
                                celeb.faceRectangle().left() + celeb.faceRectangle().width(),
                                celeb.faceRectangle().top() + celeb.faceRectangle().height());
                    }
                }
            }
            // Display any landmarks detected in the image and their locations.
            System.out.println("\nLandmarks: ");
            for (Category category : analysis.categories()) {
                if (category.detail() != null && category.detail().landmarks() != null) {
                    for (LandmarksModel landmark : category.detail().landmarks()) {
                        System.out.printf("\'%s\' with confidence %f\n", landmark.name(), landmark.confidence());
                    }
                }
            }
            // Display what type of clip art or line drawing the image is.
            System.out.println("\nImage type:");
            System.out.println("Clip art type: " + analysis.imageType().clipArtType());
            System.out.println("Line drawing type: " + analysis.imageType().lineDrawingType());
The following code calls the Image Analysis API and prints the results to the console.
      const result = (await computerVisionClient.analyzeImage(facesImageURL, { visualFeatures: features, details: domainDetails }));
      // Detect faces
      // Print the bounding box, gender, and age from the faces.
      const faces = result.faces
      if (faces.length) {
        console.log(`${faces.length} face${faces.length == 1 ? '' : 's'} found:`);
        for (const face of faces) {
          console.log(`    Gender: ${face.gender}`.padEnd(20)
            + ` Age: ${face.age}`.padEnd(10) + `at ${formatRectFaces(face.faceRectangle)}`);
        }
      } else { console.log('No faces found.'); }
      // Formats the bounding box
      function formatRectFaces(rect) {
        return `top=${rect.top}`.padEnd(10) + `left=${rect.left}`.padEnd(10) + `bottom=${rect.top + rect.height}`.padEnd(12)
          + `right=${rect.left + rect.width}`.padEnd(10) + `(${rect.width}x${rect.height})`;
      }
      // Detect Objects
      const objects = result.objects;
      console.log();
      // Print objects bounding box and confidence
      if (objects.length) {
        console.log(`${objects.length} object${objects.length == 1 ? '' : 's'} found:`);
        for (const obj of objects) { console.log(`    ${obj.object} (${obj.confidence.toFixed(2)}) at ${formatRectObjects(obj.rectangle)}`); }
      } else { console.log('No objects found.'); }
      // Formats the bounding box
      function formatRectObjects(rect) {
        return `top=${rect.y}`.padEnd(10) + `left=${rect.x}`.padEnd(10) + `bottom=${rect.y + rect.h}`.padEnd(12)
          + `right=${rect.x + rect.w}`.padEnd(10) + `(${rect.w}x${rect.h})`;
      }
      console.log();
      // Detect tags
      const tags = result.tags;
      console.log(`Tags: ${formatTags(tags)}`);
      // Format tags for display
      function formatTags(tags) {
        return tags.map(tag => (`${tag.name} (${tag.confidence.toFixed(2)})`)).join(', ');
      }
      console.log();
      // Detect image type
      const types = result.imageType;
      console.log(`Image appears to be ${describeType(types)}`);
      function describeType(imageType) {
        if (imageType.clipArtType && imageType.clipArtType > imageType.lineDrawingType) return 'clip art';
        if (imageType.lineDrawingType && imageType.clipArtType < imageType.lineDrawingType) return 'a line drawing';
        return 'a photograph';
      }
      console.log();
      // Detect Category
      const categories = result.categories;
      console.log(`Categories: ${formatCategories(categories)}`);
      // Formats the image categories
      function formatCategories(categories) {
        categories.sort((a, b) => b.score - a.score);
        return categories.map(cat => `${cat.name} (${cat.score.toFixed(2)})`).join(', ');
      }
      console.log();
      // Detect Brands
      const brands = result.brands;
      // Print the brands found
      if (brands.length) {
        console.log(`${brands.length} brand${brands.length != 1 ? 's' : ''} found:`);
        for (const brand of brands) {
          console.log(`    ${brand.name} (${brand.confidence.toFixed(2)} confidence)`);
        }
      } else { console.log(`No brands found.`); }
      console.log();
      // Detect Colors
      const color = result.color;
      printColorScheme(color);
      // Print a detected color scheme
      function printColorScheme(colors) {
        console.log(`Image is in ${colors.isBwImg ? 'black and white' : 'color'}`);
        console.log(`Dominant colors: ${colors.dominantColors.join(', ')}`);
        console.log(`Dominant foreground color: ${colors.dominantColorForeground}`);
        console.log(`Dominant background color: ${colors.dominantColorBackground}`);
        console.log(`Suggested accent color: #${colors.accentColor}`);
      }
      console.log();
      // Detect landmarks
      const domain = result.landmarks;
      // Prints domain-specific, recognized objects
      if (domain.length) {
        console.log(`${domain.length} ${domain.length == 1 ? 'landmark' : 'landmarks'} found:`);
        for (const obj of domain) {
          console.log(`    ${obj.name}`.padEnd(20) + `(${obj.confidence.toFixed(2)} confidence)`.padEnd(20) + `${formatRectDomain(obj.faceRectangle)}`);
        }
      } else {
        console.log('No landmarks found.');
      }
      // Formats bounding box
      function formatRectDomain(rect) {
        if (!rect) return '';
        return `top=${rect.top}`.padEnd(10) + `left=${rect.left}`.padEnd(10) + `bottom=${rect.top + rect.height}`.padEnd(12) +
          `right=${rect.left + rect.width}`.padEnd(10) + `(${rect.width}x${rect.height})`;
      }
 
      console.log();
      // Detect Adult content
      // Function to confirm racy or not
      const isIt = flag => flag ? 'is' : "isn't";
      const adult = result.adult;
      console.log(`This probably ${isIt(adult.isAdultContent)} adult content (${adult.adultScore.toFixed(4)} score)`);
      console.log(`This probably ${isIt(adult.isRacyContent)} racy content (${adult.racyScore.toFixed(4)} score)`);
      console.log();
The following code calls the Image Analysis API and prints the results to the console.
# Call API with URL and features
results_remote = computervision_client.analyze_image(remote_image_url, remote_image_features, remote_image_details)
# Print results with confidence score
print("Categories from remote image: ")
if (len(results_remote.categories) == 0):
    print("No categories detected.")
else:
    for category in results_remote.categories:
        print("'{}' with confidence {:.2f}%".format(category.name, category.score * 100))
print()
# Detect faces
# Print the results with gender, age, and bounding box
print("Faces in the remote image: ")
if (len(results_remote.faces) == 0):
    print("No faces detected.")
else:
    for face in results_remote.faces:
        print("'{}' of age {} at location {}, {}, {}, {}".format(face.gender, face.age, \
        face.face_rectangle.left, face.face_rectangle.top, \
        face.face_rectangle.left + face.face_rectangle.width, \
        face.face_rectangle.top + face.face_rectangle.height))
# Adult content
# Print results with adult/racy score
print("Analyzing remote image for adult or racy content ... ")
print("Is adult content: {} with confidence {:.2f}".format(results_remote.adult.is_adult_content, results_remote.adult.adult_score * 100))
print("Has racy content: {} with confidence {:.2f}".format(results_remote.adult.is_racy_content, results_remote.adult.racy_score * 100))
print()
# Detect colors
# Print results of color scheme
print("Getting color scheme of the remote image: ")
print("Is black and white: {}".format(results_remote.color.is_bw_img))
print("Accent color: {}".format(results_remote.color.accent_color))
print("Dominant background color: {}".format(results_remote.color.dominant_color_background))
print("Dominant foreground color: {}".format(results_remote.color.dominant_color_foreground))
print("Dominant colors: {}".format(results_remote.color.dominant_colors))
print()
# Detect image type
# Prints type results with degree of accuracy
print("Type of remote image:")
if results_remote.image_type.clip_art_type == 0:
    print("Image is not clip art.")
elif results_remote.image_type.clip_art_type == 1:
    print("Image is ambiguously clip art.")
elif results_remote.image_type.clip_art_type == 2:
    print("Image is normal clip art.")
else:
    print("Image is good clip art.")
if results_remote.image_type.line_drawing_type == 0:
    print("Image is not a line drawing.")
else:
    print("Image is a line drawing")
# Detect brands
print("Detecting brands in remote image: ")
if len(results_remote.brands) == 0:
    print("No brands detected.")
else:
    for brand in results_remote.brands:
        print("'{}' brand detected with confidence {:.1f}% at location {}, {}, {}, {}".format( \
        brand.name, brand.confidence * 100, brand.rectangle.x, brand.rectangle.x + brand.rectangle.w, \
        brand.rectangle.y, brand.rectangle.y + brand.rectangle.h))
# Detect objects
# Print detected objects results with bounding boxes
print("Detecting objects in remote image:")
if len(results_remote.objects) == 0:
    print("No objects detected.")
else:
    for obj in results_remote.objects:
        print("object at location {}, {}, {}, {}".format( \
        obj.rectangle.x, obj.rectangle.x + obj.rectangle.w, \
        obj.rectangle.y, obj.rectangle.y + obj.rectangle.h))
# Describe image
# Get the captions (descriptions) from the response, with confidence level
print("Description of remote image: ")
if (len(results_remote.description.captions) == 0):
    print("No description detected.")
else:
    for caption in results_remote.description.captions:
        print("'{}' with confidence {:.2f}%".format(caption.text, caption.confidence * 100))
print()
# Return tags
# Print results with confidence score
print("Tags in the remote image: ")
if (len(results_remote.tags) == 0):
    print("No tags detected.")
else:
    for tag in results_remote.tags:
        print("'{}' with confidence {:.2f}%".format(tag.name, tag.confidence * 100))
# Detect celebrities
# Celebrities are reported under the detail of each category.
print("Celebrities in the remote image:")
remote_celebs = [celeb for category in results_remote.categories
                 if category.detail and category.detail.celebrities
                 for celeb in category.detail.celebrities]
if len(remote_celebs) == 0:
    print("No celebrities detected.")
else:
    for celeb in remote_celebs:
        print(celeb.name)
# Detect landmarks
# Landmarks are also reported under the detail of each category.
print("Landmarks in the remote image:")
remote_landmarks = [landmark for category in results_remote.categories
                    if category.detail and category.detail.landmarks
                    for landmark in category.detail.landmarks]
if len(remote_landmarks) == 0:
    print("No landmarks detected.")
else:
    for landmark in remote_landmarks:
        print(landmark.name)
Tip
When you use Azure AI Vision, you might encounter transient failures caused by rate limits enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see the Retry pattern in the Cloud Design Patterns guide, and the related Circuit Breaker pattern.
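As an illustration, here's a minimal retry sketch with exponential backoff around a REST call. The status codes retried here (429 and common 5xx codes) are typical transient cases; tune them to your scenario:

```python
import time
import requests

def post_with_retry(url, max_attempts=5, **kwargs):
    """Retry transient failures (429/5xx and network errors) with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            response = requests.post(url, **kwargs)
            if response.status_code not in (429, 500, 502, 503, 504):
                return response
        except requests.ConnectionError:
            pass  # transient network error; fall through to the backoff sleep
        time.sleep(2 ** attempt)  # back off: 1, 2, 4, 8... seconds
    raise RuntimeError("Request to {} failed after {} attempts".format(url, max_attempts))
```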
Related content
- Explore object detection concepts