Quickstart: Use the Text Analytics client library for Ruby

Get started with the Text Analytics client library. Follow these steps to install the package and try out the example code for basic tasks.

Use the Text Analytics client library to perform:

  • Sentiment analysis
  • Language detection
  • Entity recognition
  • Key phrase extraction

Note

This quickstart only applies to Text Analytics version 2.1. Currently, a v3 client library for Ruby is unavailable.

Library source code | Package (RubyGems) | Samples

Prerequisites

  • An Azure subscription - create a free trial subscription
  • The current version of Ruby
  • Once you have your Azure subscription, create a Text Analytics resource in the Azure portal to get your key and endpoint.
    • You will need the key and endpoint from the resource you create to connect your application to the Text Analytics API. You'll do this later in the quickstart.
    • You can use the free pricing tier to try the service, and upgrade later to a paid tier for production.

Setting up

Create a new Ruby application

In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it. Then create a file named Gemfile, and a Ruby file for your code.

mkdir myapp && cd myapp

In your Gemfile, add the following lines to add the client library as a dependency. Then run bundle install from the app directory to install it.

source 'https://rubygems.org'
gem 'azure_cognitiveservices_textanalytics', '~>0.17.3'

In your Ruby file, import the following packages. The complete sample file is shown below; the sections that follow explain each part.

# <includeStatement>
require 'azure_cognitiveservices_textanalytics'
include Azure::CognitiveServices::TextAnalytics::V2_1::Models
# </includeStatement>

class TextAnalyticsClient
  @textAnalyticsClient
  # <initialize> 
  def initialize(endpoint, key)
    credentials =
        MsRestAzure::CognitiveServicesCredentials.new(key)

    endpoint = String.new(endpoint)

    @textAnalyticsClient = Azure::TextAnalytics::Profiles::Latest::Client.new({
        credentials: credentials
    })
    @textAnalyticsClient.endpoint = endpoint
  end
  # </initialize>
  # <analyzeSentiment>
  def AnalyzeSentiment(inputDocuments)
    result = @textAnalyticsClient.sentiment(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== SENTIMENT ANALYSIS ====='
      result.documents.each do |document|
        puts "Document Id: #{document.id}: Sentiment Score: #{document.score}"
      end
    end
    puts ''
  end
  # </analyzeSentiment>
  # <detectLanguage>
  def DetectLanguage(inputDocuments)
    result = @textAnalyticsClient.detect_language(
        language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== LANGUAGE DETECTION ====='
      result.documents.each do |document|
        puts "Document ID: #{document.id} , Language: #{document.detected_languages[0].name}"
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </detectLanguage>
  # <recognizeEntities>
  def RecognizeEntities(inputDocuments)
    result = @textAnalyticsClient.entities(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== ENTITY RECOGNITION ====='
      result.documents.each do |document|
        puts "Document ID: #{document.id}"
          document.entities.each do |entity|
            puts "\tName: #{entity.name}, \tType: #{entity.type == nil ? "N/A": entity.type},\tSub-Type: #{entity.sub_type == nil ? "N/A": entity.sub_type}"
            entity.matches.each do |match|
              puts "\tOffset: #{match.offset}, \tLength: #{match.length},\tScore: #{match.entity_type_score}"
            end
            puts
          end
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </recognizeEntities>
  
  # <extractKeyPhrases>
  def ExtractKeyPhrases(inputDocuments)
    result = @textAnalyticsClient.key_phrases(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== KEY PHRASE EXTRACTION ====='
      result.documents.each do |document|
        puts "Document Id: #{document.id}"
        puts '  Key Phrases'
        document.key_phrases.each do |key_phrase|
          puts "    #{key_phrase}"
        end
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </extractKeyPhrases>
end

# <vars>
key_var = "TEXT_ANALYTICS_SUBSCRIPTION_KEY"
if (!ENV[key_var])
    raise "Please set/export the following environment variable: " + key_var
else
    subscription_key = ENV[key_var]
end

endpoint_var = "TEXT_ANALYTICS_ENDPOINT"
if (!ENV[endpoint_var])
    raise "Please set/export the following environment variable: " + endpoint_var
else
    endpoint = ENV[endpoint_var]
end
# </vars>

# <clientCreation>
client = TextAnalyticsClient.new(endpoint, subscription_key)
# </clientCreation>

# <sentimentCall>
def SentimentAnalysisExample(client)
  # The documents to be analyzed. Add the language of the document. The ID can be any value.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'en'
  input_1.text = 'I had the best day of my life.'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'en'
  input_2.text = 'This was a waste of my time. The speaker put me to sleep.'

  input_3 = MultiLanguageInput.new
  input_3.id = '3'
  input_3.language = 'es'
  input_3.text = 'No tengo dinero ni nada que dar...'

  input_4 = MultiLanguageInput.new
  input_4.id = '4'
  input_4.language = 'it'
  input_4.text = "L'hotel veneziano era meraviglioso. È un bellissimo pezzo di architettura."

  inputDocuments =  MultiLanguageBatchInput.new
  inputDocuments.documents = [input_1, input_2, input_3, input_4]

  client.AnalyzeSentiment(inputDocuments)
end
# </sentimentCall>
# <detectLanguageCall>
def DetectLanguageExample(client)
 # The documents to be analyzed.
 language_input_1 = LanguageInput.new
 language_input_1.id = '1'
 language_input_1.text = 'This is a document written in English.'

 language_input_2 = LanguageInput.new
 language_input_2.id = '2'
 language_input_2.text = 'Este es un document escrito en Español..'

 language_input_3 = LanguageInput.new
 language_input_3.id = '3'
 language_input_3.text = '这是一个用中文写的文件'

 language_batch_input = LanguageBatchInput.new
 language_batch_input.documents = [language_input_1, language_input_2, language_input_3]

 client.DetectLanguage(language_batch_input)
end
# </detectLanguageCall>
# <recognizeEntitiesCall>
def RecognizeEntitiesExample(client)
  # The documents to be analyzed.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'en'
  input_1.text = 'Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800.'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'es'
  input_2.text = 'La sede principal de Microsoft se encuentra en la ciudad de Redmond, a 21 kilómetros de Seattle.'

  multi_language_batch_input =  MultiLanguageBatchInput.new
  multi_language_batch_input.documents = [input_1, input_2]

  client.RecognizeEntities(multi_language_batch_input)
end
# </recognizeEntitiesCall>

# <keyPhrasesCall>
def KeyPhraseExtractionExample(client)
  # The documents to be analyzed.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'ja'
  input_1.text = '猫は幸せ'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'de'
  input_2.text = 'Fahrt nach Stuttgart und dann zum Hotel zu Fu.'

  input_3 = MultiLanguageInput.new
  input_3.id = '3'
  input_3.language = 'en'
  input_3.text = 'My cat is stiff as a rock.'

  input_4 = MultiLanguageInput.new
  input_4.id = '4'
  input_4.language = 'es'
  input_4.text = 'A mi me encanta el fútbol!'

  input_documents =  MultiLanguageBatchInput.new
  input_documents.documents = [input_1, input_2, input_3, input_4]

  client.ExtractKeyPhrases(input_documents)
end
# </keyPhrasesCall>
DetectLanguageExample(client)
SentimentAnalysisExample(client)
RecognizeEntitiesExample(client)
KeyPhraseExtractionExample(client)

Create variables for your resource's Azure endpoint and key.

Important

Go to the Azure portal. If the Text Analytics resource you created in the Prerequisites section deployed successfully, click the Go to Resource button under Next Steps. You can find your key and endpoint on the resource's Key and Endpoint page, under Resource Management.

Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials, such as Azure Key Vault.

subscription_key = '<paste-your-text-analytics-key-here>'
endpoint = '<paste-your-text-analytics-endpoint-here>'
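If you'd rather keep the key out of source entirely, you can read both values from environment variables, as the full sample in this quickstart does. The helper below is a hypothetical convenience, not part of the client library:

```ruby
# Fails fast with a clear message when a required environment variable is missing.
def required_env(name)
  ENV.fetch(name) { raise "Please set/export the following environment variable: #{name}" }
end

# Example usage (matches the variable names used in the full sample):
# subscription_key = required_env('TEXT_ANALYTICS_SUBSCRIPTION_KEY')
# endpoint         = required_env('TEXT_ANALYTICS_ENDPOINT')
```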

Object model

The Text Analytics client authenticates to Azure using your key. The client provides several methods for analyzing text, as a single string or as a batch.

Text is sent to the API as a list of documents, which are dictionary objects containing a combination of id, text, and language attributes, depending on the method used. The text attribute stores the text to be analyzed in the origin language, and the id can be any value.

The response object is a list containing the analysis information for each document.
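The shape of that payload can be sketched with plain Ruby hashes. This is only an illustration of the JSON the client sends on your behalf, not the library's own classes:

```ruby
require 'json'

# Each document carries an id, the text itself, and (for most methods) a language.
documents = [
  { 'id' => '1', 'language' => 'en', 'text' => 'I had the best day of my life.' },
  { 'id' => '2', 'language' => 'es', 'text' => 'No tengo dinero ni nada que dar...' }
]

# The batch wraps the document list under a single documents key.
batch = { 'documents' => documents }
puts JSON.generate(batch)
```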

Code examples

These code snippets show you how to do the following tasks with the Text Analytics client library for Ruby:

  • Authenticate the client
  • Sentiment analysis
  • Language detection
  • Entity recognition
  • Key phrase extraction

Authenticate the client

Create a class named TextAnalyticsClient.

class TextAnalyticsClient
  @textAnalyticsClient
  #...
end

In this class, create a function called initialize to authenticate the client using your key and endpoint.

  def initialize(endpoint, key)
    credentials =
        MsRestAzure::CognitiveServicesCredentials.new(key)

    endpoint = String.new(endpoint)

    @textAnalyticsClient = Azure::TextAnalytics::Profiles::Latest::Client.new({
        credentials: credentials
    })
    @textAnalyticsClient.endpoint = endpoint
  end

Outside of the class, use the client's new() function to instantiate it.

client = TextAnalyticsClient.new(endpoint, subscription_key)

Sentiment analysis

In the client object, create a function called AnalyzeSentiment() that takes a list of input documents that will be created later. Call the client's sentiment() function and get the result. Then iterate through the results and print each document's ID and sentiment score. A score closer to 0 indicates a negative sentiment, while a score closer to 1 indicates a positive sentiment.

  def AnalyzeSentiment(inputDocuments)
    result = @textAnalyticsClient.sentiment(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== SENTIMENT ANALYSIS ====='
      result.documents.each do |document|
        puts "Document Id: #{document.id}: Sentiment Score: #{document.score}"
      end
    end
    puts ''
  end
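To turn the raw score into a human-readable label, you could map score ranges to labels. The thresholds below are illustrative examples, not part of the Text Analytics API:

```ruby
# Maps a 0..1 sentiment score to a coarse label (cutoffs are arbitrary examples).
def sentiment_label(score)
  return 'positive' if score >= 0.6
  return 'negative' if score <= 0.4
  'neutral'
end

puts sentiment_label(0.95)
puts sentiment_label(0.12)
```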

Outside of the client function, create a new function called SentimentAnalysisExample() that takes the TextAnalyticsClient object created earlier. Create a list of MultiLanguageInput objects containing the documents you want to analyze. Each object contains id, language, and text attributes. The text attribute stores the text to be analyzed, language is the language of the document, and the id can be any value. Then call the client's AnalyzeSentiment() function.

# <sentimentCall>
def SentimentAnalysisExample(client)
  # The documents to be analyzed. Add the language of the document. The ID can be any value.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'en'
  input_1.text = 'I had the best day of my life.'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'en'
  input_2.text = 'This was a waste of my time. The speaker put me to sleep.'

  input_3 = MultiLanguageInput.new
  input_3.id = '3'
  input_3.language = 'es'
  input_3.text = 'No tengo dinero ni nada que dar...'

  input_4 = MultiLanguageInput.new
  input_4.id = '4'
  input_4.language = 'it'
  input_4.text = "L'hotel veneziano era meraviglioso. È un bellissimo pezzo di architettura."

  inputDocuments = MultiLanguageBatchInput.new
  inputDocuments.documents = [input_1, input_2, input_3, input_4]

  client.AnalyzeSentiment(inputDocuments)
end
# </sentimentCall>

Call the SentimentAnalysisExample() function.

SentimentAnalysisExample(client)

Output

===== SENTIMENT ANALYSIS =====
Document Id: 1: Sentiment Score: 0.87
Document Id: 2: Sentiment Score: 0.11
Document Id: 3: Sentiment Score: 0.44
Document Id: 4: Sentiment Score: 1.00
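
The sentiment score is a value between 0 and 1: scores close to 1 indicate positive sentiment, and scores close to 0 indicate negative sentiment. If you want to bucket the raw scores into labels, a small helper like the following works; the 0.4 and 0.6 cutoffs are illustrative choices, not part of the API:

```ruby
# Map a Text Analytics v2.1 sentiment score (0.0..1.0) to a coarse label.
# The 0.4 and 0.6 thresholds are arbitrary values chosen for illustration.
def sentiment_label(score)
  return 'negative' if score < 0.4
  return 'positive' if score > 0.6
  'neutral'
end

puts sentiment_label(0.87) # positive
puts sentiment_label(0.11) # negative
puts sentiment_label(0.44) # neutral
```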

Language detection

In the client class, create a function called DetectLanguage() that takes a list of input documents that will be created later. Call the client's detect_language() function and get the result. Then iterate through the results, and print each document's ID and detected language.

  # <detectLanguage>
  def DetectLanguage(inputDocuments)
    result = @textAnalyticsClient.detect_language(
        language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== LANGUAGE DETECTION ====='
      result.documents.each do |document|
        puts "Document ID: #{document.id} , Language: #{document.detected_languages[0].name}"
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </detectLanguage>

Outside of the client class, create a new function called DetectLanguageExample() that takes the TextAnalyticsClient object created earlier. Create a list of LanguageInput objects containing the documents you want to analyze. Each object contains an id and a text attribute. The text attribute stores the text to be analyzed, and the id can be any value. Then call the client's DetectLanguage() function.

# <detectLanguageCall>
def DetectLanguageExample(client)
  # The documents to be analyzed.
  language_input_1 = LanguageInput.new
  language_input_1.id = '1'
  language_input_1.text = 'This is a document written in English.'

  language_input_2 = LanguageInput.new
  language_input_2.id = '2'
  language_input_2.text = 'Este es un document escrito en Español..'

  language_input_3 = LanguageInput.new
  language_input_3.id = '3'
  language_input_3.text = '这是一个用中文写的文件'

  language_batch_input = LanguageBatchInput.new
  language_batch_input.documents = [language_input_1, language_input_2, language_input_3]

  client.DetectLanguage(language_batch_input)
end
# </detectLanguageCall>

Call the DetectLanguageExample() function.

DetectLanguageExample(client)

Output

===== LANGUAGE DETECTION =====
Document ID: 1 , Language: English
Document ID: 2 , Language: Spanish
Document ID: 3 , Language: Chinese_Simplified
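
detect_language can return more than one candidate per document; the example above prints only the first (highest-confidence) entry's name. In version 2.1 each candidate also carries an ISO 639-1 code and a confidence score, so you can print a richer line. The sketch below uses a plain Struct in place of the SDK's DetectedLanguage model to show the shape of the data; the attribute names mirror the v2.1 response fields:

```ruby
# Stand-in for the SDK's DetectedLanguage model, for illustration only.
DetectedLanguage = Struct.new(:name, :iso6391_name, :score)

# A document's detected_languages list, as returned for the English sample above.
detected_languages = [DetectedLanguage.new('English', 'en', 1.0)]

top = detected_languages[0]
puts "Language: #{top.name} (#{top.iso6391_name}), confidence: #{top.score}"
```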

Entity recognition

In the client class, create a function called RecognizeEntities() that takes a list of input documents that will be created later. Call the client's entities() function and get the result. Then iterate through the results, and print each document's ID and the recognized entities.

  # <recognizeEntities>
  def RecognizeEntities(inputDocuments)
    result = @textAnalyticsClient.entities(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== ENTITY RECOGNITION ====='
      result.documents.each do |document|
        puts "Document ID: #{document.id}"
        document.entities.each do |entity|
          puts "\tName: #{entity.name},\tType: #{entity.type.nil? ? "N/A" : entity.type},\tSub-Type: #{entity.sub_type.nil? ? "N/A" : entity.sub_type}"
          entity.matches.each do |match|
            puts "\tOffset: #{match.offset},\tLength: #{match.length},\tScore: #{match.entity_type_score}"
          end
          puts
        end
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </recognizeEntities>

Outside of the client class, create a new function called RecognizeEntitiesExample() that takes the TextAnalyticsClient object created earlier. Create a list of MultiLanguageInput objects containing the documents you want to analyze. Each object contains an id, a language, and a text attribute. The text attribute stores the text to be analyzed, language is the language of the text, and the id can be any value. Then call the client's RecognizeEntities() function.

# <includeStatement>
require 'azure_cognitiveservices_textanalytics'
include Azure::CognitiveServices::TextAnalytics::V2_1::Models
# </includeStatement>

class TextAnalyticsClient
  @textAnalyticsClient
  # <initialize> 
  def initialize(endpoint, key)
    credentials =
        MsRestAzure::CognitiveServicesCredentials.new(key)

    endpoint = String.new(endpoint)

    @textAnalyticsClient = Azure::TextAnalytics::Profiles::Latest::Client.new({
        credentials: credentials
    })
    @textAnalyticsClient.endpoint = endpoint
  end
  # </initialize>
  # <analyzeSentiment>
  def AnalyzeSentiment(inputDocuments)
    result = @textAnalyticsClient.sentiment(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== SENTIMENT ANALYSIS ====='
      result.documents.each do |document|
        puts "Document Id: #{document.id}: Sentiment Score: #{document.score}"
      end
    end
    puts ''
  end
  # </analyzeSentiment>
  # <detectLanguage>
  def DetectLanguage(inputDocuments)
    result = @textAnalyticsClient.detect_language(
        language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== LANGUAGE DETECTION ====='
      result.documents.each do |document|
        puts "Document ID: #{document.id} , Language: #{document.detected_languages[0].name}"
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </detectLanguage>
  # <recognizeEntities>
  def RecognizeEntities(inputDocuments)
    result = @textAnalyticsClient.entities(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== ENTITY RECOGNITION ====='
      result.documents.each do |document|
        puts "Document ID: #{document.id}"
          document.entities.each do |entity|
            puts "\tName: #{entity.name}, \tType: #{entity.type == nil ? "N/A": entity.type},\tSub-Type: #{entity.sub_type == nil ? "N/A": entity.sub_type}"
            entity.matches.each do |match|
              puts "\tOffset: #{match.offset}, \tLength: #{match.length},\tScore: #{match.entity_type_score}"
            end
            puts
          end
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </recognizeEntities>
  
  # <extractKeyPhrases>
  def ExtractKeyPhrases(inputDocuments)
    result = @textAnalyticsClient.key_phrases(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== KEY PHRASE EXTRACTION ====='
      result.documents.each do |document|
        puts "Document Id: #{document.id}"
        puts '  Key Phrases'
        document.key_phrases.each do |key_phrase|
          puts "    #{key_phrase}"
        end
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </extractKeyPhrases>
end

# <vars>
key_var = "TEXT_ANALYTICS_SUBSCRIPTION_KEY"
if (!ENV[key_var])
    raise "Please set/export the following environment variable: " + key_var
else
    subscription_key = ENV[key_var]
end

endpoint_var = "TEXT_ANALYTICS_ENDPOINT"
if (!ENV[endpoint_var])
    raise "Please set/export the following environment variable: " + endpoint_var
else
    endpoint = ENV[endpoint_var]
end
# </vars>

# <clientCreation>
client = TextAnalyticsClient.new(endpoint, subscription_key)
# </clientCreation>

# <sentimentCall>
def SentimentAnalysisExample(client)
  # The documents to be analyzed. Add the language of the document. The ID can be any value.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'en'
  input_1.text = 'I had the best day of my life.'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'en'
  input_2.text = 'This was a waste of my time. The speaker put me to sleep.'

  input_3 = MultiLanguageInput.new
  input_3.id = '3'
  input_3.language = 'es'
  input_3.text = 'No tengo dinero ni nada que dar...'

  input_4 = MultiLanguageInput.new
  input_4.id = '4'
  input_4.language = 'it'
  input_4.text = "L'hotel veneziano era meraviglioso. È un bellissimo pezzo di architettura."

  inputDocuments =  MultiLanguageBatchInput.new
  inputDocuments.documents = [input_1, input_2, input_3, input_4]

  client.AnalyzeSentiment(inputDocuments)
end
# </sentimentCall>
# <detectLanguageCall>
def DetectLanguageExample(client)
 # The documents to be analyzed.
 language_input_1 = LanguageInput.new
 language_input_1.id = '1'
 language_input_1.text = 'This is a document written in English.'

 language_input_2 = LanguageInput.new
 language_input_2.id = '2'
 language_input_2.text = 'Este es un document escrito en Español..'

 language_input_3 = LanguageInput.new
 language_input_3.id = '3'
 language_input_3.text = '这是一个用中文写的文件'

 language_batch_input = LanguageBatchInput.new
 language_batch_input.documents = [language_input_1, language_input_2, language_input_3]

 client.DetectLanguage(language_batch_input)
end
# </detectLanguageCall>
# <recognizeEntitiesCall>
def RecognizeEntitiesExample(client)
  # The documents to be analyzed.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'en'
  input_1.text = 'Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800.'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'es'
  input_2.text = 'La sede principal de Microsoft se encuentra en la ciudad de Redmond, a 21 kilómetros de Seattle.'

  multi_language_batch_input =  MultiLanguageBatchInput.new
  multi_language_batch_input.documents = [input_1, input_2]

  client.RecognizeEntities(multi_language_batch_input)
end
# </recognizeEntitiesCall>

# <keyPhrasesCall>
def KeyPhraseExtractionExample(client)
  # The documents to be analyzed.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'ja'
  input_1.text = '猫は幸せ'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'de'
  input_2.text = 'Fahrt nach Stuttgart und dann zum Hotel zu Fu.'

  input_3 = MultiLanguageInput.new
  input_3.id = '3'
  input_3.language = 'en'
  input_3.text = 'My cat is stiff as a rock.'

  input_4 = MultiLanguageInput.new
  input_4.id = '4'
  input_4.language = 'es'
  input_4.text = 'A mi me encanta el fútbol!'

  input_documents =  MultiLanguageBatchInput.new
  input_documents.documents = [input_1, input_2, input_3, input_4]

  client.ExtractKeyPhrases(input_documents)
end
# </keyPhrasesCall>
DetectLanguageExample(client)
SentimentAnalysisExample(client)
RecognizeEntitiesExample(client)
KeyPhraseExtractionExample(client)
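The two environment-variable guards in the `# <vars>` section above repeat the same pattern; they can be condensed into a small helper. This is a sketch — the name `fetch_env` is illustrative and not part of the sample:

```ruby
# Fetch a required environment variable, or fail with the same
# message the sample raises when a variable is missing.
def fetch_env(name)
  ENV[name] || raise("Please set/export the following environment variable: " + name)
end

# With this helper, the variable setup shrinks to two lines:
# subscription_key = fetch_env("TEXT_ANALYTICS_SUBSCRIPTION_KEY")
# endpoint         = fetch_env("TEXT_ANALYTICS_ENDPOINT")
```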

Call the RecognizeEntitiesExample() function.

RecognizeEntitiesExample(client)

Output

===== ENTITY RECOGNITION =====
Document ID: 1
        Name: Microsoft,        Type: Organization,     Sub-Type: N/A
        Offset: 0, Length: 9,   Score: 1.0

        Name: Bill Gates,       Type: Person,   Sub-Type: N/A
        Offset: 25, Length: 10, Score: 0.999847412109375

        Name: Paul Allen,       Type: Person,   Sub-Type: N/A
        Offset: 40, Length: 10, Score: 0.9988409876823425

        Name: April 4,  Type: Other,    Sub-Type: N/A
        Offset: 54, Length: 7,  Score: 0.8

        Name: April 4, 1975,    Type: DateTime, Sub-Type: Date
        Offset: 54, Length: 13, Score: 0.8

        Name: BASIC,    Type: Other,    Sub-Type: N/A
        Offset: 89, Length: 5,  Score: 0.8

        Name: Altair 8800,      Type: Other,    Sub-Type: N/A
        Offset: 116, Length: 11,        Score: 0.8

Document ID: 2
        Name: Microsoft,        Type: Organization,     Sub-Type: N/A
        Offset: 21, Length: 9,  Score: 0.999755859375

        Name: Redmond (Washington),     Type: Location, Sub-Type: N/A
        Offset: 60, Length: 7,  Score: 0.9911284446716309

        Name: 21 kilómetros,    Type: Quantity, Sub-Type: Dimension
        Offset: 71, Length: 13, Score: 0.8

        Name: Seattle,  Type: Location, Sub-Type: N/A
        Offset: 88, Length: 7,  Score: 0.9998779296875

Key phrase extraction

In the client object, create a function called ExtractKeyPhrases() that takes a list of input documents, which will be created later. Call the client's key_phrases() function and get the result. Then iterate through the results, and print each document's ID and its extracted key phrases.

  # <extractKeyPhrases>
  def ExtractKeyPhrases(inputDocuments)
    result = @textAnalyticsClient.key_phrases(
        multi_language_batch_input: inputDocuments
    )

    if (!result.nil? && !result.documents.nil? && result.documents.length > 0)
      puts '===== KEY PHRASE EXTRACTION ====='
      result.documents.each do |document|
        puts "Document Id: #{document.id}"
        puts '  Key Phrases'
        document.key_phrases.each do |key_phrase|
          puts "    #{key_phrase}"
        end
      end
    else
      puts 'No results data..'
    end
    puts ''
  end
  # </extractKeyPhrases>

Outside of the client function, create a new function called KeyPhraseExtractionExample() that takes the TextAnalyticsClient object created earlier. Create a list of MultiLanguageInput objects containing the documents you want to analyze. Each object contains id, language, and text attributes. The text attribute stores the text to be analyzed, language is the language of the document, and the id can be any value. Then call the client's ExtractKeyPhrases() function.

# <keyPhrasesCall>
def KeyPhraseExtractionExample(client)
  # The documents to be analyzed.
  input_1 = MultiLanguageInput.new
  input_1.id = '1'
  input_1.language = 'ja'
  input_1.text = '猫は幸せ'

  input_2 = MultiLanguageInput.new
  input_2.id = '2'
  input_2.language = 'de'
  input_2.text = 'Fahrt nach Stuttgart und dann zum Hotel zu Fu.'

  input_3 = MultiLanguageInput.new
  input_3.id = '3'
  input_3.language = 'en'
  input_3.text = 'My cat is stiff as a rock.'

  input_4 = MultiLanguageInput.new
  input_4.id = '4'
  input_4.language = 'es'
  input_4.text = 'A mi me encanta el fútbol!'

  input_documents = MultiLanguageBatchInput.new
  input_documents.documents = [input_1, input_2, input_3, input_4]

  client.ExtractKeyPhrases(input_documents)
end
# </keyPhrasesCall>

Call the KeyPhraseExtractionExample() function.

KeyPhraseExtractionExample(client)

Output

===== KEY PHRASE EXTRACTION =====
Document Id: 1
  Key Phrases
    幸せ
Document Id: 2
  Key Phrases
    Stuttgart
    Hotel
    Fahrt
    Fu
Document Id: 3
  Key Phrases
    cat
    rock
Document Id: 4
  Key Phrases
    fútbol
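Each client method above repeats the same three-part guard, `!result.nil? && !result.documents.nil? && result.documents.length > 0`. A small predicate expresses it once; this is a sketch, and `results_present?` is an illustrative name rather than part of the client library:

```ruby
require 'ostruct'

# True when a batch result exists and contains at least one document.
# Works with any object that responds to #documents.
def results_present?(result)
  !result.nil? && !result.documents.nil? && !result.documents.empty?
end

# Quick check with stand-in result objects:
results_present?(nil)                               # => false
results_present?(OpenStruct.new(documents: []))     # => false
results_present?(OpenStruct.new(documents: [:doc])) # => true
```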