Azure AI services
Azure AI services help developers and organizations rapidly create intelligent, cutting-edge, market-ready, and responsible applications with out-of-the-box, prebuilt, and customizable APIs and models.
SynapseML allows you to build powerful and highly scalable predictive and analytical models from various Spark data sources. Synapse Spark provides built-in SynapseML libraries, including synapse.ml.services.
Important
Starting on September 20th, 2023, you won't be able to create new Anomaly Detector resources. The Anomaly Detector service is being retired on October 1st, 2026.
Prerequisites on Azure Synapse Analytics
The tutorial, Pre-requisites for using Azure AI services in Azure Synapse, walks you through a couple of steps you need to perform before using Azure AI services in Synapse Analytics.
Azure AI services is a suite of APIs, SDKs, and services that developers can use to add intelligent features to their applications. AI services empower developers even when they don't have direct AI or data science skills or knowledge. Azure AI services help developers create applications that can see, hear, speak, understand, and even begin to reason. The catalog of services within Azure AI services can be categorized into five main pillars: Vision, Speech, Language, Web search, and Decision.
Usage
Vision
- Describe: provides description of an image in human readable language (Scala, Python)
- Analyze (color, image type, face, adult/racy content): analyzes visual features of an image (Scala, Python)
- OCR: reads text from an image (Scala, Python)
- Recognize Text: reads text from an image (Scala, Python)
- Thumbnail: generates a thumbnail of user-specified size from the image (Scala, Python)
- Recognize domain-specific content: recognizes domain-specific content (celebrity, landmark) (Scala, Python)
- Tag: identifies list of words that are relevant to the input image (Scala, Python)
Speech
- Speech-to-text: transcribes audio streams (Scala, Python)
- Conversation Transcription: transcribes audio streams into live transcripts with identified speakers. (Scala, Python)
- Text to Speech: Converts text to realistic audio (Scala, Python)
Language
- Language detection: detects language of the input text (Scala, Python)
- Key phrase extraction: identifies the key talking points in the input text (Scala, Python)
- Named entity recognition: identifies known entities and general named entities in the input text (Scala, Python)
- Sentiment analysis: returns a score between 0 and 1 indicating the sentiment in the input text (Scala, Python)
- Healthcare Entity Extraction: Extracts medical entities and relationships from text. (Scala, Python)
Translation
- Translate: Translates text. (Scala, Python)
- Transliterate: Converts text in one language from one script to another script. (Scala, Python)
- Detect: Identifies the language of a piece of text. (Scala, Python)
- BreakSentence: Identifies the positioning of sentence boundaries in a piece of text. (Scala, Python)
- Dictionary Lookup: Provides alternative translations for a word and a small number of idiomatic phrases. (Scala, Python)
- Dictionary Examples: Provides examples that show how terms in the dictionary are used in context. (Scala, Python)
- Document Translation: Translates documents across all supported languages and dialects while preserving document structure and data format. (Scala, Python)
Document Intelligence
- Analyze Layout: Extract text and layout information from a given document. (Scala, Python)
- Analyze Receipts: Detects and extracts data from receipts using optical character recognition (OCR) and our receipt model, enabling you to easily extract structured data from receipts such as merchant name, merchant phone number, transaction date, transaction total, and more. (Scala, Python)
- Analyze Business Cards: Detects and extracts data from business cards using optical character recognition (OCR) and our business card model, enabling you to easily extract structured data from business cards such as contact names, company names, phone numbers, emails, and more. (Scala, Python)
- Analyze Invoices: Detects and extracts data from invoices using optical character recognition (OCR) and our invoice understanding deep learning models, enabling you to easily extract structured data from invoices such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. (Scala, Python)
- Analyze ID Documents: Detects and extracts data from identification documents using optical character recognition (OCR) and our ID document model, enabling you to easily extract structured data from ID documents such as first name, last name, date of birth, document number, and more. (Scala, Python)
- Analyze Custom Form: Extracts information from forms (PDFs and images) into structured data based on a model created from a set of representative training forms. (Scala, Python)
- Get Custom Model: Get detailed information about a custom model. (Scala, Python)
- List Custom Models: Get information about all custom models. (Scala, Python)
Decision
- Anomaly status of latest point: generates a model using preceding points and determines whether the latest point is anomalous (Scala, Python)
- Find anomalies: generates a model using an entire series and finds anomalies in the series (Scala, Python)
Prepare your system
To begin, import required libraries and initialize your Spark session.
from pyspark.sql.functions import udf, col, lit
from pyspark.ml import PipelineModel
from requests import Request
from synapse.ml.io.http import HTTPTransformer, http_udf
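Synapse notebooks provide a ready-made spark session. If you run this code outside a notebook, a minimal sketch to create the session yourself (assuming a local or cluster Spark installation):
from pyspark.sql import SparkSession

# Create or reuse a Spark session when the notebook runtime doesn't provide one
spark = SparkSession.builder.getOrCreate()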
Import Azure AI services libraries and replace the keys and locations in the following code snippet with your Azure AI services key and location.
from synapse.ml.services import *
from synapse.ml.core.platform import *
# A general AI services key for AI Language, Computer Vision and Document Intelligence (or use separate keys that belong to each service)
service_key = find_secret(
secret_name="ai-services-api-key", keyvault="mmlspark-build-keys"
) # Replace the call to find_secret with your key as a python string. e.g. service_key="27snaiw..."
service_loc = "chinanorth3"
# An Anomaly Detector subscription key
anomaly_key = find_secret(
secret_name="anomaly-api-key", keyvault="mmlspark-build-keys"
) # Replace the call to find_secret with your key as a python string. If you don't have an anomaly detection resource created before Sep 20th 2023, you won't be able to create one.
anomaly_loc = "chinanorth2"
# A Translator subscription key
translator_key = find_secret(
secret_name="translator-key", keyvault="mmlspark-build-keys"
) # Replace the call to find_secret with your key as a python string.
translator_loc = "chinaeast2"
Perform sentiment analysis on text
The AI Language service provides several algorithms for extracting intelligent insights from text. For example, we can find the sentiment of given input text. The service returns a score between 0.0 and 1.0, where low scores indicate negative sentiment and high scores indicate positive sentiment. This sample uses three simple sentences and returns the sentiment for each.
# Create a dataframe that's tied to its column names
df = spark.createDataFrame(
[
("I am so happy today, its sunny!", "en-US"),
("I am frustrated by this rush hour traffic", "en-US"),
("The AI services on spark aint bad", "en-US"),
],
["text", "language"],
)
# Run the Text Analytics service with options
sentiment = (
AnalyzeText()
.setKind("SentimentAnalysis")
.setTextCol("text")
.setLocation(service_loc)
.setSubscriptionKey(service_key)
.setEndpoint("")
.setOutputCol("sentiment")
.setErrorCol("error")
.setLanguageCol("language")
)
# Show the results of your text query in a table format
display(
sentiment.transform(df).select(
"text", col("sentiment.documents.sentiment").alias("sentiment")
)
)
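The same AnalyzeText transformer handles the other Language capabilities listed earlier. As a minimal sketch, reusing the dataframe and credentials above, key phrase extraction only requires a different kind:
# Reuse the sentiment dataframe; switch the analysis kind to key phrase extraction
key_phrases = (
    AnalyzeText()
    .setKind("KeyPhraseExtraction")
    .setTextCol("text")
    .setLocation(service_loc)
    .setSubscriptionKey(service_key)
    .setOutputCol("keyPhrases")
    .setErrorCol("error")
    .setLanguageCol("language")
)

# Key phrases are nested under the documents field of the response
display(
    key_phrases.transform(df).select(
        "text", col("keyPhrases.documents.keyPhrases").alias("keyPhrases")
    )
)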
Perform text analytics for health data
The Text Analytics for Health Service extracts and labels relevant medical information from unstructured text such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
The following code sample analyzes and transforms text from doctor's notes into structured data.
df = spark.createDataFrame(
[
("20mg of ibuprofen twice a day",),
("1tsp of Tylenol every 4 hours",),
("6-drops of Vitamin B-12 every evening",),
],
["text"],
)
healthcare = (
AnalyzeHealthText()
.setSubscriptionKey(service_key)
.setLocation(service_loc)
.setEndpoint("")
.setLanguage("en")
.setOutputCol("response")
)
display(healthcare.transform(df))
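The response is deeply nested. As a hedged sketch, you can select just the recognized entities; the documents.entities path is an assumption about the response schema, so inspect the output of printSchema if it doesn't match your service version:
# Sketch: pull only the extracted medical entities out of the nested response.
# The "response.documents.entities" path is an assumption; run
# healthcare.transform(df).printSchema() to confirm the actual schema.
display(
    healthcare.transform(df).select(
        "text", col("response.documents.entities").alias("entities")
    )
)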
Translate text into a different language
Translator is a cloud-based machine translation service and is part of the Azure AI services family of AI APIs used to build intelligent apps. Translator is easy to integrate into your applications, websites, tools, and solutions. It allows you to add multi-language user experiences in 90 languages and dialects and can be used to translate text without hosting your own algorithm.
The following code sample does a simple text translation by providing the sentences you want to translate and target languages you want to translate them to.
from pyspark.sql.functions import col, flatten
# Create a dataframe including sentences you want to translate
df = spark.createDataFrame(
[(["Hello, what is your name?", "Bye"],)],
[
"text",
],
)
# Run the Translator service with options
translate = (
Translate()
.setSubscriptionKey(translator_key)
.setLocation(translator_loc)
.setEndpoint("")
.setTextCol("text")
.setToLanguage(["zh-Hans"])
.setOutputCol("translation")
)
# Show the results of the translation.
display(
translate.transform(df)
.withColumn("translation", flatten(col("translation.translations")))
.withColumn("translation", col("translation.text"))
.select("translation")
)
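Translator also supports transliteration, converting text in one language from one script to another. A minimal sketch using the Transliterate transformer to convert Japanese text from Japanese script to Latin script:
# Create a dataframe with Japanese text to transliterate
transliterateDf = spark.createDataFrame([(["こんにちは", "さようなら"],)], ["text"])

# Convert from Japanese script (Jpan) to Latin script (Latn)
transliterate = (
    Transliterate()
    .setSubscriptionKey(translator_key)
    .setLocation(translator_loc)
    .setLanguage("ja")
    .setFromScript("Jpan")
    .setToScript("Latn")
    .setTextCol("text")
    .setOutputCol("result")
)

# Each result holds the transliterated text and the script it's written in
display(
    transliterate.transform(transliterateDf)
    .withColumn("text", col("result.text"))
    .withColumn("script", col("result.script"))
    .select("text", "script")
)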
Extract information from a document into structured data
Azure AI Document Intelligence is a part of Azure Applied AI Services that lets you build automated data processing software using machine learning technology. With Azure AI Document Intelligence, you can identify and extract text, key/value pairs, selection marks, tables, and structure from your documents. The service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more.
The following code sample analyzes a business card image and extracts its information into structured data.
from pyspark.sql.functions import col, explode
# Create a dataframe containing the source files
imageDf = spark.createDataFrame(
[
(
"https://mmlspark.blob.core.windows.net/datasets/FormRecognizer/business_card.jpg",
)
],
[
"source",
],
)
# Run the Document Intelligence service
analyzeBusinessCards = (
AnalyzeBusinessCards()
.setSubscriptionKey(service_key)
.setLocation(service_loc)
.setEndpoint("")
.setImageUrlCol("source")
.setOutputCol("businessCards")
)
# Show the results of recognition.
display(
analyzeBusinessCards.transform(imageDf)
.withColumn(
"documents", explode(col("businessCards.analyzeResult.documentResults.fields"))
)
.select("source", "documents")
)
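Document Intelligence can also extract raw text and layout from the same image. A hedged sketch using the AnalyzeLayout transformer on the dataframe above; the analyzeResult.readResults.lines path is an assumption about the response schema:
# Sketch: extract text lines and layout information from the same source image
analyzeLayout = (
    AnalyzeLayout()
    .setSubscriptionKey(service_key)
    .setLocation(service_loc)
    .setImageUrlCol("source")
    .setOutputCol("layout")
)

# Flatten the per-page read results down to the recognized lines of text
display(
    analyzeLayout.transform(imageDf)
    .withColumn("lines", flatten(col("layout.analyzeResult.readResults.lines")))
    .select("source", col("lines.text").alias("lines"))
)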
Computer Vision sample
Azure AI Vision analyzes images to identify structure such as faces, objects, and natural-language descriptions.
The following code sample analyzes images and labels them with tags. Tags are one-word descriptions of things in the image, such as recognizable objects, people, scenery, and actions.
# Create a dataframe with the image URLs
base_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/"
df = spark.createDataFrame(
[
(base_url + "objects.jpg",),
(base_url + "dog.jpg",),
(base_url + "house.jpg",),
],
[
"image",
],
)
# Run the Computer Vision service. Analyze Image extracts information from/about the images.
analysis = (
AnalyzeImage()
.setLocation(service_loc)
.setSubscriptionKey(service_key)
.setEndpoint("")
.setVisualFeatures(
["Categories", "Color", "Description", "Faces", "Objects", "Tags"]
)
.setOutputCol("analysis_results")
.setImageUrlCol("image")
.setErrorCol("error")
)
# Show the results of what you wanted to pull out of the images.
display(analysis.transform(df).select("image", "analysis_results.description.tags"))
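To read printed text rather than tags, the OCR capability from the Usage list works on the same dataframe. A minimal sketch:
# Sketch: run OCR over the same images to extract any printed text
ocr = (
    OCR()
    .setLocation(service_loc)
    .setSubscriptionKey(service_key)
    .setImageUrlCol("image")
    .setDetectOrientation(True)
    .setOutputCol("ocr")
)

# Inspect the raw OCR response for each image
display(ocr.transform(df).select("image", "ocr"))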
Transform speech to text
The Speech-to-text service converts streams or files of spoken audio to text. The following code sample transcribes one audio file to text.
# Create a dataframe with our audio URLs, tied to the column called "url"
df = spark.createDataFrame(
[("https://mmlspark.blob.core.windows.net/datasets/Speech/audio2.wav",)], ["url"]
)
# Run the Speech-to-text service to transcribe the audio into text
speech_to_text = (
SpeechToTextSDK()
.setSubscriptionKey(service_key)
.setLocation(service_loc)
.setEndpointId("")
.setOutputCol("text")
.setAudioDataCol("url")
.setLanguage("en-US")
.setProfanity("Masked")
)
# Show the results of the transcription
display(speech_to_text.transform(df).select("url", "text.DisplayText"))
Transform text to speech
Text to speech is a service that allows you to build apps and services that speak naturally, choosing from more than 270 neural voices across 119 languages and variants.
The following code sample transforms text into an audio file that contains the content of the text.
from synapse.ml.services.speech import TextToSpeech
fs = ""
if running_on_databricks():
fs = "dbfs:"
elif running_on_synapse_internal():
fs = "Files"
# Create a dataframe with text and an output file location
df = spark.createDataFrame(
[
(
"Reading out loud is fun! Check out aka.ms/spark for more information",
fs + "/output.mp3",
)
],
["text", "output_file"],
)
tts = (
TextToSpeech()
.setSubscriptionKey(service_key)
.setTextCol("text")
.setLocation(service_loc)
    .setUrl(f"https://{service_loc}.tts.speech.azure.cn/cognitiveservices/v1")
.setVoiceName("en-US-JennyNeural")
.setOutputFileCol("output_file")
)
# Check to make sure there were no errors during audio creation
display(tts.transform(df))
Detect anomalies in time series data
If you didn't create an anomaly detection resource before September 20th, 2023, you can't create one now. You may want to skip this part.
Anomaly Detector is great for detecting irregularities in your time series data. The following code sample uses the Anomaly Detector service to find anomalies in a time series.
# Create a dataframe with the point data that Anomaly Detector requires
df = spark.createDataFrame(
[
("1972-01-01T00:00:00Z", 826.0),
("1972-02-01T00:00:00Z", 799.0),
("1972-03-01T00:00:00Z", 890.0),
("1972-04-01T00:00:00Z", 900.0),
("1972-05-01T00:00:00Z", 766.0),
("1972-06-01T00:00:00Z", 805.0),
("1972-07-01T00:00:00Z", 821.0),
("1972-08-01T00:00:00Z", 20000.0),
("1972-09-01T00:00:00Z", 883.0),
("1972-10-01T00:00:00Z", 898.0),
("1972-11-01T00:00:00Z", 957.0),
("1972-12-01T00:00:00Z", 924.0),
("1973-01-01T00:00:00Z", 881.0),
("1973-02-01T00:00:00Z", 837.0),
("1973-03-01T00:00:00Z", 9000.0),
],
["timestamp", "value"],
).withColumn("group", lit("series1"))
# Run the Anomaly Detector service to look for irregular data
anomaly_detector = (
SimpleDetectAnomalies()
.setSubscriptionKey(anomaly_key)
.setLocation(anomaly_loc)
.setEndpoint("")
.setTimestampCol("timestamp")
.setValueCol("value")
.setOutputCol("anomalies")
.setGroupbyCol("group")
.setGranularity("monthly")
)
# Show the full results of the analysis with the anomalies marked as "True"
display(
    anomaly_detector.transform(df).select("timestamp", "value", "anomalies.isAnomaly")
)
Get information from arbitrary web APIs
With HTTP on Spark, any web service can be used in your big data pipeline. In this example, we use the World Bank API to get information about various countries/regions around the world.
# Use any requests from the python requests library
def world_bank_request(country):
return Request(
"GET", "http://api.worldbank.org/v2/country/{}?format=json".format(country)
)
# Create a dataframe that specifies which countries/regions we want data on
df = spark.createDataFrame([("br",), ("usa",)], ["country"]).withColumn(
"request", http_udf(world_bank_request)(col("country"))
)
# Much faster for big data because of the concurrency :)
client = (
HTTPTransformer().setConcurrency(3).setInputCol("request").setOutputCol("response")
)
# Get the body of the response
def get_response_body(resp):
return resp.entity.content.decode()
# Show the details of the country data returned
display(
client.transform(df).select(
"country", udf(get_response_body)(col("response")).alias("response")
)
)