After the deployment is added successfully, you can query it for intent and entity predictions on your utterances, based on the model assigned to the deployment. You can query the deployment programmatically through the prediction API or through the client libraries (Azure SDK).
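For example, a minimal sketch of a programmatic query with the Azure SDK for Python might look like the following. This assumes the azure-ai-language-conversations package; the endpoint, key, project name, and deployment name are placeholders you replace with your own values.

```python
# Sketch: query a deployed conversational language understanding model with the
# Azure SDK for Python (azure-ai-language-conversations). Placeholder values below
# must be replaced with your own resource and project details.
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

client = ConversationAnalysisClient(
    "https://<your-custom-subdomain>.cognitiveservices.azure.cn",
    AzureKeyCredential("<your-resource-key>"),
)

# The task payload mirrors the REST request body shown later in this article.
result = client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "1",
                "text": "Read Matt's email",
            }
        },
        "parameters": {
            "projectName": "<your-project-name>",
            "deploymentName": "<your-deployment-name>",
            "stringIndexType": "TextElement_V8",
        },
    }
)

print(result["result"]["prediction"]["topIntent"])
```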
You can use Language Studio to submit an utterance, get predictions, and visualize the results.
To test your deployed models from within Language Studio:
1. Select Testing deployments from the left side menu.
2. For multilingual projects, from the Select text language dropdown, select the language of the utterance you're testing.
3. From the Deployment name dropdown, select the deployment name corresponding to the model that you want to test. You can only test models that are assigned to deployments.
4. In the text box, enter an utterance to test. For example, if you created an application for email-related utterances, you could enter Delete this email.
5. Towards the top of the page, select Run the test.
After you run the test, you should see the response of the model in the results. You can view the results in the entities cards view or view them in JSON format.
First, you'll need to get your resource key and endpoint:
Go to your resource overview page in the Azure portal. From the menu on the left side, select Keys and Endpoint. You'll use the endpoint and key for the API requests.
Create a POST request using the following URL, headers, and JSON body to start testing a conversational language understanding model.
{ENDPOINT}/language/:analyze-conversations?api-version={API-VERSION}
Placeholder | Value | Example |
---|---|---|
{ENDPOINT} | The endpoint for authenticating your API request. | https://<your-custom-subdomain>.cognitiveservices.azure.cn |
{API-VERSION} | The version of the API you're calling. | 2023-04-01 |
Use the following header to authenticate your request.
Key | Value |
---|---|
Ocp-Apim-Subscription-Key | The key to your resource. Used for authenticating your API requests. |
{
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "1",
            "text": "Text 1"
        }
    },
    "parameters": {
        "projectName": "{PROJECT-NAME}",
        "deploymentName": "{DEPLOYMENT-NAME}",
        "stringIndexType": "TextElement_V8"
    }
}
Key | Placeholder | Value | Example |
---|---|---|---|
participantId | {PARTICIPANT-ID} | An identifier for the participant who sent the utterance. | "1" |
id | {ITEM-ID} | An identifier for the conversation item. | "1" |
text | {TEST-UTTERANCE} | The utterance that you want to predict the intent of and extract entities from. | "Read Matt's email" |
projectName | {PROJECT-NAME} | The name of your project. This value is case-sensitive. | myProject |
deploymentName | {DEPLOYMENT-NAME} | The name of your deployment. This value is case-sensitive. | staging |
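Putting the URL, header, and body together, a request sketch in Python using the requests library could look like the following. The endpoint, key, project name, and deployment name are placeholders for your own values.

```python
# Sketch: send the prediction request described above with Python's requests library.
# Replace the placeholder values with your own endpoint, key, project, and deployment.
import requests

endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.cn"
api_version = "2023-04-01"

url = f"{endpoint}/language/:analyze-conversations?api-version={api_version}"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-resource-key>",
    "Content-Type": "application/json",
}
body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "1",
            "text": "Read Matt's email",
        }
    },
    "parameters": {
        "projectName": "<your-project-name>",
        "deploymentName": "<your-deployment-name>",
        "stringIndexType": "TextElement_V8",
    },
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())
```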
Once you send the request, you'll get the following response for the prediction:
{
    "kind": "ConversationResult",
    "result": {
        "query": "Text 1",
        "prediction": {
            "topIntent": "intent1",
            "projectKind": "Conversation",
            "intents": [
                {
                    "category": "intent1",
                    "confidenceScore": 1
                },
                {
                    "category": "intent2",
                    "confidenceScore": 0
                },
                {
                    "category": "intent3",
                    "confidenceScore": 0
                }
            ],
            "entities": [
                {
                    "category": "entity1",
                    "text": "text1",
                    "offset": 29,
                    "length": 12,
                    "confidenceScore": 1
                }
            ]
        }
    }
}
Key | Sample Value | Description |
---|---|---|
query | "Read Matt's email" | The text you submitted for prediction. |
topIntent | "Read" | The predicted intent with the highest confidence score. |
intents | [] | A list of all the intents that were predicted for the query text, each with a confidence score. |
entities | [] | An array of the entities that were extracted from the query text. |
In a conversations project, you get predictions for both the intents and the entities that are present in your project.
- The intents and entities include a confidence score between 0.0 and 1.0 that indicates how confident the model is about predicting a certain element in your project.
- The top-scoring intent is contained in its own parameter (topIntent).
- Only predicted entities show up in the response.
- Entities indicate:
  - The text of the entity that was extracted.
  - Its start location, denoted by an offset value.
  - The length of the entity text, denoted by a length value. The offset and length can be used to slice the entity text out of the query, as shown in the sketch after this list.
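As an illustration, here's a small sketch of reading those fields in Python. The sample response below is hypothetical (the entity category and values are made up for the "Read Matt's email" utterance); in practice you'd use the parsed JSON returned by your own request, for example response.json() from the requests sketch earlier in this article.

```python
# Sketch: extract the top intent and entity spans from a prediction response.
# `response_json` below is a hypothetical sample shaped like the response above.
response_json = {
    "kind": "ConversationResult",
    "result": {
        "query": "Read Matt's email",
        "prediction": {
            "topIntent": "Read",
            "projectKind": "Conversation",
            "intents": [{"category": "Read", "confidenceScore": 1}],
            "entities": [
                # Hypothetical entity: "Matt" starts at offset 5 and is 4 characters long.
                {"category": "ContactName", "text": "Matt", "offset": 5, "length": 4, "confidenceScore": 1}
            ],
        },
    },
}

prediction = response_json["result"]["prediction"]
print("Top intent:", prediction["topIntent"])

query = response_json["result"]["query"]
for entity in prediction["entities"]:
    # offset is the entity's start position in the query; length is its size,
    # so slicing the query with them recovers the extracted entity text.
    span = query[entity["offset"] : entity["offset"] + entity["length"]]
    print(f"{entity['category']}: {span} (confidence {entity['confidenceScore']})")
```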