Migrate from retired intent recognition

Intent recognition in Azure Speech was retired on September 30, 2025. Applications can no longer use intent recognition via Speech. However, you can still perform intent recognition using Azure Language Service.

This change doesn't affect other Speech capabilities such as speech to text (including speaker diarization), text to speech, and speech translation.

Speech previously exposed the IntentRecognizer object family in the Speech SDK. These APIs depended on a Language Understanding Intelligent Service (LUIS) application or simple pattern matching constructs. With the retirement:

  • IntentRecognizer, pattern matching intents/entities, and related parameters are no longer available.
  • Existing applications must remove direct Speech SDK intent logic and adopt either a two-step approach (speech to text, then intent classification) or a single prompt-based approach. A sketch of the two-step approach follows this list.
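
The following sketch illustrates the two-step approach in Python, assuming the Speech SDK (azure-cognitiveservices-speech) and the conversations client from the azure-ai-language-conversations package. The key, region, endpoint, project name, and deployment name values are placeholders, and the response shape should be confirmed against the CLU API version you target.

```python
# Two-step sketch: transcribe audio with the Speech SDK, then classify the
# transcript with a deployed CLU model. All key, region, endpoint, project,
# and deployment values are placeholders.
import azure.cognitiveservices.speech as speechsdk
from azure.ai.language.conversations import ConversationAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Step 1: speech to text (replaces IntentRecognizer-based recognition).
speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")
audio_config = speechsdk.audio.AudioConfig(filename="utterance.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
utterance = recognizer.recognize_once().text  # for example, "Book a flight to Seattle"

# Step 2: intent classification with a deployed CLU model.
clu_client = ConversationAnalysisClient(
    "https://<language-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<language-key>"),
)
clu_result = clu_client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "1", "text": utterance}
        },
        "parameters": {
            "projectName": "<clu-project>",
            "deploymentName": "<clu-deployment>",
        },
    }
)
prediction = clu_result["result"]["prediction"]
print(prediction["topIntent"], prediction["entities"])
```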

Choose an alternative

| Requirement | Recommended service | Why |
| --- | --- | --- |
| Structured intent and entity extraction with labeled training data | Language Service Conversational Language Understanding (CLU) | Purpose-built for multi-intent classification and entity extraction; supports versions, testing, and analytics. |
| Multilingual speech input mapped to a consistent intent schema | Speech (speech to text) + CLU | Speech handles transcription; CLU handles normalization and classification. |

Migration steps

  1. Replace any Speech SDK IntentRecognizer usage with SpeechRecognizer or ConversationTranscriber to obtain text.
  2. For structured intent/entity needs, create a CLU project and deploy a model. Send transcribed utterances to the CLU prediction API.
  3. Remove dependencies on LanguageUnderstandingModel and any LUIS application IDs or endpoints from configuration.
  4. Eliminate pattern matching code referencing PatternMatchingIntent or PatternMatchingEntity types.
  5. Validate accuracy by comparing historic IntentRecognizer outputs to CLU classification results or OpenAI completions, adjusting training data or prompts as needed (see the validation sketch after these steps).
  6. Update monitoring: shift any existing intent latency/accuracy dashboards to new sources (CLU evaluation logs or OpenAI prompt result tracking).
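
As one way to carry out step 5, the sketch below replays historic utterance-to-intent pairs (for example, captured from IntentRecognizer logs) through CLU and reports an agreement rate. The CSV column names and the classify_with_clu helper are hypothetical; the helper stands in for the CLU call shown in the earlier sketch.

```python
# Validation sketch for step 5: compare legacy IntentRecognizer intents with
# CLU predictions over a set of historic utterances. The CSV format
# (utterance, legacy_intent) and the classify_with_clu helper are placeholders.
import csv

def classify_with_clu(utterance: str) -> str:
    """Return the CLU top intent for an utterance (see the earlier sketch)."""
    raise NotImplementedError

def validate(history_csv: str) -> float:
    total = 0
    matches = 0
    mismatches = []
    with open(history_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects columns: utterance, legacy_intent
            new_intent = classify_with_clu(row["utterance"])
            total += 1
            if new_intent == row["legacy_intent"]:
                matches += 1
            else:
                mismatches.append((row["utterance"], row["legacy_intent"], new_intent))
    # Review mismatches to decide where the CLU project needs more labeled examples.
    for utterance, old, new in mismatches[:20]:
        print(f"MISMATCH: '{utterance}' legacy={old} clu={new}")
    return matches / total if total else 0.0

print(f"Agreement rate: {validate('historic_intents.csv'):.1%}")
```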

Sample architecture

  1. Speech to text transcribes audio into text in real-time or batch mode.
  2. The transcript is classified (for example, by CLU), and the response is normalized into a common JSON shape (for example: { "intent": "BookFlight", "entities": { "Destination": "Seattle" } }). A sketch of this normalization step follows the list.
  3. Business logic routes the normalized output to downstream services (booking, knowledge base, workflow engine).
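
As an illustration of the normalization in step 2, the helper below maps a CLU prediction into the common JSON shape shown above. The field names ("topIntent", plus "entities" entries with "category" and "text") follow the CLU prediction format, but confirm them against the API version you deploy.

```python
# Sketch of the normalization step: map a CLU prediction into the common shape
# consumed by downstream business logic. Field names assume the CLU prediction
# format ("topIntent", plus "entities" entries with "category" and "text").
from typing import Any, Dict

def normalize_clu_prediction(prediction: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "intent": prediction.get("topIntent"),
        "entities": {
            entity["category"]: entity["text"]
            for entity in prediction.get("entities", [])
        },
    }

# Example output: {"intent": "BookFlight", "entities": {"Destination": "Seattle"}}
print(normalize_clu_prediction({
    "topIntent": "BookFlight",
    "entities": [{"category": "Destination", "text": "Seattle"}],
}))
```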

Result format considerations

| Aspect | CLU |
| --- | --- |
| Schema stability | High (defined intents/entities) |
| Versioning | Built-in model versions |
| Training effort | Requires labeled dataset |
| Edge cases | Requires more labeled data |
| Latency | Prediction API call |

Frequently asked questions

Do I need to re-label data? If you used LUIS, export your application's data and import it into CLU, then retrain. The mapping is often direct (intents, entities). Pattern matching intents might require manual conversion to labeled example utterances.

Is speaker diarization affected? No. Diarization features continue; you just process each speaker segment through CLU or OpenAI after transcription.
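
For example, with conversation transcription you can route each transcribed segment to CLU keyed by speaker. The sketch below assumes the ConversationTranscriber event model from the Speech SDK for Python; classify_with_clu is a hypothetical stand-in for the CLU call shown earlier, and the key, region, and file name are placeholders.

```python
# Sketch: transcribe with speaker diarization, then classify each transcribed
# segment with CLU. classify_with_clu is a placeholder for the CLU call shown
# in the earlier sketch; key, region, and file name are placeholders too.
import time
import azure.cognitiveservices.speech as speechsdk

def classify_with_clu(utterance: str) -> str:
    """Placeholder for the CLU prediction call (see the earlier sketch)."""
    return "<topIntent>"

speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")
audio_config = speechsdk.audio.AudioConfig(filename="meeting.wav")
transcriber = speechsdk.transcription.ConversationTranscriber(
    speech_config=speech_config, audio_config=audio_config
)

done = False

def on_transcribed(evt):
    # Each event carries the recognized text and the detected speaker ID.
    if evt.result.text:
        print(f"speaker={evt.result.speaker_id} intent={classify_with_clu(evt.result.text)}")

def on_stopped(evt):
    global done
    done = True

transcriber.transcribed.connect(on_transcribed)
transcriber.session_stopped.connect(on_stopped)
transcriber.canceled.connect(on_stopped)

transcriber.start_transcribing_async().get()
while not done:
    time.sleep(0.5)
transcriber.stop_transcribing_async().get()
```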