Ingest data from OpenTelemetry to Azure Data Explorer

Important

This connector can be used in Real-Time Intelligence in Microsoft Fabric. Use the instructions in this article with the following exceptions:

OpenTelemetry (OTel) is an open framework for application observability. The instrumentation is hosted by the Cloud Native Computing Foundation (CNCF) and provides standard interfaces for observability data, including metrics, logs, and traces. The OTel Collector is made up of three components: receivers deal with how to get data into the Collector, processors determine what to do with the received data, and exporters determine where to send the received data.

The Azure Data Explorer exporter supports ingestion of data from many receivers into Azure Data Explorer.

In this article, you learn how to:

  • Set up your environment
  • Configure the Azure Data Explorer exporter
  • Run the sample application
  • Query incoming data

Prerequisites

  • An Azure subscription.
  • An Azure Data Explorer cluster and database.

Set up your environment

In this section, you prepare your environment to use the OTel exporter.

Create a Microsoft Entra app registration

Microsoft Entra application authentication is used for applications that need to access Azure Data Explorer without a user present. To ingest data using the OTel exporter, you need to create and register a Microsoft Entra service principal, and then authorize this principal to ingest data into an Azure Data Explorer database.

  1. Using your Azure Data Explorer cluster, follow steps 1-7 in Create a Microsoft Entra application registration in Azure Data Explorer.
  2. Save the following values to be used in later steps:
    • Application (client) ID
    • Directory (tenant) ID
    • Client secret key value

Grant the Microsoft Entra app permissions

  1. In the query tab of the web UI, connect to your cluster. For more information on how to connect, see Add clusters.

  2. Browse to the database in which you want to ingest data.

  3. Run the following management command, replacing <DatabaseName> with the name of the target database and <ApplicationID> with the value you saved earlier. This command grants the app the database ingestor role. For more information, see Manage database security roles.

    .add database <DatabaseName> ingestors ('aadapp=<ApplicationID>') 'Azure Data Explorer App Registration'
    

    Note

    The last parameter is a string that shows up as notes when you query the roles associated with a database. For more information, see View existing security roles.
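
    Optionally, to confirm that the role was assigned, you can list the database principals. This is a quick check rather than a required step; replace <DatabaseName> with the name of the target database:

    .show database <DatabaseName> principals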

Create target tables

  1. Browse to the Azure Data Explorer web UI.

  2. Select Query from the left menu.

  3. Expand the target cluster in the left pane.

  4. Select the target database to give your queries the correct context.

  5. Run the following commands to create tables and schema mapping for the incoming data:

    .create-merge table <Logs-Table-Name> (Timestamp:datetime, ObservedTimestamp:datetime, TraceID:string, SpanID:string, SeverityText:string, SeverityNumber:int, Body:string, ResourceAttributes:dynamic, LogsAttributes:dynamic) 
    
    .create-merge table <Metrics-Table-Name> (Timestamp:datetime, MetricName:string, MetricType:string, MetricUnit:string, MetricDescription:string, MetricValue:real, Host:string, ResourceAttributes:dynamic,MetricAttributes:dynamic) 
    
    .create-merge table <Traces-Table-Name> (TraceID:string, SpanID:string, ParentID:string, SpanName:string, SpanStatus:string, SpanKind:string, StartTime:datetime, EndTime:datetime, ResourceAttributes:dynamic, TraceAttributes:dynamic, Events:dynamic, Links:dynamic) 
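
    Optionally, you can confirm that the tables were created. The following check assumes the suggested table names used later in this article (OTELLogs, OTELMetrics, and OTELTraces); adjust the names if you chose different ones:

    .show tables
    | where TableName in ("OTELLogs", "OTELMetrics", "OTELTraces")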
    

Set up streaming ingestion

Azure Data Explorer has two main types of ingestion: batching and streaming. For more information, see batching vs. streaming ingestion. The streaming method is called managed in the Azure Data Explorer exporter configuration. Streaming ingestion may be a good choice if you need logs and traces to be available in near real time. However, streaming ingestion uses more resources than batched ingestion. The OTel framework itself batches data, which should be considered when choosing an ingestion method.

Note

Streaming ingestion must be enabled on the Azure Data Explorer cluster for the managed option to work. You can check whether streaming is enabled by using the .show database <DatabaseName> policy streamingingestion command.

Run the following command for each of the three tables to enable streaming ingestion:

.alter table <Table-Name> policy streamingingestion enable
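
For example, assuming the suggested table names OTELLogs, OTELMetrics, and OTELTraces, the commands would be:

.alter table OTELLogs policy streamingingestion enable

.alter table OTELMetrics policy streamingingestion enable

.alter table OTELTraces policy streamingingestion enable

To verify the policy on a specific table, run .show table OTELLogs policy streamingingestion.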

Configure the Azure Data Explorer exporter

To ingest your OpenTelemetry data into Azure Data Explorer, you need to deploy and run an OpenTelemetry Collector distribution that includes the Azure Data Explorer exporter, using the following exporter configuration.

  1. Configure the Azure Data Explorer exporter using the following fields:

    • Exporters: Type of exporter. Suggested setting: Azure Data Explorer
    • cluster_uri: URI of the Azure Data Explorer cluster that holds the database and tables. Suggested setting: https://<cluster>.kusto.chinacloudapi.cn
    • application_id: Client ID. Suggested setting: <application id>
    • application_key: Client secret. Suggested setting: <application key>
    • tenant_id: Tenant. Suggested setting: <application tenant>
    • db_name: Database that receives the logs. Suggested setting: oteldb, or another database you have already created
    • metrics_table_name: The target table in the database db_name that stores exported metric data. Suggested setting: OTELMetrics
    • logs_table_name: The target table in the database db_name that stores exported log data. Suggested setting: OTELLogs
    • traces_table_name: The target table in the database db_name that stores exported trace data. Suggested setting: OTELTraces
    • ingestion_type: Type of ingestion: managed (streaming) or batched. Suggested setting: managed
    • metrics_table_json_mapping: Optional parameter. The default table mapping is defined during table creation based on OTel metric attributes. The default mapping can be changed using this parameter. Suggested setting: <json metrics_table_name mapping>
    • logs_table_json_mapping: Optional parameter. The default table mapping is defined during table creation based on OTel log attributes. The default mapping can be changed using this parameter. Suggested setting: <json logs_table_name mapping>
    • traces_table_json_mapping: Optional parameter. The default table mapping is defined during table creation based on OTel trace attributes. The default mapping can be changed using this parameter. Suggested setting: <json traces_table_name mapping>
    • traces: Traces components to enable in the service pipelines. Suggested setting: receivers: [otlp], processors: [batch], exporters: [azuredataexplorer]
    • metrics: Metrics components to enable in the service pipelines. Suggested setting: receivers: [otlp], processors: [batch], exporters: [azuredataexplorer]
    • logs: Logs components to enable in the service pipelines. Suggested setting: receivers: [otlp], processors: [batch], exporters: [azuredataexplorer]
  2. Use the --config flag to pass this configuration file when you run the collector with the Azure Data Explorer exporter.

The following is an example configuration for the Azure Data Explorer exporter:

---
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
exporters:
  azuredataexplorer:
    cluster_uri: "https://<cluster>.kusto.chinacloudapi.cn"
    application_id: "<application id>"
    application_key: "<application key>"
    tenant_id: "<application tenant>"
    db_name: "oteldb"
    metrics_table_name: "OTELMetrics"
    logs_table_name: "OTELLogs"
    traces_table_name: "OTELTraces"
    ingestion_type: "managed"
    metrics_table_json_mapping: "<json metrics_table_name mapping>"
    logs_table_json_mapping: "<json logs_table_name mapping>"
    traces_table_json_mapping: "<json traces_table_name mapping>"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [azuredataexplorer]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [azuredataexplorer]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [azuredataexplorer]
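
The three *_table_json_mapping fields are optional. If the default mapping doesn't fit your needs, you can create a custom JSON ingestion mapping on a target table and reference it from the exporter configuration (see the exporter documentation for how the mapping is referenced). The following is a minimal sketch for the OTELLogs table; the mapping name otel_logs_custom_mapping and the JSON paths shown are illustrative assumptions, not the exporter's actual payload field names:

.create table OTELLogs ingestion json mapping "otel_logs_custom_mapping" '[{"column": "Timestamp", "Properties": {"Path": "$.Timestamp"}}, {"column": "Body", "Properties": {"Path": "$.Body"}}, {"column": "SeverityText", "Properties": {"Path": "$.SeverityText"}}]'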

Collect data with a sample application

Now that the collector is configured, you need to send data to be ingested. In this example, you use the sample Spring PetClinic application with the OpenTelemetry Java agent.

  1. Download the OpenTelemetry Java agent (opentelemetry-javaagent.jar).

  2. To enable OpenTelemetry for the sample application, set the following environment variables. The <open-telemetry-collector-host> placeholder references the host where the OTel Collector with the Azure Data Explorer exporter is configured and running.

    $env:OTEL_SERVICE_NAME="pet-clinic-service"
    $env:OTEL_TRACES_EXPORTER="otlp"
    $env:OTEL_LOGS_EXPORTER="otlp"
    $env:OTEL_EXPORTER_OTLP_ENDPOINT="http://<open-telemetry-collector-host>:4317"
    
  3. Run the sample Spring Boot application with the following command line arguments:

    java -javaagent:./opentelemetry-javaagent.jar -jar spring-petclinic-<version>-SNAPSHOT.jar    
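
Once the sample application is running and generating traffic, you can quickly confirm that data is reaching all three tables. The following query assumes the suggested table names from the example configuration:

union withsource=TableName OTELLogs, OTELMetrics, OTELTraces
| summarize Rows = count() by TableName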
    

Query incoming data

Once the sample app has run, your data has been ingested into the tables you created in Azure Data Explorer. These tables live in the database defined in the OTel Collector configuration, oteldb in this example, and are the tables named in that configuration: OTELMetrics, OTELLogs, and OTELTraces. In this section, you query each table separately to get a small selection of the available data.

  1. Browse to the Azure Data Explorer web UI.

  2. Select Query from the left menu.

  3. Expand the target cluster in the left pane.

  4. Select the oteldb database to give your queries the correct context.

  5. Copy and paste the following queries, one at a time, to see a small sample of rows from each table (a more analytical example query follows this list):

    • Metrics

      OTELMetrics
      |take 2
      

      You should get results that are similar to, but not exactly the same as, the following table:

      Timestamp MetricName MetricType MetricUnit MetricDescription MetricValue Host MetricAttributes ResourceAttributes
      2022-07-01T12:55:33Z http.server.active_requests Sum requests The number of concurrent HTTP requests that are currently in-flight 0 DESKTOP-SFS7RUQ {"http.flavor":"1.1", "http.host":"localhost:8080", "scope.name":"io.opentelemetry.tomcat-7.0", "scope.version":"1.14.0-alpha", "http.method":"GET", "http.scheme":"http"} {"host.name":"DESKTOP-SFS7RUQ", "process.command_line":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe -javaagent:./opentelemetry-javaagent.jar", "process.runtime.description":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 18.0.1.1+2-6", "process.runtime.version":"18.0.1.1+2-6", "telemetry.sdk.language":"java", "host.arch":"amd64", "process.runtime.name":"Java(TM) SE Runtime Environment", "telemetry.auto.version":"1.14.0", "telemetry.sdk.name":"opentelemetry", "os.type":"windows", "os.description":"Windows 11 10.0", "process.executable.path":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe", "process.pid":5980, "service.name":"my-service", "telemetry.sdk.version":"1.14.0"}
      2022-07-01T12:55:33Z http.server.duration_sum Histogram ms The duration of the inbound HTTP request(Sum total of samples) 114.9881 DESKTOP-SFS7RUQ {"http.flavor":"1.1", "http.host":"localhost:8080", "scope.name":"io.opentelemetry.tomcat-7.0", "scope.version":"1.14.0-alpha", "http.method":"GET", "http.scheme":"http", "http.route":"/owners/find", "http.status_code":200} {"host.name":"DESKTOP-SFS7RUQ", "process.command_line":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe -javaagent:./opentelemetry-javaagent.jar", "process.runtime.description":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 18.0.1.1+2-6", "process.runtime.version":"18.0.1.1+2-6", "telemetry.sdk.language":"java", "host.arch":"amd64", "process.runtime.name":"Java(TM) SE Runtime Environment", "telemetry.auto.version":"1.14.0", "telemetry.sdk.name":"opentelemetry", "os.type":"windows", "os.description":"Windows 11 10.0", "process.executable.path":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe", "process.pid":5980, "service.name":"my-service", "telemetry.sdk.version":"1.14.0"}
    • Logs

      OTELLogs
      |take 2
      

      You should get results that are similar to, but not exactly the same as, the following table:

      Timestamp TraceId SpanId SeverityText SeverityNumber Body ResourceAttributes LogsAttributes
      2022-07-01T13:00:39Z INFO 9 Starting PetClinicApplication v2.7.0-SNAPSHOT using Java 18.0.1.1 on DESKTOP-SFS7RUQ with PID 37280 (C:\Users\adxuser\Documents\Repos\spring-petclinic\target\spring-petclinic-2.7.0-SNAPSHOT.jar started by adxuser in C:\Users\adxuser\Documents\Repos\spring-petclinic) {"host.name":"DESKTOP-SFS7RUQ", "process.executable.path":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe", "process.pid":37280, "process.runtime.description":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 18.0.1.1+2-6", "process.runtime.name":"Java(TM) SE Runtime Environment", "telemetry.sdk.name":"opentelemetry", "os.type":"windows", "process.runtime.version":"18.0.1.1+2-6", "telemetry.sdk.language":"java", "telemetry.sdk.version":"1.14.0", "process.command_line":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe -javaagent:./opentelemetry-javaagent.jar", "os.description":"Windows 11 10.0", "service.name":"my-service", "telemetry.auto.version":"1.14.0", "host.arch":"amd64"} {"scope.name":"org.springframework.samples.petclinic.PetClinicApplication"}
      2022-07-01T13:00:39Z INFO 9 No active profile set, falling back to 1 default profile: "default" {"host.name":"DESKTOP-SFS7RUQ", "process.executable.path":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe", "process.pid":37280, "process.runtime.description":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 18.0.1.1+2-6", "process.runtime.name":"Java(TM) SE Runtime Environment", "telemetry.sdk.name":"opentelemetry", "os.type":"windows", "process.runtime.version":"18.0.1.1+2-6", "telemetry.sdk.language":"java", "telemetry.sdk.version":"1.14.0", "process.command_line":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe -javaagent:./opentelemetry-javaagent.jar", "os.description":"Windows 11 10.0", "service.name":"my-service", "telemetry.auto.version":"1.14.0", "host.arch":"amd64"} {"scope.name":"org.springframework.samples.petclinic.PetClinicApplication"}
    • Traces

      OTELTraces
      |take 2
      

      You should get results that are similar to, but not exactly the same as, the following table:

      TraceId SpanId ParentId SpanName SpanStatus SpanKind StartTime EndTime ResourceAttributes TraceAttributes Events Links
      573c0e4e002a9f7281f6d63eafe4ef87 dab70d0ba8902c5e 87d003d6-02c1-4f3d-8972-683243c35642 STATUS_CODE_UNSET SPAN_KIND_CLIENT 2022-07-01T13:17:59Z 2022-07-01T13:17:59Z {"telemetry.auto.version":"1.14.0", "os.description":"Windows 11 10.0", "process.executable.path":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe", "process.runtime.description":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 18.0.1.1+2-6", "service.name":"my-service", "process.runtime.name":"Java(TM) SE Runtime Environment", "telemetry.sdk.language":"java", "telemetry.sdk.name":"opentelemetry", "host.arch":"amd64", "host.name":"DESKTOP-SFS7RUQ", "process.pid":34316, "process.runtime.version":"18.0.1.1+2-6", "os.type":"windows", "process.command_line":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe -javaagent:./opentelemetry-javaagent.jar", "telemetry.sdk.version":"1.14.0"} {"db.user":"sa", "thread.id":1, "db.name":"87d003d6-02c1-4f3d-8972-683243c35642", "thread.name":"main", "db.system":"h2", "scope.name":"io.opentelemetry.jdbc", "scope.version":"1.14.0-alpha", "db.connection_string":"h2:mem:", "db.statement":"DROP TABLE vet_specialties IF EXISTS"} [] []
      84a9a8c4009d91476da02dfa40746c13 3cd4c0e91717969a 87d003d6-02c1-4f3d-8972-683243c35642 STATUS_CODE_UNSET SPAN_KIND_CLIENT 2022-07-01T13:17:59Z 2022-07-01T13:17:59Z {"telemetry.auto.version":"1.14.0", "os.description":"Windows 11 10.0", "process.executable.path":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe", "process.runtime.description":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 18.0.1.1+2-6", "service.name":"my-service", "process.runtime.name":"Java(TM) SE Runtime Environment", "telemetry.sdk.language":"java", "telemetry.sdk.name":"opentelemetry", "host.arch":"amd64", "host.name":"DESKTOP-SFS7RUQ", "process.pid":34316, "process.runtime.version":"18.0.1.1+2-6", "os.type":"windows", "process.command_line":"C:\Program Files\Java\jdk-18.0.1.1;bin;java.exe -javaagent:./opentelemetry-javaagent.jar", "telemetry.sdk.version":"1.14.0"} {"db.user":"sa", "thread.id":1, "db.name":"87d003d6-02c1-4f3d-8972-683243c35642", "thread.name":"main", "db.system":"h2", "scope.name":"io.opentelemetry.jdbc", "scope.version":"1.14.0-alpha", "db.connection_string":"h2:mem:", "db.statement":"DROP TABLE vets IF EXISTS"} [] []
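
Once you've confirmed that data exists in each table, you can start slicing it by the attributes the agent attaches. For example, the following query, which assumes the OTELTraces table created earlier, extracts the service name from ResourceAttributes and shows the most frequent span names:

OTELTraces
| extend ServiceName = tostring(ResourceAttributes["service.name"])
| summarize SpanCount = count() by ServiceName, SpanName
| top 10 by SpanCount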

Further data processing

Using update policies, the collected data can be further processed according to your application's needs. For more information, see Update policy overview.

  1. The following example exports histogram metrics to a histogram-specific table with buckets and aggregates. Run the following commands in the query pane of the Azure Data Explorer web UI:

    .create table HistoBucketData (Timestamp: datetime, MetricName: string , MetricType: string , Value: double, LE: double, Host: string , ResourceAttributes: dynamic, MetricAttributes: dynamic )
    
    .create function 
    with ( docstring = "Histo bucket processing function", folder = "UpdatePolicyFunctions") ExtractHistoColumns()
    {
        OTELMetrics
        | where MetricType == 'Histogram' and MetricName has "_bucket"
        | extend f=parse_json(MetricAttributes)
        | extend le=todouble(f.le)
        | extend M_name=replace_string(MetricName, '_bucket','')
        | project Timestamp, MetricName=M_name, MetricType, Value=MetricValue, LE=le, Host, ResourceAttributes, MetricAttributes
    }
    
    .alter table HistoBucketData policy update 
    @'[{ "IsEnabled": true, "Source": "OTELMetrics","Query": "ExtractHistoColumns()", "IsTransactional": false, "PropagateIngestionProperties": false}]'
    
  2. The following commands create a table that contains only the count and sum values of the Histogram metric type and attach an update policy to it. Run the following commands in the query pane of the Azure Data Explorer web UI:

    .create table HistoData (Timestamp: datetime, MetricName: string, MetricType: string, Count: double, Sum: double, Host: string, ResourceAttributes: dynamic, MetricAttributes: dynamic)
    
    .create function 
    with ( docstring = "Histo sum count processing function", folder = "UpdatePolicyFunctions") ExtractHistoCountColumns()
    {
        OTELMetrics
        | where MetricType == 'Histogram'
        | where MetricName has "_count"
        | extend Count=MetricValue
        | extend M_name=replace_string(MetricName, '_count','')
        | join kind=inner (OTELMetrics
            | where MetricType == 'Histogram'
            | where MetricName has "_sum"
            | project Sum = MetricValue, Timestamp)
            on Timestamp
        | project Timestamp, MetricName=M_name, MetricType, Count, Sum, Host, ResourceAttributes, MetricAttributes
    }
    
    .alter table HistoData policy update 
    @'[{ "IsEnabled": true, "Source": "OTELMetrics", "Query": "ExtractHistoCountColumns()", "IsTransactional": false, "PropagateIngestionProperties": false}]'
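
After the update policies are in place, newly ingested histogram metrics populate the derived tables. As one example, assuming the sample application's http.server.duration histogram, the following query computes the average request duration over time from the HistoData table:

HistoData
| where MetricName == "http.server.duration"
| extend AvgDuration = Sum / Count
| summarize avg(AvgDuration) by bin(Timestamp, 5m)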