Query Prometheus metrics using Azure workbooks
Create dashboards powered by Azure Monitor managed service for Prometheus using Azure Workbooks. This article introduces workbooks for Azure Monitor workspaces and shows you how to query Prometheus metrics using Azure workbooks and the Prometheus query language (PromQL).
Prerequisites
To query Prometheus metrics from an Azure Monitor workspace, you need the following:
- An Azure Monitor workspace. To create an Azure Monitor workspace, see Create an Azure Monitor Workspace.
- Your Azure Monitor workspace must be collecting Prometheus metrics from an AKS cluster.
- The user must be assigned a role that can perform the microsoft.monitor/accounts/read operation on the Azure Monitor workspace.
Prometheus Explorer workbook
Azure Monitor workspaces include an exploration workbook to query your Prometheus metrics.
- From the Azure Monitor workspace overview page, select Prometheus explorer.
- Or select the Workbooks menu item, and in the Azure Monitor workspace gallery, select the Prometheus Explorer workbook tile.
The workbook has the following input options:
- Time Range. Select the period of time that you want to include in your query. Select Custom to set a start and end time.
- PromQL. Enter the PromQL query to retrieve your data. For more information about PromQL, see Querying Prometheus. For a sample query, see the example after this list.
- Graph, Grid, and Dimensions tabs. Switch between graphical, tabular, and dimensional views of the query output.
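For example, assuming your cluster scrapes kube-state-metrics (one of the default targets for managed Prometheus; treat the metric and job names here as illustrative for your environment), a query like the following returns the number of pods in each phase:

```promql
# Count of pods per phase, assuming kube-state-metrics is being scraped
sum by (phase) (kube_pod_status_phase)
```

If you're not sure which metrics are available, running the query up returns one series per target that is currently being scraped.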
Create a Prometheus workbook
Workbooks support many visualizations and Azure integrations. For more information about Azure Workbooks, see Creating an Azure Workbook.
1. From your Azure Monitor workspace, select Workbooks.
2. Select New.
3. In the new workbook, select Add, and select Add query from the dropdown.
4. Azure Workbooks use data sources to set the scope of the data they present. To query Prometheus metrics, select the Data source dropdown, and choose Prometheus.
5. From the Azure Monitor workspace dropdown, select your workspace.
6. Select your query type from the Prometheus query type dropdown.
7. Write your PromQL query in the Prometheus Query field. For a sample query, see the example after these steps.
8. Select the Run Query button.
9. Select Done Editing at the bottom of the section, and save your work.
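For example, a range query along the following lines plots approximate CPU utilization per node. This is a sketch that assumes node_cpu_seconds_total is collected in your workspace (it comes from the node exporter target; verify against your own scrape configuration):

```promql
# Approximate CPU utilization per node: 1 minus the fraction of time spent idle
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
```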
Troubleshooting
If you receive a message indicating that "You currently do not have any Prometheus data ingested to this Azure Monitor workspace":
- Verify that you have turned on metrics collection in the Monitored clusters blade of your Azure Monitor workspace.
If your workbook query doesn't return data and you see the message "You do not have query access":
- Check that you've been assigned a role through Access control (IAM) on your Azure Monitor workspace that can perform the microsoft.monitor/accounts/read operation.
- Confirm that your Networking settings allow query access. You might need to enable private access through your private endpoint or change settings to allow public access.
- If you have an ad blocker enabled in your browser, you might need to pause or disable it and refresh the workbook to view data.
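After you enable metrics collection and have query access, a minimal query such as the following confirms that data is arriving (any metric you know is collected works just as well):

```promql
# Returns one series per scrape target; a non-empty result means ingestion and query access are working
up
```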
Frequently asked questions
This section provides answers to common questions.
I am missing all or some of my metrics. How can I troubleshoot?
You can use the troubleshooting guide for ingesting Prometheus metrics from the managed agent here.
Why am I missing metrics that have two labels with the same name but different casing?
Azure managed Prometheus is a case-insensitive system. It treats two time series as the same series if they differ only by the casing of their metric names, label names, or label values. For more information, see Prometheus metrics overview.
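For example (the metric and label names here are purely illustrative), the following two series differ only in the casing of a label value, so managed Prometheus stores and queries them as a single time series:

```promql
# Treated as one time series because the label values differ only by case
my_app_requests_total{environment="PROD"}
my_app_requests_total{environment="prod"}
```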
I see some gaps in metric data. Why is this occurring?
During node updates, you might see a 1-minute to 2-minute gap in metric data for metrics collected by our cluster-level collectors. This gap occurs because the node that the collector runs on is being updated as part of the normal update process, whether your cluster is updated manually or via autoupdate. The gap affects cluster-wide targets such as kube-state-metrics and any custom application targets that are specified. This behavior is expected and doesn't affect any of our recommended alert rules.
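To check whether a gap lines up with the collector moving between nodes, a query along these lines shows whether the kube-state-metrics target was being scraped during the window (a sketch; the job label value depends on your scrape configuration):

```promql
# 1 when the kube-state-metrics target was scraped successfully; 0 or no data when it wasn't
up{job="kube-state-metrics"}
```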