Monitor Azure Functions

Azure Functions offers built-in integration with Azure Application Insights to monitor functions. This article shows you how to configure Azure Functions to send system-generated log files to Application Insights.

We recommend using Application Insights because it collects log, performance, and error data. It automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues and to understand how your functions are used. It's designed to help you continuously improve performance and usability. You can even use Application Insights during local function app project development. For more information, see What is Application Insights?.

As the required Application Insights instrumentation is built into Azure Functions, all you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually.

Application Insights pricing and limits

You can try out Application Insights integration with Azure Functions for free. There's a daily limit to how much data can be processed for free, and you might hit this limit during testing. Azure provides portal and email notifications when you're approaching your daily limit. If you miss those alerts and hit the limit, new logs won't appear in Application Insights queries. Be aware of the limit to avoid unnecessary troubleshooting time. For more information, see Manage pricing and data volume in Application Insights.

Important

Application Insights has a sampling feature that can protect you from producing too much telemetry data on completed executions at times of peak load. Sampling is enabled by default. If you appear to be missing data, you might need to adjust the sampling settings to fit your particular monitoring scenario. To learn more, see Configure sampling.

The full list of Application Insights features available to your function app is detailed in Application Insights for Azure Functions supported features.

View telemetry in Monitor tab

With Application Insights integration enabled, you can view telemetry data in the Monitor tab.

  1. In the function app page, select a function that has run at least once after Application Insights was configured. Then, select Monitor from the left pane. Select Refresh periodically, until the list of function invocations appears.

    Invocation list

    Note

    It can take up to five minutes for the list to appear while the telemetry client batches data for transmission to the server. The delay doesn't apply to the Live Metrics Stream. That service connects to the Functions host when you load the page, so logs are streamed directly to the page.

  2. To see the logs for a particular function invocation, select the Date (UTC) column link for that invocation. The logging output for that invocation appears in a new page.

    Invocation details

  3. Choose Run in Application Insights to view the source of the query that retrieves the Azure Monitor log data in Azure Logs. If this is your first time using Azure Log Analytics in your subscription, you're asked to enable it.

  4. After you enable Log Analytics, the following query is displayed. You can see that the query results are limited to the last 30 days (where timestamp > ago(30d)). In addition, the results show no more than 20 rows (take 20). In contrast, the invocation details list for your function is for the last 30 days with no limit. A sketch of the generated query appears after this list.

    Application Insights Analytics invocation list
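
The generated query has roughly the following shape (a sketch only; the portal projects additional columns, and <YOUR_FUNCTION_NAME> stands in for your function's operation name):

requests
| where timestamp > ago(30d)
| where operation_Name =~ '<YOUR_FUNCTION_NAME>'
| order by timestamp desc
| take 20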

For more information, see Query telemetry data later in this article.

View telemetry in Application Insights

To open Application Insights from a function app in the Azure portal, select Application Insights under Settings in the left pane. If this is your first time using Application Insights with your subscription, you'll be prompted to enable it: select Turn on Application Insights, and then select Apply on the next page.

Open Application Insights from the function app Overview page

For information about how to use Application Insights, see the Application Insights documentation. This section shows some examples of how to view data in Application Insights. If you're already familiar with Application Insights, you can go directly to the sections about how to configure and customize the telemetry data.

Application Insights Overview tab

The following areas of Application Insights can be helpful when evaluating the behavior, performance, and errors in your functions:

Failures: Create charts and alerts based on function failures and server exceptions. The Operation Name is the function name. Failures in dependencies aren't shown unless you implement custom telemetry for dependencies.
Performance: Analyze performance issues by viewing resource utilization and throughput per cloud role instance. This data can be useful for debugging scenarios where functions are bogging down your underlying resources.
Metrics: Create charts and alerts that are based on metrics. Metrics include the number of function invocations, execution time, and success rates.
Live Metrics: View metrics data as it's created in near real time.

Query telemetry data

Application Insights Analytics gives you access to all telemetry data in the form of tables in a database. Analytics provides a query language for extracting, manipulating, and visualizing the data.

Choose Logs to explore or query for logged events.

Analytics example

Here's a query example that shows the distribution of requests per worker over the last 30 minutes.

requests
| where timestamp > ago(30m) 
| summarize count() by cloud_RoleInstance, bin(timestamp, 1m)
| render timechart

The tables that are available are shown in the Schema tab on the left. You can find data generated by function invocations in the following tables:

traces: Logs created by the runtime and by function code.
requests: One request for each function invocation.
exceptions: Any exceptions thrown by the runtime.
customMetrics: The count of successful and failing invocations, success rate, and duration.
customEvents: Events tracked by the runtime, for example: HTTP requests that trigger a function.
performanceCounters: Information about the performance of the servers that the functions are running on.

The other tables are for availability tests, and client and browser telemetry. You can implement custom telemetry to add data to them.

Within each table, some of the Functions-specific data is in a customDimensions field. For example, the following query retrieves all traces that have log level Error.

traces 
| where customDimensions.LogLevel == "Error"

The runtime provides the customDimensions.LogLevel and customDimensions.Category fields. You can provide additional fields in logs that you write in your function code. See Structured logging later in this article.

Configure categories and log levels

You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. For a function app, logging is configured in the host.json file.

Categories

The Azure Functions logger includes a category for every log. The category indicates which part of the runtime code or your function code wrote the log. The following list describes the main categories of logs that the runtime creates.

Host.Results: These logs show as requests in Application Insights. They indicate success or failure of a function. All of these logs are written at Information level. If you filter at Warning or above, you won't see any of this data.
Host.Aggregator: These logs provide counts and averages of function invocations over a configurable period of time. The default period is 30 seconds or 1,000 results, whichever comes first. The logs are available in the customMetrics table in Application Insights. Examples are the number of runs, success rate, and duration. All of these logs are written at Information level. If you filter at Warning or above, you won't see any of this data.

All logs for categories other than these are available in the traces table in Application Insights.
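
For example, the following query (a sketch) counts the trace entries each category produced over the last hour:

traces
| where timestamp > ago(1h)
| summarize count() by tostring(customDimensions.Category)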

All logs with categories that begin with Host are written by the Functions runtime. For successful runs, these logs are written at Information level. Exceptions are logged at Error level. The runtime also creates Warning level logs, for example: queue messages sent to the poison queue.

In version 1.x of the Functions runtime, the function started, function executed, and function completed logs have the category Host.Executor. Starting in version 2.x, these logs have the category Function.<YOUR_FUNCTION_NAME>.

If you write logs in your function code, the category is Function.<YOUR_FUNCTION_NAME>.User and can be any log level. In version 1.x of the Functions runtime, the category is Function.
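
To see only the logs written by your own function code, you can filter on that category (a sketch; <YOUR_FUNCTION_NAME> is a placeholder):

traces
| where customDimensions.Category == "Function.<YOUR_FUNCTION_NAME>.User"
| order by timestamp desc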

Log levels

The Azure Functions logger also includes a log level with every log. LogLevel is an enumeration, and the integer code indicates relative importance:

Trace: 0
Debug: 1
Information: 2
Warning: 3
Error: 4
Critical: 5
None: 6

Log level None is explained in the next section.

Log configuration in host.json

The host.json file configures how much logging a function app sends to Application Insights. For each category, you indicate the minimum log level to send. There are two examples: the first example targets version 2.x and later of the Functions runtime (with .NET Core), and the second example is for the version 1.x runtime.

Version 2.x and later

Version 2.x and later versions of the Functions runtime use the .NET Core logging filter hierarchy.

{
  "logging": {
    "fileLoggingMode": "always",
    "logLevel": {
      "default": "Information",
      "Host.Results": "Error",
      "Function": "Error",
      "Host.Aggregator": "Trace"
    }
  }
}

Version 1.x

{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Information",
      "categoryLevels": {
        "Host.Results": "Error",
        "Function": "Error",
        "Host.Aggregator": "Trace"
      }
    }
  }
}

This example sets up the following rules:

  • For logs with category Host.Results or Function, send only Error level and above to Application Insights. Logs at Warning level and below are ignored.
  • For logs with category Host.Aggregator, send all logs to Application Insights. The Trace log level is the same as what some loggers call Verbose, but use Trace in the host.json file.
  • For all other logs, send only Information level and above to Application Insights.

The category value in host.json controls logging for all categories that begin with the same value. Host in host.json controls logging for Host.General, Host.Executor, Host.Results, and so on.

If host.json includes multiple categories that start with the same string, the longer ones are matched first. Suppose you want everything from the runtime except Host.Aggregator to log at Error level, but you want Host.Aggregator to log at the Information level:

Version 2.x and later

{
  "logging": {
    "fileLoggingMode": "always",
    "logLevel": {
      "default": "Information",
      "Host": "Error",
      "Function": "Error",
      "Host.Aggregator": "Information"
    }
  }
}

Version 1.x

{
  "logger": {
    "categoryFilter": {
      "defaultLevel": "Information",
      "categoryLevels": {
        "Host": "Error",
        "Function": "Error",
        "Host.Aggregator": "Information"
      }
    }
  }
}

To suppress all logs for a category, you can use log level None. No logs are written with that category and there's no log level above it.

Configure the aggregator

As noted in the previous section, the runtime aggregates data about function executions over a period of time. The default period is 30 seconds or 1,000 runs, whichever comes first. You can configure this setting in the host.json file. Here's an example:

{
    "aggregator": {
      "batchSize": 1000,
      "flushTimeout": "00:00:30"
    }
}

Configure sampling

Application Insights has a sampling feature that can protect you from producing too much telemetry data on completed executions at times of peak load. When the rate of incoming executions exceeds a specified threshold, Application Insights starts to randomly ignore some of the incoming executions. The default setting for maximum number of executions per second is 20 (five in version 1.x). You can configure sampling in host.json. Here's an example:

Version 2.x and later

{
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond" : 20,
        "excludedTypes": "Request"
      }
    }
  }
}

In version 2.x, you can exclude certain types of telemetry from sampling. In the example above, data of type Request is excluded from sampling. This ensures that all function executions (requests) are logged while other types of telemetry remain subject to sampling.
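
To gauge how much sampling is actually occurring, you can compare stored items against the itemCount column, which records how many original items each stored row represents (a sketch):

union requests, dependencies, traces, exceptions
| where timestamp > ago(1h)
| summarize RetainedPercentage = 100.0 / avg(itemCount) by itemType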

Version 1.x

{
  "applicationInsights": {
    "sampling": {
      "isEnabled": true,
      "maxTelemetryItemsPerSecond" : 5
    }
  }
}

Write logs in C# functions

You can write logs in your function code that appear as traces in Application Insights.

ILogger

Use an ILogger parameter in your functions instead of a TraceWriter parameter. Logs created by using TraceWriter go to Application Insights, but ILogger lets you do structured logging.

With an ILogger object, you call Log<level> extension methods on ILogger to create logs. The following code writes Information logs with category "Function.<YOUR_FUNCTION_NAME>.User".

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, string id, ILogger logger)
{
    // "id" is bound from a route parameter ({id}) in this example; {itemKey} becomes a queryable custom dimension.
    logger.LogInformation("Request for item with key={itemKey}.", id);
    return req.CreateResponse(HttpStatusCode.OK);
}

Structured logging

The order of placeholders, not their names, determines which parameters are used in the log message. Suppose you have the following code:

string partitionKey = "partitionKey";
string rowKey = "rowKey";
logger.LogInformation("partitionKey={partitionKey}, rowKey={rowKey}", partitionKey, rowKey);

If you keep the same message string and reverse the order of the parameters, the resulting message text would have the values in the wrong places.

Placeholders are handled this way so that you can do structured logging. Application Insights stores the parameter name-value pairs and the message string. The result is that the message arguments become fields that you can query on.

If your logger method call looks like the previous example, you can query the field customDimensions.prop__rowKey. The prop__ prefix is added to ensure there are no collisions between fields the runtime adds and fields your function code adds.

You can also query on the original message string by referencing the field customDimensions.prop__{OriginalFormat}.
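
For example, a query along these lines (a sketch that reuses the rowKey placeholder value from the snippet above) returns the traces written with that template:

traces
| where customDimensions.prop__rowKey == "rowKey"
| project timestamp, message, rowKey = tostring(customDimensions.prop__rowKey)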

Here's a sample JSON representation of customDimensions data:

{
  "customDimensions": {
    "prop__{OriginalFormat}":"C# Queue trigger function processed: {message}",
    "Category":"Function",
    "LogLevel":"Information",
    "prop__message":"c9519cbf-b1e6-4b9b-bf24-cb7d10b1bb89"
  }
}

Custom metrics logging

In C# script functions, you can use the LogMetric extension method on ILogger to create custom metrics in Application Insights. Here's a sample method call:

logger.LogMetric("TestMetric", 1234);

This code is an alternative to calling TrackMetric by using the Application Insights API for .NET.
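
The metric lands in the customMetrics table, where you can aggregate or chart it (a sketch that uses the TestMetric name from the call above):

customMetrics
| where name == "TestMetric"
| summarize avg(value), count() by bin(timestamp, 5m)
| render timechart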

Write logs in JavaScript functions

In Node.js functions, use context.log to write logs. Structured logging isn't enabled.

context.log('JavaScript HTTP trigger function processed a request.' + context.invocationId);

Custom metrics logging

When you're running on version 1.x of the Functions runtime, Node.js functions can use the context.log.metric method to create custom metrics in Application Insights. This method isn't currently supported in version 2.x and later. Here's a sample method call:

context.log.metric("TestMetric", 1234);

This code is an alternative to calling trackMetric by using the Node.js SDK for Application Insights.

Log custom telemetry in C# functions

There is a Functions-specific version of the Application Insights SDK that you can use to send custom telemetry data from your functions to Application Insights: Microsoft.Azure.WebJobs.Logging.ApplicationInsights. Use the following command from the command prompt to install this package:

dotnet add package Microsoft.Azure.WebJobs.Logging.ApplicationInsights --version <VERSION>

In this command, replace <VERSION> with a version of this package that supports your installed version of Microsoft.Azure.WebJobs.

The following C# example uses the custom telemetry API. The example is for a .NET class library, but the Application Insights code is the same for C# script.

Version 2.x and later

Version 2.x and later versions of the runtime use newer features in Application Insights to automatically correlate telemetry with the current operation. There's no need to manually set the operation Id, ParentId, or Name fields.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using System.Linq;

namespace functionapp0915
{
    public class HttpTrigger2
    {
        private readonly TelemetryClient telemetryClient;

        /// Using dependency injection will guarantee that you use the same configuration for telemetry collected automatically and manually.
        public HttpTrigger2(TelemetryConfiguration telemetryConfiguration)
        {
            this.telemetryClient = new TelemetryClient(telemetryConfiguration);
        }

        [FunctionName("HttpTrigger2")]
        public Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
            HttpRequest req, ExecutionContext context, ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");
            DateTime start = DateTime.UtcNow;

            // Parse query parameter
            string name = req.Query
                .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
                .Value;

            // Track an Event
            var evt = new EventTelemetry("Function called");
            evt.Context.User.Id = name;
            this.telemetryClient.TrackEvent(evt);

            // Track a Metric
            var metric = new MetricTelemetry("Test Metric", DateTime.Now.Millisecond);
            metric.Context.User.Id = name;
            this.telemetryClient.TrackMetric(metric);

            // Track a Dependency
            var dependency = new DependencyTelemetry
            {
                Name = "GET api/planets/1/",
                Target = "swapi.co",
                Data = "https://swapi.co/api/planets/1/",
                Timestamp = start,
                Duration = DateTime.UtcNow - start,
                Success = true
            };
            dependency.Context.User.Id = name;
            this.telemetryClient.TrackDependency(dependency);

            return Task.FromResult<IActionResult>(new OkResult());
        }
    }
}

GetMetric is the currently recommended API for creating a metric.

Version 1.x

using System;
using System.Net;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.WebJobs;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using System.Linq;

namespace functionapp0915
{
    public static class HttpTrigger2
    {
        private static string key = TelemetryConfiguration.Active.InstrumentationKey = 
            System.Environment.GetEnvironmentVariable(
                "APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);

        private static TelemetryClient telemetryClient = 
            new TelemetryClient() { InstrumentationKey = key };

        [FunctionName("HttpTrigger2")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
            HttpRequestMessage req, ExecutionContext context, ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");
            DateTime start = DateTime.UtcNow;

            // Parse query parameter
            string name = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
                .Value;

            // Get request body
            dynamic data = await req.Content.ReadAsAsync<object>();

            // Set name to query string or body data
            name = name ?? data?.name;
         
            // Track an Event
            var evt = new EventTelemetry("Function called");
            UpdateTelemetryContext(evt.Context, context, name);
            telemetryClient.TrackEvent(evt);
            
            // Track a Metric
            var metric = new MetricTelemetry("Test Metric", DateTime.Now.Millisecond);
            UpdateTelemetryContext(metric.Context, context, name);
            telemetryClient.TrackMetric(metric);
            
            // Track a Dependency
            var dependency = new DependencyTelemetry
                {
                    Name = "GET api/planets/1/",
                    Target = "swapi.co",
                    Data = "https://swapi.co/api/planets/1/",
                    Timestamp = start,
                    Duration = DateTime.UtcNow - start,
                    Success = true
                };
            UpdateTelemetryContext(dependency.Context, context, name);
            telemetryClient.TrackDependency(dependency);

            // Return a response; the Functions runtime tracks the request itself.
            return req.CreateResponse(HttpStatusCode.OK);
        }
        
        // Correlate all telemetry with the current Function invocation
        private static void UpdateTelemetryContext(TelemetryContext context, ExecutionContext functionContext, string userName)
        {
            context.Operation.Id = functionContext.InvocationId.ToString();
            context.Operation.ParentId = functionContext.InvocationId.ToString();
            context.Operation.Name = functionContext.FunctionName;
            context.User.Id = userName;
        }
    }    
}

Don't call TrackRequest or StartOperation<RequestTelemetry>, because you'll see duplicate requests for a function invocation. The Functions runtime automatically tracks requests.

Don't set telemetryClient.Context.Operation.Id. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (DependencyTelemetry, EventTelemetry) and modify its Context property. Then pass in the telemetry instance to the corresponding Track method on TelemetryClient (TrackDependency(), TrackEvent(), TrackMetric()). This method ensures that the telemetry has the correct correlation details for the current function invocation.
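
To confirm that your custom telemetry is correlated with its parent invocation, you can query everything that shares an operation_Id (a sketch; <OPERATION_ID> is a placeholder for the value shown on the request):

union requests, customEvents, dependencies, customMetrics
| where operation_Id == "<OPERATION_ID>"
| project timestamp, itemType, name, operation_Id
| order by timestamp asc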

Log custom telemetry in JavaScript functions

Here are sample code snippets that send custom telemetry with the Application Insights Node.js SDK:

Version 2.x and later

const appInsights = require("applicationinsights");
appInsights.setup();
const client = appInsights.defaultClient;

module.exports = function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    // Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
    var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent};

    client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}});
    client.trackException({exception: new Error("handled exceptions can be logged with this method"), tagOverrides:operationIdOverride});
    client.trackMetric({name: "custom metric", value: 3, tagOverrides:operationIdOverride});
    client.trackTrace({message: "trace message", tagOverrides:operationIdOverride});
    client.trackDependency({target:"http://dbname", name:"select customers proc", data:"SELECT * FROM Customers", duration:231, resultCode:0, success: true, dependencyTypeName: "ZSQL", tagOverrides:operationIdOverride});
    client.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true, tagOverrides:operationIdOverride});

    context.done();
};

Version 1.x

const appInsights = require("applicationinsights");
appInsights.setup();
const client = appInsights.defaultClient;

module.exports = function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    // Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
    var operationIdOverride = {"ai.operation.id":context.operationId};

    client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}});
    client.trackException({exception: new Error("handled exceptions can be logged with this method"), tagOverrides:operationIdOverride});
    client.trackMetric({name: "custom metric", value: 3, tagOverrides:operationIdOverride});
    client.trackTrace({message: "trace message", tagOverrides:operationIdOverride});
    client.trackDependency({target:"http://dbname", name:"select customers proc", data:"SELECT * FROM Customers", duration:231, resultCode:0, success: true, dependencyTypeName: "ZSQL", tagOverrides:operationIdOverride});
    client.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true, tagOverrides:operationIdOverride});

    context.done();
};

The tagOverrides parameter sets the operation_Id to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom telemetry for a given function invocation.

Dependencies

Functions v2 automatically collects dependencies for HTTP requests, ServiceBus, EventHub, and SQL.
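
The collected dependencies land in the dependencies table, where you can break them down by target and outcome (a sketch):

dependencies
| where timestamp > ago(30m)
| summarize count() by target, type, success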

You can write custom code to show the dependencies. For examples, see the sample code in the C# custom telemetry section. The sample code results in an application map in Application Insights that looks like the following image:

Application map

Note

Dependencies are written at Information level. If you filter at Warning or above, you won't see any of this data. Also, automatic collection of dependencies happens at non-user scope. So make sure the level is set to at least Information outside the user scope in your host.json (that is, outside the Function.<YOUR_FUNCTION_NAME>.User key) if you want those dependencies to be captured.

Enable Application Insights integration

For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named APPINSIGHTS_INSTRUMENTATIONKEY.

When you create your function app in the Azure portal, from the command line by using Azure Functions Core Tools, or by using Visual Studio Code, Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.

New function app in the portal

To review the Application Insights resource being created, select it to expand the Application Insights window. You can change the New resource name or choose a different Location in an Azure geography where you want to store your data.

Enable Application Insights while creating a function app

When you choose Create, an Application Insights resource is created with your function app, which has the APPINSIGHTS_INSTRUMENTATIONKEY set in application settings. Everything is ready to go.

Add to an existing function app

When you create a function app using Visual Studio, you must create the Application Insights resource. You can then add the instrumentation key from that resource as an application setting in your function app.

Azure Functions makes it easy to add Application Insights integration to a function app from the Azure portal.

  1. In the Azure portal, search for and select function app, and then choose your function app.

  2. Select the Application Insights is not configured banner at the top of the window. If you don't see this banner, then your app might already have Application Insights enabled.

    Enable Application Insights from the portal

  3. Expand Change your resource and create an Application Insights resource by using the settings specified in the following table.

    New resource name: Unique app name. It's easiest to use the same name as your function app, which must be unique in your subscription.
    Location: China North. If possible, use the same region as your function app, or one that's close to that region.

    Create an Application Insights resource

  4. Select Apply.

    The Application Insights resource is created in the same resource group and subscription as your function app. After the resource is created, close the Application Insights window.

  5. In your function app, select Configuration under Settings, and then select Application settings. If you see a setting named APPINSIGHTS_INSTRUMENTATIONKEY, Application Insights integration is enabled for your function app running in Azure.

Early versions of Functions used built-in monitoring, which is no longer recommended. When enabling Application Insights integration for such a function app, you must also disable built-in logging.

Report issues

To report an issue with Application Insights integration in Functions, or to make a suggestion or request, create an issue in GitHub.

Streaming Logs

While developing an application, you often want to see what's being written to the logs in near real time when running in Azure.

There are two ways to view a stream of log files being generated by your function executions.

  • Built-in log streaming: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during local development and when you use the Test tab in the portal. All log-based information is displayed. For more information, see Stream logs. This streaming method supports only a single instance.

  • Live Metrics Stream: when your function app is connected to Application Insights, you can view log data and other metrics in near real time in the Azure portal using Live Metrics Stream. Use this method when monitoring functions running on multiple instances in a Consumption plan. This method uses sampled data.

Log streams can be viewed both in the portal and in most local development environments.

Portal

You can view both types of log streams in the portal.

Built-in log streaming

To view streaming logs in the portal, select the Platform features tab in your function app. Then, under Monitoring, choose Log streaming.

Enable streaming logs in the portal

This connects your app to the log streaming service, and application logs are displayed in the window. You can toggle between Application logs and Web server logs.

View streaming logs in the portal

Live Metrics Stream

To view the Live Metrics Stream for your app, select the Overview tab of your function app. When you have Application Insights enabled, you see an Application Insights link under Configured features. This link takes you to the Application Insights page for your app.

In Application Insights, select Live Metrics Stream. Sampled log entries are displayed under Sample Telemetry.

View Live Metrics Stream in the portal

Visual Studio Code

To turn on the streaming logs for your function app in Azure:

  1. Press F1 to open the command palette, and then search for and run the command Azure Functions: Start Streaming Logs.

  2. Select your function app in Azure, and then select Yes to enable application logging for the function app.

  3. Trigger your functions in Azure. Notice that log data is displayed in the Output window in Visual Studio Code.

  4. When you're done, remember to run the command Azure Functions: Stop Streaming Logs to disable logging for the function app.

Core Tools

Built-in log streaming

Use the logstream option to start receiving streaming logs of a specific function app running in Azure, as in the following example:

func azure functionapp logstream <FunctionAppName>

Live Metrics Stream

You can also view the Live Metrics Stream for your function app in a new browser window by including the --browser option, as in the following example:

func azure functionapp logstream <FunctionAppName> --browser

Azure CLI

You can enable streaming logs by using the Azure CLI. Use the following commands to sign in, choose your subscription, and stream log files:

az login
az account list
az account set --subscription <subscriptionNameOrId>
az webapp log tail --resource-group <RESOURCE_GROUP_NAME> --name <FUNCTION_APP_NAME>

Azure PowerShell

You can enable streaming logs by using Azure PowerShell. For PowerShell, use the Set-AzWebApp command to enable logging on the function app, as shown in the following snippet:

# Enable Logs
Set-AzWebApp -RequestTracingEnabled $True -HttpLoggingEnabled $True -DetailedErrorLoggingEnabled $True -ResourceGroupName $ResourceGroupName -Name $AppName

For more information, see the complete code example.

Scale controller logs (preview)

This feature is in preview.

The Azure Functions scale controller monitors instances of the Azure Functions host on which your app runs. This controller makes decisions about when to add or remove instances based on current performance. You can have the scale controller emit logs to either Application Insights or to Blob storage to better understand the decisions the scale controller is making for your function app.

To enable this feature, add a new application setting named SCALE_CONTROLLER_LOGGING_ENABLED. The value of this setting must be of the format <DESTINATION>:<VERBOSITY>, based on the following:

<DESTINATION>: The destination to which logs are sent. Valid values are AppInsights and Blob. When you use AppInsights, make sure Application Insights is enabled in your function app. When you set the destination to Blob, logs are created in a blob container named azure-functions-scale-controller in the default storage account set in the AzureWebJobsStorage application setting.
<VERBOSITY>: Specifies the level of logging. Supported values are None, Warning, and Verbose. When set to Verbose, the scale controller logs a reason for every change in the worker count, as well as information about the triggers that factor into those decisions. Verbose logs include trigger warnings and the hashes used by the triggers before and after the scale controller runs.
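
When the destination is AppInsights, the scale controller entries appear in the traces table. A query along these lines surfaces them (a sketch; the ScaleControllerLogs category name is an assumption, not confirmed by this article):

traces
| where customDimensions.Category == "ScaleControllerLogs"
| order by timestamp desc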

Caution

Don't leave scale controller logging enabled. Enable logging until you have collected enough data to understand how the scale controller is behaving, and then disable it.

For example, the following Azure CLI command turns on verbose logging from the scale controller to Application Insights:

az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:Verbose

In this example, replace <FUNCTION_APP_NAME> and <RESOURCE_GROUP_NAME> with the name of your function app and the resource group name, respectively.

The following Azure CLI command disables logging by setting the verbosity to None:

az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:None

You can also disable logging by removing the SCALE_CONTROLLER_LOGGING_ENABLED setting using the following Azure CLI command:

az functionapp config appsettings delete --name <FUNCTION_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> \
--setting-names SCALE_CONTROLLER_LOGGING_ENABLED

Disable built-in logging

When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.

To disable built-in logging, delete the AzureWebJobsDashboard app setting. For information about how to delete app settings in the Azure portal, see the Application settings section of How to manage a function app. Before you delete the app setting, make sure no existing functions in the same function app use the setting for Azure Storage triggers or bindings.

Next steps

For more information, see the following resources: