How to use text summarization
Text summarization is designed to shorten content that users consider too long to read. Both extractive and abstractive summarization condense articles, papers, or documents into key sentences.
Extractive summarization: Produces a summary by extracting sentences that collectively represent the most important or relevant information within the original content.
Abstractive summarization: Produces a summary by generating concise sentences that capture the main idea of the document.
Query-focused summarization: Lets you use a query when summarizing.
Each of these capabilities can summarize around specific items of interest when they're specified.
The AI models used by the API are provided by the service; you just send content for analysis.
For easier navigation, here are links to the corresponding sections for each feature:
Aspect | Section |
---|---|
Extractive | Extractive summarization |
Abstractive | Abstractive summarization |
Query-focused | Query-focused summarization |
Features
Tip
If you want to get started using these features, you can follow the quickstart article. You can also make example requests using Language Studio without needing to write code.
The extractive summarization API uses natural language processing techniques to find key sentences in unstructured text documents. These sentences collectively convey the main idea of the document.
Extractive summarization returns a rank score as part of the system response, along with the extracted sentences and their positions in the original document. A rank score is an indicator of how relevant a sentence is to the main idea of the document. The model gives each sentence a score between 0 and 1 (inclusive) and returns the highest-scoring sentences per request. For example, if you request a three-sentence summary, the service returns the three sentences with the highest scores.
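The rank-and-select behavior can be illustrated with ordinary shell tools. This is only a sketch of the selection step (the scores and sentences below are invented for illustration; the service computes real scores with its own model):

```shell
# Illustrative "score|sentence" pairs -- the scores are made up for the
# sketch, not produced by the service.
scored='0.92|Sentence A.
0.41|Sentence B.
0.78|Sentence C.
0.65|Sentence D.'

# Mimic sentenceCount=3: sort by score descending, keep the top three.
top3=$(printf '%s\n' "$scored" | sort -t '|' -k1,1 -rn | head -n 3)
printf '%s\n' "$top3"
```

Because scores range from 0 to 1, a plain numeric sort is all the selection step needs.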
There is another feature in Azure AI Language, key phrase extraction, that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following:
- Key phrase extraction returns phrases, while extractive summarization returns sentences.
- Extractive summarization returns sentences together with a rank score, and the top-ranked sentences are returned per request.
- Extractive summarization also returns the following positional information:
- Offset: the start position of each extracted sentence.
- Length: the length of each extracted sentence.
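Offset and length together let you locate each returned sentence in the text you submitted. A minimal bash sketch under that assumption (the document string and the offset/length values are illustrative; real offsets count Unicode units, so multi-byte characters and emoji need care):

```shell
# Illustrative input and positional values, as the service would report them.
doc="At Microsoft, we have been on a quest to advance AI beyond existing techniques."
offset=0
length=13

# bash substring expansion ${var:offset:length} recovers the span the
# service pointed at.
fragment="${doc:offset:length}"
echo "$fragment"   # At Microsoft,
```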
Determine how to process the data (optional)
Submit data
You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is asynchronous, there may be a delay between sending an API request and receiving the results.
When you use this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
Get text summarization results
When you get results from text summarization, you can stream the results to an application or save the output to a file on the local system.
The following is an example of content you might submit for summarization, extracted from the Microsoft blog article A holistic representation toward integrative AI. This article is only an example; the API can accept longer input text. For more information, see the data limits section.
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
When a request is received, the text summarization API request is processed by creating a job for the API backend. If the job creation succeeded, the output of the API is returned. The output is available for retrieval for 24 hours; after that time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. For more information, see how to process offsets.
Using the above example, the API might return these summarized sentences:
Extractive summarization:
- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
- "We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
- "The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
Abstractive summarization:
- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in several areas."
Try text extractive summarization
You can use text extractive summarization to get summaries of articles, papers, or documents. To see an example, see the quickstart article.
You can use the `sentenceCount` parameter to guide how many sentences are returned, with `3` being the default. The range is 1 to 20.
You can also use the `sortBy` parameter to specify in what order the extracted sentences are returned (`Offset` or `Rank`), with `Offset` being the default.
Parameter value | Description |
---|---|
Rank | Orders sentences according to their relevance to the input document, as determined by the service. |
Offset | Keeps the original order in which the sentences appear in the input document. |
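Both parameters go in the `parameters` object of an `ExtractiveSummarization` task. A minimal request-body fragment under that assumption (documents elided; the `taskName` value is arbitrary):

```json
{
  "tasks": [
    {
      "kind": "ExtractiveSummarization",
      "taskName": "Extractive Summarization with parameters",
      "parameters": {
        "sentenceCount": 5,
        "sortBy": "Rank"
      }
    }
  ]
}
```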
Try text abstractive summarization
The following example will get you started with text abstractive summarization:
- Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character instead.
curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-04-01 \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-d \
'
{
"displayName": "Text Abstractive Summarization Task Example",
"analysisInput": {
"documents": [
{
"id": "1",
"language": "en",
"text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
}
]
},
"tasks": [
{
"kind": "AbstractiveSummarization",
"taskName": "Text Abstractive Summarization Task 1"
}
]
}
'
Make the following changes in the command where needed:
- Replace the value `your-language-resource-key` with your key.
- Replace the first part of the request URL (`your-language-resource-endpoint`) with your endpoint URL.
Open a command prompt window (for example: BASH).
Paste the command from the text editor into the command prompt window, then run the command.
Get the `operation-location` from the response header. The value will look similar to the following URL:
https://<your-language-resource-endpoint>/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview
- To get the results of the request, use the following cURL command. Be sure to replace `<my-job-id>` with the numerical ID value you received from the previous `operation-location` response header:
curl -X GET https://<your-language-resource-endpoint>/language/analyze-text/jobs/<my-job-id>?api-version=2022-10-01-preview \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>"
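The job ID you need for `<my-job-id>` is the segment of the `operation-location` URL between `/jobs/` and the query string. A small shell sketch of pulling it out, using the sample URL shown above:

```shell
# The operation-location header value ends in /jobs/<job-id>?api-version=...
op_loc='https://<your-language-resource-endpoint>/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-10-01-preview'

job_id="${op_loc##*/jobs/}"   # drop everything up to and including "/jobs/"
job_id="${job_id%%\?*}"       # drop the "?api-version=..." query string
echo "$job_id"                # 12345678-1234-1234-1234-12345678
```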
Abstractive text summarization example JSON response
{
    "jobId": "cd6418fe-db86-4350-aec1-f0d7c91442a6",
    "lastUpdateDateTime": "2022-09-08T16:45:14Z",
    "createdDateTime": "2022-09-08T16:44:53Z",
    "expirationDateTime": "2022-09-09T16:44:53Z",
    "status": "succeeded",
    "errors": [],
    "displayName": "Text Abstractive Summarization Task Example",
    "tasks": {
        "completed": 1,
        "failed": 0,
        "inProgress": 0,
        "total": 1,
        "items": [
            {
                "kind": "AbstractiveSummarizationLROResults",
                "taskName": "Text Abstractive Summarization Task 1",
                "lastUpdateDateTime": "2022-09-08T16:45:14.0717206Z",
                "status": "succeeded",
                "results": {
                    "documents": [
                        {
                            "summaries": [
                                {
                                    "text": "Microsoft is taking a more holistic, human-centric approach to AI. We've developed a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We've achieved human performance on benchmarks in conversational speech recognition, machine translation, ...... and image captions.",
                                    "contexts": [
                                        {
                                            "offset": 0,
                                            "length": 247
                                        }
                                    ]
                                }
                            ],
                            "id": "1"
                        }
                    ],
                    "errors": [],
                    "modelVersion": "latest"
                }
            }
        ]
    }
}
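Until `status` reaches `succeeded` (or `failed`), the job is still in progress, so in practice you poll the jobs endpoint. Here's a sketch of that loop with the network call stubbed out; the simulated state sequence stands in for what the real cURL GET would return:

```shell
# Simulated job states; a real script would instead curl the
# operation-location URL each time and read the "status" field.
statuses="notStarted running succeeded"

fetch_status() {
  set -- $statuses            # take the next simulated state
  status="$1"
  statuses="${statuses#* }"   # consume it
}

status=""
attempts=0
while [ "$status" != "succeeded" ] && [ "$attempts" -lt 10 ]; do
  fetch_status
  attempts=$((attempts + 1))
  # sleep 2                   # back off between polls against the real API
done
echo "final status: $status after $attempts polls"
```

The attempt cap keeps the sketch from spinning forever; a production script would also stop on a `failed` status and inspect the `errors` array.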
parameter | Description |
---|---|
-X POST <endpoint> | Specifies your endpoint for accessing the API. |
-H Content-Type: application/json | The content type for sending JSON data. |
-H "Ocp-Apim-Subscription-Key:<key> | Specifies the key for accessing the API. |
-d <documents> | The JSON containing the documents you want to send. |
The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values.
Query-based summarization
The query-based text summarization API is an extension of the existing text summarization API.
The biggest difference is the new `query` field in the request body (under `tasks` > `parameters` > `query`).
Tip
Query-based summarization differs in how length control is used, depending on the type of query-based summarization:
- Query-based extractive summarization supports length control by specifying `sentenceCount`.
- Query-based abstractive summarization doesn't support length control.
Here's an example request:
curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-11-15-preview \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-d \
'
{
"displayName": "Text Extractive Summarization Task Example",
"analysisInput": {
"documents": [
{
"id": "1",
"language": "en",
"text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
}
]
},
"tasks": [
{
"kind": "AbstractiveSummarization",
"taskName": "Query-based Abstractive Summarization",
"parameters": {
"query": "XYZ-code",
"summaryLength": "short"
}
}, {
"kind": "ExtractiveSummarization",
"taskName": "Query-based Extractive Summarization",
"parameters": {
"query": "XYZ-code"
}
}
]
}
'
Summary length control
Using the summaryLength parameter in abstractive summarization
If you don't specify `summaryLength`, the model determines the summary length.
The `summaryLength` parameter accepts four values:
- oneSentence: Generates a summary of mostly 1 sentence, with around 80 tokens.
- short: Generates a summary of mostly 2-3 sentences, with around 120 tokens.
- medium: Generates a summary of mostly 4-6 sentences, with around 170 tokens.
- long: Generates a summary of mostly over 7 sentences, with around 210 tokens.
Here's an example request:
curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-04-01 \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-d \
'
{
"displayName": "Text Abstractive Summarization Task Example",
"analysisInput": {
"documents": [
{
"id": "1",
"language": "en",
"text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
}
]
},
"tasks": [
{
"kind": "AbstractiveSummarization",
"taskName": "Length controlled Abstractive Summarization",
"parameters": {
"summaryLength": "short"
}
}
]
}
'
Using the sentenceCount parameter in extractive summarization
With the `sentenceCount` parameter, you can enter a value from 1 to 20 to indicate the desired number of output sentences.
Here's an example request:
curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-11-15-preview \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
-d \
'
{
"displayName": "Text Extractive Summarization Task Example",
"analysisInput": {
"documents": [
{
"id": "1",
"language": "en",
"text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
}
]
},
"tasks": [
{
"kind": "ExtractiveSummarization",
"taskName": "Length controlled Extractive Summarization",
"parameters": {
"sentenceCount": 5
}
}
]
}
'
Service and data limits
For information on the size and number of requests you can send per minute and second, see the service limits article.