Databricks Runtime 7.0 (Unsupported)
Databricks released this image in June 2020.
The following release notes provide information about Databricks Runtime 7.0, powered by Apache Spark 3.0.
New features
Databricks Runtime 7.0 includes the following new features:
Scala 2.12
Databricks Runtime 7.0 upgrades Scala from 2.11.12 to 2.12.10. The change list between Scala 2.12 and 2.11 is in the Scala 2.12.0 release notes.
Auto Loader (Public Preview), released in Databricks Runtime 6.4, has been improved in Databricks Runtime 7.0
Auto Loader gives you a more efficient way to process new data files incrementally as they arrive on a cloud blob store during ETL. This is an improvement over file-based structured streaming, which identifies new files by repeatedly listing the cloud directory and tracking the files that have been seen, and can become very inefficient as the directory grows. Auto Loader is also more convenient and effective than file-notification-based structured streaming, which requires you to manually configure file-notification services on the cloud and doesn't let you backfill existing files. For details, see Load files from Azure Blob storage, Azure Data Lake Storage Gen1 (limited), or Azure Data Lake Storage Gen2 using Auto Loader.
On Databricks Runtime 7.0 you no longer need to request a custom Databricks Runtime image in order to use Auto Loader.
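A minimal Scala sketch of an Auto Loader stream (the paths and the JSON file format are illustrative assumptions, not part of this release note):

```scala
// Incrementally pick up newly arriving JSON files with the cloudFiles
// source and append them to a Delta table. All paths are hypothetical.
val df = spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "json")
  .load("/mnt/raw/events")

df.writeStream
  .format("delta")
  .option("checkpointLocation", "/mnt/delta/events/_checkpoint")
  .start("/mnt/delta/events")
```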
COPY INTO (Public Preview), which lets you load data into Delta Lake with idempotent retries, has been improved in Databricks Runtime 7.0
Released as a Public Preview in Databricks Runtime 6.4, the COPY INTO SQL command lets you load data into Delta Lake with idempotent retries. To load data into Delta Lake today you have to use Apache Spark DataFrame APIs, and if there are failures during loads you have to handle them yourself. The new COPY INTO command provides a familiar declarative interface for loading data in SQL. The command keeps track of previously loaded files, and you can safely re-run it in case of failures. For details, see COPY INTO (Delta Lake on Azure Databricks).
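A minimal sketch of the command (the target table and source path are hypothetical); because already-loaded files are tracked and skipped, re-running the same statement after a failure is safe:

```scala
// Idempotently load raw JSON files into a Delta table via SQL.
spark.sql("""
  COPY INTO delta.`/mnt/delta/events`
  FROM '/mnt/raw/events'
  FILEFORMAT = JSON
""")
```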
Improvements
- The Azure Synapse (formerly SQL Data Warehouse) connector supports the COPY statement. The main benefit of COPY is that lower-privileged users can write data to Azure Synapse without needing strict CONTROL permissions on Azure Synapse.
- The %matplotlib inline magic command is no longer required to display Matplotlib objects inline in notebook cells. They are always displayed inline by default.
- Matplotlib figures are now rendered with transparent=False, so that user-specified backgrounds are not lost. This behavior can be overridden by setting the Spark configuration spark.databricks.workspace.matplotlib.transparent true.
- When running Structured Streaming production jobs on High Concurrency mode clusters, restarts of a job would occasionally fail, because the previously running job wasn't terminated properly. Databricks Runtime 6.3 introduced the ability to set the SQL configuration spark.sql.streaming.stopActiveRunOnRestart true on your cluster to ensure that the previous run stops. This configuration is set by default in Databricks Runtime 7.0. (A configuration sketch follows this list.)
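A sketch of overriding the two configurations mentioned above. These are normally set in a cluster's Spark config; whether a runtime spark.conf.set call takes effect for the Matplotlib setting is an assumption here, and the values shown simply restore the pre-7.0 defaults:

```scala
// Render Matplotlib figures with transparent backgrounds again.
spark.conf.set("spark.databricks.workspace.matplotlib.transparent", "true")
// Do not automatically stop a previous streaming run on restart.
spark.conf.set("spark.sql.streaming.stopActiveRunOnRestart", "false")
```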
Major library changes
Python packages
Major Python packages upgraded:
- boto3 1.9.162 -> 1.12.0
- matplotlib 3.0.3 -> 3.1.3
- numpy 1.16.2 -> 1.18.1
- pandas 0.24.2 -> 1.0.1
- pip 19.0.3 -> 20.0.2
- pyarrow 0.13.0 -> 0.15.1
- psycopg2 2.7.6 -> 2.8.4
- scikit-learn 0.20.3 -> 0.22.1
- scipy 1.2.1 -> 1.4.1
- seaborn 0.9.0 -> 0.10.0
Python packages removed:
- boto (use boto3)
- pycurl
Note
The Python environment in Databricks Runtime 7.0 uses Python 3.7, which is different from the installed Ubuntu system Python: /usr/bin/python and /usr/bin/python2 are linked to Python 2.7 and /usr/bin/python3 is linked to Python 3.6.
R packages
R packages added:
- broom
- highr
- isoband
- knitr
- markdown
- modelr
- reprex
- rmarkdown
- rvest
- selectr
- tidyverse
- tinytex
- xfun
R packages removed:
- abind
- bitops
- car
- carData
- doMC
- gbm
- h2o
- littler
- lme4
- mapproj
- maps
- maptools
- MatrixModels
- minqa
- mvtnorm
- nloptr
- openxlsx
- pbkrtest
- pkgKitten
- quantreg
- R.methodsS3
- R.oo
- R.utils
- RcppEigen
- RCurl
- rio
- sp
- SparseM
- statmod
- zip
Java and Scala libraries
- The Apache Hive version used for handling Hive user-defined functions and Hive SerDes was upgraded to 2.3.
- Previously, Azure Storage and Key Vault jars were packaged as part of Databricks Runtime, which would prevent you from using different versions of those libraries attached to clusters. Classes under com.microsoft.azure.storage and com.microsoft.azure.keyvault are no longer on the class path in Databricks Runtime. If you depend on either of those class paths, you must now attach the Azure Storage SDK or Azure Key Vault SDK to your clusters.
Behavior changes
This section lists behavior changes from Databricks Runtime 6.6 to Databricks Runtime 7.0. You should be aware of these as you migrate workloads from lower Databricks Runtime releases to Databricks Runtime 7.0 and above.
Spark behavior changes
Because Databricks Runtime 7.0 is the first Databricks Runtime built on Spark 3.0, there are many changes that you should be aware of when you migrate workloads from Databricks Runtime 5.5 LTS or 6.x, which are built on Spark 2.4. These changes are listed in the "Behavior changes" section of each functional area in the Apache Spark section of this release notes article:
- Behavior changes for Spark core, Spark SQL, and Structured Streaming
- Behavior changes for MLlib
- Behavior changes for SparkR
Other behavior changes
The upgrade to Scala 2.12 involves the following changes:
Package cell serialization is handled differently. The following example illustrates the behavior change and how to handle it.
Running foo.bar.MyObjectInPackageCell.run() as defined in the following package cell triggers the error java.lang.NoClassDefFoundError: Could not initialize class foo.bar.MyObjectInPackageCell$
```scala
package foo.bar

case class MyIntStruct(int: Int)

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column

object MyObjectInPackageCell extends Serializable {

  // Because SparkSession cannot be created in Spark executors,
  // the following line triggers the error
  // Could not initialize class foo.bar.MyObjectInPackageCell$
  val spark = SparkSession.builder.getOrCreate()

  def foo: Int => Option[MyIntStruct] = (x: Int) => Some(MyIntStruct(100))

  val theUDF = udf(foo)

  val df = {
    val myUDFInstance = theUDF(col("id"))
    spark.range(0, 1, 1, 1).withColumn("u", myUDFInstance)
  }

  def run(): Unit = {
    df.collect().foreach(println)
  }
}
```
To work around this error, you can wrap MyObjectInPackageCell inside a serializable class, as sketched below.
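One possible shape of that workaround (a sketch under the assumption that instantiating the wrapper on the driver is acceptable; MyIntStruct is the case class from the package cell above):

```scala
package foo.bar

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// The Spark-dependent state now lives in a serializable class that is
// instantiated where it is used, instead of in a package-cell object
// whose static initializer may run on an executor.
class MyClassInPackageCell extends Serializable {
  val spark = SparkSession.builder.getOrCreate()

  def foo: Int => Option[MyIntStruct] = (x: Int) => Some(MyIntStruct(100))

  val theUDF = udf(foo)

  val df = {
    val myUDFInstance = theUDF(col("id"))
    spark.range(0, 1, 1, 1).withColumn("u", myUDFInstance)
  }

  def run(): Unit = {
    df.collect().foreach(println)
  }
}
```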
Certain cases using DataStreamWriter.foreachBatch will require a source code update. This change is due to the fact that Scala 2.12 has automatic conversion from lambda expressions to SAM types, which can cause ambiguity. For example, the following Scala code can't compile:
```scala
streams
  .writeStream
  .foreachBatch { (df, id) => myFunc(df, id) }
```
To fix the compilation error, change foreachBatch { (df, id) => myFunc(df, id) } to foreachBatch(myFunc _) or use the Java API explicitly: foreachBatch(new VoidFunction2 ...).
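Putting the fix together, a sketch (myFunc and the Delta sink path are hypothetical):

```scala
import org.apache.spark.sql.DataFrame

// Defining a method (rather than passing a lambda) sidesteps the
// lambda-to-SAM ambiguity introduced by Scala 2.12.
def myFunc(df: DataFrame, batchId: Long): Unit = {
  df.write.format("delta").mode("append").save("/mnt/delta/out")
}

streams.writeStream
  .foreachBatch(myFunc _) // method reference compiles unambiguously
  .start()
```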
Because the Apache Hive version used for handling Hive user-defined functions and Hive SerDes has been upgraded to 2.3, two changes are required:
- Hive's SerDe interface is replaced by the abstract class AbstractSerDe. Any custom Hive SerDe implementation must be migrated to AbstractSerDe (a sketch follows this list).
- Setting spark.sql.hive.metastore.jars to builtin means that the Hive 2.3 metastore client will be used to access metastores for Databricks Runtime 7.0. If you need to access Hive 1.2 based external metastores, set spark.sql.hive.metastore.jars to the folder that contains Hive 1.2 jars.
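A minimal migration sketch (a hypothetical pass-through string SerDe; the method signatures shown are assumed to match AbstractSerDe as of Hive 2.3):

```scala
import java.util.Properties

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.serde2.{AbstractSerDe, SerDeStats}
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory
import org.apache.hadoop.io.{Text, Writable}

// A custom SerDe now extends the AbstractSerDe abstract class instead of
// implementing the removed SerDe interface.
class MyStringSerDe extends AbstractSerDe {
  override def initialize(conf: Configuration, tbl: Properties): Unit = ()
  override def getSerializedClass: Class[_ <: Writable] = classOf[Text]
  override def serialize(obj: AnyRef, oi: ObjectInspector): Writable =
    new Text(String.valueOf(obj))
  override def deserialize(blob: Writable): AnyRef = blob.toString
  override def getObjectInspector: ObjectInspector =
    PrimitiveObjectInspectorFactory.javaStringObjectInspector
  override def getSerDeStats: SerDeStats = new SerDeStats()
}
```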
Deprecations and removals
- Data skipping index was deprecated in Databricks Runtime 4.3 and removed in Databricks Runtime 7.0. We recommend that you use Delta tables instead, which offer improved data skipping capabilities.
- In Databricks Runtime 7.0, the underlying version of Apache Spark uses Scala 2.12. Since libraries compiled against Scala 2.11 can disable Databricks Runtime 7.0 clusters in unexpected ways, clusters running Databricks Runtime 7.0 and above do not install libraries configured to be installed on all clusters. The cluster Libraries tab shows a status of Skipped and a deprecation message that explains the changes in library handling. However, if you have a cluster that was created on an earlier version of Databricks Runtime before Azure Databricks platform version 3.20 was released to your workspace, and you now edit that cluster to use Databricks Runtime 7.0, any libraries that were configured to be installed on all clusters will be installed on that cluster. In this case, any incompatible JARs in the installed libraries can cause the cluster to be disabled. The workaround is either to clone the cluster or to create a new cluster.
Apache Spark
Databricks Runtime 7.0 includes Apache Spark 3.0.
In this section:
- Core, Spark SQL, Structured Streaming
- MLlib
- SparkR
- GraphX
- Deprecations
- Known issues
Core, Spark SQL, Structured Streaming
Highlights
- (Project Hydrogen) Accelerator-aware Scheduler (SPARK-24615)
- Adaptive Query Execution (SPARK-31412)
- Dynamic Partition Pruning (SPARK-11150)
- Redesigned pandas UDF API with type hints (SPARK-28264)
- Structured Streaming UI (SPARK-29543)
- Catalog plugin API (SPARK-31121)
- Better ANSI SQL compatibility
Performance enhancements
- Adaptive Query Execution (SPARK-31412)
  - Basic framework (SPARK-23128)
  - Post shuffle partition number adjustment (SPARK-28177)
  - Dynamic subquery reuse (SPARK-28753)
  - Local shuffle reader (SPARK-28560)
  - Skew join optimization (SPARK-29544)
- Optimize reading contiguous shuffle blocks (SPARK-9853)
- Dynamic Partition Pruning (SPARK-11150)
- Other optimizer rules
  - Rule ReuseSubquery (SPARK-27279)
  - Rule PushDownLeftSemiAntiJoin (SPARK-19712)
  - Rule PushLeftSemiLeftAntiThroughJoin (SPARK-19712)
  - Rule ReplaceNullWithFalse (SPARK-25860)
  - Rule Eliminate sorts without limit in the subquery of Join/Aggregation (SPARK-29343)
  - Rule PruneHiveTablePartitions (SPARK-15616)
  - Pruning unnecessary nested fields from Generate (SPARK-27707)
  - Rule RewriteNonCorrelatedExists (SPARK-29800)
- Minimize table cache synchronization costs (SPARK-26917), (SPARK-26617), (SPARK-26548)
- Split aggregation code into small functions (SPARK-21870)
- Add batching in INSERT and ALTER TABLE ADD PARTITION command (SPARK-29938)
Extensibility enhancements
- Catalog plugin API (SPARK-31121)
- Data source V2 API refactoring (SPARK-25390)
- Hive 3.0 and 3.1 metastore support (SPARK-27970), (SPARK-24360)
- Extend Spark plugin interface to driver (SPARK-29396)
- Extend Spark metrics system with user-defined metrics using executor plugins (SPARK-28091)
- Developer APIs for extended Columnar Processing Support (SPARK-27396)
- Built-in source migration using DSV2: parquet, ORC, CSV, JSON, Kafka, Text, Avro (SPARK-27589)
- Allow FunctionInjection in SparkExtensions (SPARK-25560)
- Allow Aggregator to be registered as a UDAF (SPARK-27296)
Connector enhancements
- Column pruning through nondeterministic expressions (SPARK-29768)
- Support spark.sql.statistics.fallBackToHdfs in data source tables (SPARK-25474)
- Allow partition pruning with subquery filters on file source (SPARK-26893)
- Avoid pushdown of subqueries in data source filters (SPARK-25482)
- Recursive data loading from file sources (SPARK-27990)
- Parquet/ORC
  - Pushdown of disjunctive predicates (SPARK-27699)
  - Generalize Nested Column Pruning (SPARK-25603), turned on by default (SPARK-29805)
- Parquet only
  - Parquet predicate pushdown for nested fields (SPARK-17636)
- ORC only
  - Support merge schema for ORC (SPARK-11412)
  - Nested schema pruning for ORC (SPARK-27034)
  - Predicate conversion complexity reduction for ORC (SPARK-27105, SPARK-28108)
  - Upgrade Apache ORC to 1.5.9 (SPARK-30695)
- CSV
  - Support filters pushdown in CSV datasource (SPARK-30323)
- Hive SerDe
  - No schema inference when reading Hive serde table with native data source (SPARK-27119)
  - Hive CTAS commands should use data source if it is convertible (SPARK-25271)
  - Use native data source to optimize inserting partitioned Hive table (SPARK-28573)
- Apache Kafka
  - Add support for Kafka headers (SPARK-23539)
  - Add Kafka delegation token support (SPARK-25501)
  - Introduce new option to Kafka source: offset by timestamp (starting/ending) (SPARK-26848)
  - Support the minPartitions option in Kafka batch source and streaming source v1 (SPARK-30656)
  - Upgrade Kafka to 2.4.1 (SPARK-31126)
- New built-in data sources
  - New built-in binary file data sources (SPARK-25348)
  - New no-op batch data sources (SPARK-26550) and no-op streaming sink (SPARK-26649)
Feature enhancements
- [Hydrogen] Accelerator-aware Scheduler (SPARK-24615)
- Introduce a complete set of Join Hints (SPARK-27225)
- Add PARTITION BY hint for SQL queries (SPARK-28746)
- Metadata Handling in Thrift Server (SPARK-28426)
- Add higher order functions to Scala API (SPARK-27297)
- Support simple all gather in barrier task context (SPARK-30667)
- Hive UDFs support the UDT type (SPARK-28158)
- Support DELETE/UPDATE/MERGE Operators in Catalyst (SPARK-28351, SPARK-28892, SPARK-28893)
- Implement DataFrame.tail (SPARK-30185)
- New built-in functions (a short usage example follows this list)
  - sinh, cosh, tanh, asinh, acosh, atanh (SPARK-28133)
  - any, every, some (SPARK-19851)
  - bit_and, bit_or (SPARK-27879)
  - bit_count (SPARK-29491)
  - bit_xor (SPARK-29545)
  - bool_and, bool_or (SPARK-30184)
  - count_if (SPARK-27425)
  - date_part (SPARK-28690)
  - extract (SPARK-23903)
  - forall (SPARK-27905)
  - from_csv (SPARK-25393)
  - make_date (SPARK-28432)
  - make_interval (SPARK-29393)
  - make_timestamp (SPARK-28459)
  - map_entries (SPARK-23935)
  - map_filter (SPARK-23937)
  - map_zip_with (SPARK-23938)
  - max_by, min_by (SPARK-27653)
  - schema_of_csv (SPARK-25672)
  - to_csv (SPARK-25638)
  - transform_keys (SPARK-23939)
  - transform_values (SPARK-23940)
  - typeof (SPARK-29961)
  - version (SPARK-29554)
  - xxhash64 (SPARK-27099)
- Improvements on existing built-in functions
  - Built-in date-time functions/operations improvement (SPARK-31415)
  - Support FAILFAST mode for from_json (SPARK-25243)
  - array_sort adds a new comparator parameter (SPARK-29020)
  - Filter can now take the index as input as well as the element (SPARK-28962)
SQL compatibility enhancements
- Switch to Proleptic Gregorian calendar (SPARK-26651)
- Build Spark's own datetime pattern definition (SPARK-31408)
- Introduce ANSI store assignment policy for table insertion (SPARK-28495)
- Follow ANSI store assignment rule in table insertion by default (SPARK-28885)
- Add a SQLConf spark.sql.ansi.enabled (SPARK-28989)
- Support ANSI SQL filter clause for aggregate expression (SPARK-27986)
- Support ANSI SQL OVERLAY function (SPARK-28077)
- Support ANSI nested bracketed comments (SPARK-28880)
- Throw exception on overflow for integers (SPARK-26218)
- Overflow check for interval arithmetic operations (SPARK-30341)
- Throw exception when an invalid string is cast to a numeric type (SPARK-30292)
- Make interval multiply and divide's overflow behavior consistent with other operations (SPARK-30919)
- Add ANSI type aliases for char and decimal (SPARK-29941)
- SQL parser defines ANSI-compliant reserved keywords (SPARK-26215)
- Forbid reserved keywords as identifiers when ANSI mode is on (SPARK-26976)
- Support ANSI SQL LIKE ... ESCAPE syntax (SPARK-28083)
- Support ANSI SQL Boolean-Predicate syntax (SPARK-27924)
- Better support for correlated subquery processing (SPARK-18455)
Monitoring and debuggability enhancements
- New Structured Streaming UI (SPARK-29543)
- SHS: Allow event logs for running streaming apps to be rolled over (SPARK-28594)
- Add an API that allows a user to define and observe arbitrary metrics on batch and streaming queries (SPARK-29345)
- Instrumentation for tracking per-query planning time (SPARK-26129)
- Put the basic shuffle metrics in the SQL exchange operator (SPARK-26139)
- SQL statement is shown in SQL Tab instead of callsite (SPARK-27045)
- Add tooltip to SparkUI (SPARK-29449)
- Improve the concurrent performance of History Server (SPARK-29043)
- EXPLAIN FORMATTED command (SPARK-27395)
- Support dumping truncated plans and generated code to a file (SPARK-26023)
- Enhance describe framework to describe the output of a query (SPARK-26982)
- Add SHOW VIEWS command (SPARK-31113)
- Improve the error messages of SQL parser (SPARK-27901)
- Support Prometheus monitoring natively (SPARK-29429)
PySpark enhancements
- Redesigned pandas UDFs with type hints (SPARK-28264)
- Pandas UDF pipeline (SPARK-26412)
- Support StructType as arguments and return types for Scalar Pandas UDF (SPARK-27240)
- Support DataFrame cogroup via Pandas UDFs (SPARK-27463)
- Add mapInPandas to allow an iterator of DataFrames (SPARK-28198)
- Certain SQL functions should take column names as well (SPARK-26979)
- Make PySpark SQL exceptions more Pythonic (SPARK-31849)
Documentation and test coverage enhancements
- Build a SQL Reference (SPARK-28588)
- Build a user guide for WebUI (SPARK-28372)
- Build a page for SQL configuration documentation (SPARK-30510)
- Add version information for Spark configuration (SPARK-30839)
- Port regression tests from PostgreSQL (SPARK-27763)
- Thrift-server test coverage (SPARK-28608)
- Test coverage of UDFs (Python UDF, pandas UDF, Scala UDF) (SPARK-27921)
Other notable changes
- Built-in Hive execution upgrade from 1.2.1 to 2.3.6 (SPARK-23710, SPARK-28723, SPARK-31381)
- Use Apache Hive 2.3 dependency by default (SPARK-30034)
- GA Scala 2.12 and remove 2.11 (SPARK-26132)
- Improve logic for timing out executors in dynamic allocation (SPARK-20286)
- Disk-persisted RDD blocks served by shuffle service and ignored for Dynamic Allocation (SPARK-27677)
- Acquire new executors to avoid hang because of blacklisting (SPARK-22148)
- Allow sharing of Netty's memory pool allocators (SPARK-24920)
- Fix deadlock between TaskMemoryManager and UnsafeExternalSorter$SpillableIterator (SPARK-27338)
- Introduce AdmissionControl APIs for Structured Streaming (SPARK-30669)
- Spark History Main page performance improvement (SPARK-25973)
- Speed up and slim down metric aggregation in SQL listener (SPARK-29562)
- Avoid the network when shuffle blocks are fetched from the same host (SPARK-27651)
- Improve file listing for DistributedFileSystem (SPARK-27801)
Behavior changes for Spark core, Spark SQL, and Structured Streaming
The following migration guides list behavior changes between Apache Spark 2.4 and 3.0. These changes may require updates to jobs that you have been running on lower Databricks Runtime versions:
- Migration Guide: Spark Core
- Migration Guide: SQL, Datasets and DataFrame
- Migration Guide: Structured Streaming
- Migration Guide: PySpark (Python on Spark)
The following behavior changes are not covered in these migration guides:
- In Spark 3.0, the deprecated class org.apache.spark.sql.streaming.ProcessingTime has been removed. Use org.apache.spark.sql.streaming.Trigger.ProcessingTime instead. Likewise, org.apache.spark.sql.execution.streaming.continuous.ContinuousTrigger has been removed in favor of Trigger.Continuous, and org.apache.spark.sql.execution.streaming.OneTimeTrigger has been hidden in favor of Trigger.Once. (SPARK-28199)
- In Databricks Runtime 7.0, when reading a Hive SerDe table, by default Spark disallows reading files under a subdirectory that is not a table partition. To enable it, set the configuration spark.databricks.io.hive.scanNonpartitionedDirectory.enabled to true. This does not affect Spark native table readers and file readers. (A sketch of both changes follows this list.)
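A sketch of both changes (df is assumed to be a streaming DataFrame; the sink and trigger interval are illustrative):

```scala
import org.apache.spark.sql.streaming.Trigger

// The removed ProcessingTime class is replaced by Trigger.ProcessingTime.
val query = df.writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("10 seconds"))
  .start()

// Re-enable reading files under non-partition subdirectories of a
// Hive SerDe table.
spark.conf.set("spark.databricks.io.hive.scanNonpartitionedDirectory.enabled", "true")
```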
Programming guides:
- Spark RDD Programming Guide
- Spark SQL, DataFrames and Datasets Guide
- Structured Streaming Programming Guide
MLlib
Highlights
- Multiple columns support was added to Binarizer (SPARK-23578), StringIndexer (SPARK-11215), StopWordsRemover (SPARK-29808) and PySpark QuantileDiscretizer (SPARK-22796)
- Support tree-based feature transformation (SPARK-13677)
- Two new evaluators, MultilabelClassificationEvaluator (SPARK-16692) and RankingEvaluator (SPARK-28045), were added
- Sample weights support was added in DecisionTreeClassifier/Regressor (SPARK-19591), RandomForestClassifier/Regressor (SPARK-9478), GBTClassifier/Regressor (SPARK-9612), RegressionEvaluator (SPARK-24102), BinaryClassificationEvaluator (SPARK-24103), BisectingKMeans (SPARK-30351), KMeans (SPARK-29967) and GaussianMixture (SPARK-30102)
- R API for PowerIterationClustering was added (SPARK-19827)
- Added Spark ML listener for tracking ML pipeline status (SPARK-23674)
- Fit with validation set was added to Gradient Boosted Trees in Python (SPARK-24333)
- RobustScaler transformer was added (SPARK-28399)
- Factorization Machines classifier and regressor were added (SPARK-29224)
- Gaussian Naive Bayes (SPARK-16872) and Complement Naive Bayes (SPARK-29942) were added
- ML function parity between Scala and Python (SPARK-28958)
- predictRaw is made public in all the classification models. predictProbability is made public in all of the classification models except LinearSVCModel (SPARK-30358)
Behavior changes for MLlib
The following migration guide lists behavior changes between Apache Spark 2.4 and 3.0. These changes may require updates to jobs that you have been running on lower Databricks Runtime versions:
The following behavior changes are not covered in the migration guide:
- In Spark 3.0, a multiclass logistic regression in Pyspark will now (correctly) return LogisticRegressionSummary, not the subclass BinaryLogisticRegressionSummary. The additional methods exposed by BinaryLogisticRegressionSummary would not work in this case anyway. (SPARK-31681)
- In Spark 3.0, pyspark.ml.param.shared.Has* mixins no longer provide any set*(self, value) setter methods; use the respective self.set(self.*, value) instead. See SPARK-29093 for details. (SPARK-29093)
Programming guide
SparkR
- Arrow optimization in SparkR's interoperability (SPARK-26759)
- Performance enhancement via vectorized R gapply(), dapply(), createDataFrame, collect()
- "Eager execution" for R shell, IDE (SPARK-24572)
- R API for Power Iteration Clustering (SPARK-19827)
Behavior changes for SparkR
The following migration guide lists behavior changes between Apache Spark 2.4 and 3.0. These changes may require updates to jobs that you have been running on lower Databricks Runtime versions:
Programming guide
GraphX
Programming guide: GraphX Programming Guide.
Deprecations
- Deprecate Python 2 support (SPARK-27884)
- Deprecate R < 3.4 support (SPARK-26014)
Known issues
- Parsing day of year using pattern letter 'D' returns the wrong result if the year field is missing. This can happen in SQL functions like to_timestamp, which parses a datetime string to datetime values using a pattern string. (SPARK-31939)
- Join/Window/Aggregate inside subqueries may lead to wrong results if the keys have values -0.0 and 0.0. (SPARK-31958)
- A window query may fail with an ambiguous self-join error unexpectedly. (SPARK-31956)
- Streaming queries with the dropDuplicates operator may not be able to restart with a checkpoint written by Spark 2.x. (SPARK-31990)
Maintenance updates
See Databricks Runtime 7.0 maintenance updates.
System environment
- Operating System: Ubuntu 18.04.4 LTS
- Java: 1.8.0_252
- Scala: 2.12.10
- Python: 3.7.5
- R: R version 3.6.3 (2020-02-29)
- Delta Lake 0.7.0
Installed Python libraries
Library | Version | Library | Version | Library | Version |
---|---|---|---|---|---|
asn1crypto | 1.3.0 | backcall | 0.1.0 | boto3 | 1.12.0 |
botocore | 1.15.0 | certifi | 2020.4.5 | cffi | 1.14.0 |
chardet | 3.0.4 | cryptography | 2.8 | cycler | 0.10.0 |
Cython | 0.29.15 | decorator | 4.4.1 | docutils | 0.15.2 |
entrypoints | 0.3 | idna | 2.8 | ipykernel | 5.1.4 |
ipython | 7.12.0 | ipython-genutils | 0.2.0 | jedi | 0.14.1 |
jmespath | 0.9.4 | joblib | 0.14.1 | jupyter-client | 5.3.4 |
jupyter-core | 4.6.1 | kiwisolver | 1.1.0 | matplotlib | 3.1.3 |
numpy | 1.18.1 | pandas | 1.0.1 | parso | 0.5.2 |
patsy | 0.5.1 | pexpect | 4.8.0 | pickleshare | 0.7.5 |
pip | 20.0.2 | prompt-toolkit | 3.0.3 | psycopg2 | 2.8.4 |
ptyprocess | 0.6.0 | pyarrow | 0.15.1 | pycparser | 2.19 |
Pygments | 2.5.2 | PyGObject | 3.26.1 | pyOpenSSL | 19.1.0 |
pyparsing | 2.4.6 | PySocks | 1.7.1 | python-apt | 1.6.5+ubuntu0.3 |
python-dateutil | 2.8.1 | pytz | 2019.3 | pyzmq | 18.1.1 |
requests | 2.22.0 | s3transfer | 0.3.3 | scikit-learn | 0.22.1 |
scipy | 1.4.1 | seaborn | 0.10.0 | setuptools | 45.2.0 |
six | 1.14.0 | ssh-import-id | 5.7 | statsmodels | 0.11.0 |
tornado | 6.0.3 | traitlets | 4.3.3 | unattended-upgrades | 0.1 |
urllib3 | 1.25.8 | virtualenv | 16.7.10 | wcwidth | 0.1.8 |
wheel | 0.34.2 |
Installed R libraries
R libraries are installed from the Microsoft CRAN snapshot of 2020-04-22.
Library | Version | Library | Version | Library | Version |
---|---|---|---|---|---|
askpass | 1.1 | assertthat | 0.2.1 | backports | 1.1.6 |
base | 3.6.3 | base64enc | 0.1-3 | BH | 1.72.0-3 |
bit | 1.1-15.2 | bit64 | 0.9-7 | blob | 1.2.1 |
boot | 1.3-25 | brew | 1.0-6 | broom | 0.5.6 |
callr | 3.4.3 | caret | 6.0-86 | cellranger | 1.1.0 |
chron | 2.3-55 | class | 7.3-17 | cli | 2.0.2 |
clipr | 0.7.0 | cluster | 2.1.0 | codetools | 0.2-16 |
colorspace | 1.4-1 | commonmark | 1.7 | compiler | 3.6.3 |
config | 0.3 | covr | 3.5.0 | crayon | 1.3.4 |
crosstalk | 1.1.0.1 | curl | 4.3 | data.table | 1.12.8 |
datasets | 3.6.3 | DBI | 1.1.0 | dbplyr | 1.4.3 |
desc | 1.2.0 | devtools | 2.3.0 | digest | 0.6.25 |
dplyr | 0.8.5 | DT | 0.13 | ellipsis | 0.3.0 |
evaluate | 0.14 | fansi | 0.4.1 | farver | 2.0.3 |
fastmap | 1.0.1 | forcats | 0.5.0 | foreach | 1.5.0 |
foreign | 0.8-76 | forge | 0.2.0 | fs | 1.4.1 |
generics | 0.0.2 | ggplot2 | 3.3.0 | gh | 1.1.0 |
git2r | 0.26.1 | glmnet | 3.0-2 | globals | 0.12.5 |
glue | 1.4.0 | gower | 0.2.1 | graphics | 3.6.3 |
grDevices | 3.6.3 | grid | 3.6.3 | gridExtra | 2.3 |
gsubfn | 0.7 | gtable | 0.3.0 | haven | 2.2.0 |
highr | 0.8 | hms | 0.5.3 | htmltools | 0.4.0 |
htmlwidgets | 1.5.1 | httpuv | 1.5.2 | httr | 1.4.1 |
hwriter | 1.3.2 | hwriterPlus | 1.0-3 | ini | 0.3.1 |
ipred | 0.9-9 | isoband | 0.2.1 | iterators | 1.0.12 |
jsonlite | 1.6.1 | KernSmooth | 2.23-17 | knitr | 1.28 |
labeling | 0.3 | later | 1.0.0 | lattice | 0.20-41 |
lava | 1.6.7 | lazyeval | 0.2.2 | lifecycle | 0.2.0 |
lubridate | 1.7.8 | magrittr | 1.5 | markdown | 1.1 |
MASS | 7.3-51.6 | Matrix | 1.2-18 | memoise | 1.1.0 |
methods | 3.6.3 | mgcv | 1.8-31 | mime | 0.9 |
ModelMetrics | 1.2.2.2 | modelr | 0.1.6 | munsell | 0.5.0 |
nlme | 3.1-147 | nnet | 7.3-14 | numDeriv | 2016.8-1.1 |
openssl | 1.4.1 | parallel | 3.6.3 | pillar | 1.4.3 |
pkgbuild | 1.0.6 | pkgconfig | 2.0.3 | pkgload | 1.0.2 |
plogr | 0.2.0 | plyr | 1.8.6 | praise | 1.0.0 |
prettyunits | 1.1.1 | pROC | 1.16.2 | processx | 3.4.2 |
prodlim | 2019.11.13 | progress | 1.2.2 | promises | 1.1.0 |
proto | 1.0.0 | ps | 1.3.2 | purrr | 0.3.4 |
r2d3 | 0.2.3 | R6 | 2.4.1 | randomForest | 4.6-14 |
rappdirs | 0.3.1 | rcmdcheck | 1.3.3 | RColorBrewer | 1.1-2 |
Rcpp | 1.0.4.6 | readr | 1.3.1 | readxl | 1.3.1 |
recipes | 0.1.10 | rematch | 1.0.1 | rematch2 | 2.1.1 |
remotes | 2.1.1 | reprex | 0.3.0 | reshape2 | 1.4.4 |
rex | 1.2.0 | rjson | 0.2.20 | rlang | 0.4.5 |
rmarkdown | 2.1 | RODBC | 1.3-16 | roxygen2 | 7.1.0 |
rpart | 4.1-15 | rprojroot | 1.3-2 | Rserve | 1.8-6 |
RSQLite | 2.2.0 | rstudioapi | 0.11 | rversions | 2.0.1 |
rvest | 0.3.5 | scales | 1.1.0 | selectr | 0.4-2 |
sessioninfo | 1.1.1 | shape | 1.4.4 | shiny | 1.4.0.2 |
sourcetools | 0.1.7 | sparklyr | 1.2.0 | SparkR | 3.0.0 |
spatial | 7.3-11 | splines | 3.6.3 | sqldf | 0.4-11 |
SQUAREM | 2020.2 | stats | 3.6.3 | stats4 | 3.6.3 |
stringi | 1.4.6 | stringr | 1.4.0 | survival | 3.1-12 |
sys | 3.3 | tcltk | 3.6.3 | TeachingDemos | 2.10 |
testthat | 2.3.2 | tibble | 3.0.1 | tidyr | 1.0.2 |
tidyselect | 1.0.0 | tidyverse | 1.3.0 | timeDate | 3043.102 |
tinytex | 0.22 | tools | 3.6.3 | usethis | 1.6.0 |
utf8 | 1.1.4 | utils | 3.6.3 | vctrs | 0.2.4 |
viridisLite | 0.3.0 | whisker | 0.4 | withr | 2.2.0 |
xfun | 0.13 | xml2 | 1.3.1 | xopen | 1.0.0 |
xtable | 1.8-4 | yaml | 2.2.1 |
Installed Java and Scala libraries (Scala 2.12 cluster version)
Group ID | Artifact ID | Version |
---|---|---|
antlrantlr | antlrantlr | 2.7.72.7.7 |
com.amazonawscom.amazonaws | amazon-kinesis-clientamazon-kinesis-client | 1.12.01.12.0 |
com.amazonawscom.amazonaws | aws-java-sdk-autoscalingaws-java-sdk-autoscaling | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudformationaws-java-sdk-cloudformation | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudfrontaws-java-sdk-cloudfront | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudhsmaws-java-sdk-cloudhsm | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudsearchaws-java-sdk-cloudsearch | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudtrailaws-java-sdk-cloudtrail | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudwatchaws-java-sdk-cloudwatch | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cloudwatchmetricsaws-java-sdk-cloudwatchmetrics | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-codedeployaws-java-sdk-codedeploy | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cognitoidentityaws-java-sdk-cognitoidentity | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-cognitosyncaws-java-sdk-cognitosync | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-configaws-java-sdk-config | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-coreaws-java-sdk-core | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-datapipelineaws-java-sdk-datapipeline | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-directconnectaws-java-sdk-directconnect | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-directoryaws-java-sdk-directory | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-dynamodbaws-java-sdk-dynamodb | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-ec2aws-java-sdk-ec2 | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-ecsaws-java-sdk-ecs | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-efsaws-java-sdk-efs | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-elasticacheaws-java-sdk-elasticache | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-elasticbeanstalkaws-java-sdk-elasticbeanstalk | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-elasticloadbalancingaws-java-sdk-elasticloadbalancing | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-elastictranscoderaws-java-sdk-elastictranscoder | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-emraws-java-sdk-emr | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-glacieraws-java-sdk-glacier | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-iamaws-java-sdk-iam | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-importexportaws-java-sdk-importexport | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-kinesisaws-java-sdk-kinesis | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-kmsaws-java-sdk-kms | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-lambdaaws-java-sdk-lambda | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-logsaws-java-sdk-logs | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-machinelearningaws-java-sdk-machinelearning | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-opsworksaws-java-sdk-opsworks | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-rdsaws-java-sdk-rds | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-redshiftaws-java-sdk-redshift | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-route53aws-java-sdk-route53 | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-s3aws-java-sdk-s3 | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-sesaws-java-sdk-ses | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-simpledbaws-java-sdk-simpledb | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-simpleworkflowaws-java-sdk-simpleworkflow | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-snsaws-java-sdk-sns | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-sqsaws-java-sdk-sqs | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-ssmaws-java-sdk-ssm | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-storagegatewayaws-java-sdk-storagegateway | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-stsaws-java-sdk-sts | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-supportaws-java-sdk-support | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | aws-java-sdk-swf-librariesaws-java-sdk-swf-libraries | 1.11.221.11.22 |
com.amazonawscom.amazonaws | aws-java-sdk-workspacesaws-java-sdk-workspaces | 1.11.6551.11.655 |
com.amazonawscom.amazonaws | jmespath-javajmespath-java | 1.11.6551.11.655 |
com.chuusaicom.chuusai | shapeless_2.12shapeless_2.12 | 2.3.32.3.3 |
com.clearspring.analyticscom.clearspring.analytics | 流 (stream)stream | 2.9.62.9.6 |
com.databrickscom.databricks | RserveRserve | 1.8-31.8-3 |
com.databrickscom.databricks | jets3tjets3t | 0.7.1-00.7.1-0 |
com.databricks.scalapbcom.databricks.scalapb | compilerplugin_2.12compilerplugin_2.12 | 0.4.15-100.4.15-10 |
com.databricks.scalapbcom.databricks.scalapb | scalapb-runtime_2.12scalapb-runtime_2.12 | 0.4.15-100.4.15-10 |
com.esotericsoftwarecom.esotericsoftware | kryo-shadedkryo-shaded | 4.0.24.0.2 |
com.esotericsoftwarecom.esotericsoftware | minlogminlog | 1.3.01.3.0 |
com.fasterxmlcom.fasterxml | classmateclassmate | 1.3.41.3.4 |
com.fasterxml.jackson.corecom.fasterxml.jackson.core | jackson-annotationsjackson-annotations | 2.10.02.10.0 |
com.fasterxml.jackson.corecom.fasterxml.jackson.core | jackson-corejackson-core | 2.10.02.10.0 |
com.fasterxml.jackson.corecom.fasterxml.jackson.core | jackson-databindjackson-databind | 2.10.02.10.0 |
com.fasterxml.jackson.dataformatcom.fasterxml.jackson.dataformat | jackson-dataformat-cborjackson-dataformat-cbor | 2.10.02.10.0 |
com.fasterxml.jackson.datatypecom.fasterxml.jackson.datatype | jackson-datatype-jodajackson-datatype-joda | 2.10.02.10.0 |
com.fasterxml.jackson.modulecom.fasterxml.jackson.module | jackson-module-paranamerjackson-module-paranamer | 2.10.02.10.0 |
com.fasterxml.jackson.modulecom.fasterxml.jackson.module | jackson-module-scala_2.12jackson-module-scala_2.12 | 2.10.02.10.0 |
com.github.ben-manes.caffeinecom.github.ben-manes.caffeine | caffeinecaffeine | 2.3.42.3.4 |
com.github.fommilcom.github.fommil | jniloaderjniloader | 1.11.1 |
com.github.fommil.netlibcom.github.fommil.netlib | corecore | 1.1.21.1.2 |
com.github.fommil.netlibcom.github.fommil.netlib | native_ref-javanative_ref-java | 1.11.1 |
com.github.fommil.netlibcom.github.fommil.netlib | native_ref-java-nativesnative_ref-java-natives | 1.11.1 |
com.github.fommil.netlibcom.github.fommil.netlib | native_system-javanative_system-java | 1.11.1 |
com.github.fommil.netlibcom.github.fommil.netlib | native_system-java-nativesnative_system-java-natives | 1.11.1 |
com.github.fommil.netlibcom.github.fommil.netlib | netlib-native_ref-linux-x86_64-nativesnetlib-native_ref-linux-x86_64-natives | 1.11.1 |
com.github.fommil.netlibcom.github.fommil.netlib | netlib-native_system-linux-x86_64-nativesnetlib-native_system-linux-x86_64-natives | 1.11.1 |
com.github.joshelsercom.github.joshelser | dropwizard-metrics-hadoop-metrics2-reporterdropwizard-metrics-hadoop-metrics2-reporter | 0.1.20.1.2 |
com.github.lubencom.github.luben | zstd-jnizstd-jni | 1.4.4-31.4.4-3 |
com.github.wendykierpcom.github.wendykierp | JTransformsJTransforms | 3.13.1 |
com.google.code.findbugscom.google.code.findbugs | jsr305jsr305 | 3.0.03.0.0 |
com.google.code.gsoncom.google.code.gson | gsongson | 2.2.42.2.4 |
com.google.flatbufferscom.google.flatbuffers | flatbuffers-javaflatbuffers-java | 1.9.01.9.0 |
com.google.guavacom.google.guava | guavaguava | 15.015.0 |
com.google.protobufcom.google.protobuf | protobuf-javaprotobuf-java | 2.6.12.6.1 |
com.h2databasecom.h2database | h2h2 | 1.4.1951.4.195 |
com.helgercom.helger | profilerprofiler | 1.1.11.1.1 |
com.jcraftcom.jcraft | jschjsch | 0.1.500.1.50 |
com.jolboxcom.jolbox | bonecpbonecp | 0.8.0.RELEASE0.8.0.RELEASE |
com.microsoft.azurecom.microsoft.azure | azure-data-lake-store-sdkazure-data-lake-store-sdk | 2.2.82.2.8 |
com.microsoft.sqlservercom.microsoft.sqlserver | mssql-jdbcmssql-jdbc | 8.2.1.jre88.2.1.jre8 |
com.ningcom.ning | compress-lzfcompress-lzf | 1.0.31.0.3 |
com.sun.mailcom.sun.mail | javax.mailjavax.mail | 1.5.21.5.2 |
com.tdunningcom.tdunning | jsonjson | 1.81.8 |
com.thoughtworks.paranamercom.thoughtworks.paranamer | paranamerparanamer | 2.82.8 |
com.trueaccord.lensescom.trueaccord.lenses | lenses_2.12lenses_2.12 | 0.4.120.4.12 |
com.twittercom.twitter | chill-javachill-java | 0.9.50.9.5 |
com.twittercom.twitter | chill_2.12chill_2.12 | 0.9.50.9.5 |
com.twittercom.twitter | util-app_2.12util-app_2.12 | 7.1.07.1.0 |
com.twittercom.twitter | util-core_2.12util-core_2.12 | 7.1.07.1.0 |
com.twittercom.twitter | util-function_2.12util-function_2.12 | 7.1.07.1.0 |
com.twittercom.twitter | util-jvm_2.12util-jvm_2.12 | 7.1.07.1.0 |
com.twitter | util-lint_2.12 | 7.1.0 |
com.twitter | util-registry_2.12 | 7.1.0 |
com.twitter | util-stats_2.12 | 7.1.0 |
com.typesafe | config | 1.2.1 |
com.typesafe.scala-logging | scala-logging_2.12 | 3.7.2 |
com.univocity | univocity-parsers | 2.8.3 |
com.zaxxer | HikariCP | 3.1.0 |
commons-beanutils | commons-beanutils | 1.9.4 |
commons-cli | commons-cli | 1.2 |
commons-codec | commons-codec | 1.10 |
commons-collections | commons-collections | 3.2.2 |
commons-configuration | commons-configuration | 1.6 |
commons-dbcp | commons-dbcp | 1.4 |
commons-digester | commons-digester | 1.8 |
commons-fileupload | commons-fileupload | 1.3.3 |
commons-httpclient | commons-httpclient | 3.1 |
commons-io | commons-io | 2.4 |
commons-lang | commons-lang | 2.6 |
commons-logging | commons-logging | 1.1.3 |
commons-net | commons-net | 3.1 |
commons-pool | commons-pool | 1.5.4 |
info.ganglia.gmetric4j | gmetric4j | 1.0.10 |
io.airlift | aircompressor | 0.10 |
io.dropwizard.metrics | metrics-core | 4.1.1 |
io.dropwizard.metrics | metrics-graphite | 4.1.1 |
io.dropwizard.metrics | metrics-healthchecks | 4.1.1 |
io.dropwizard.metrics | metrics-jetty9 | 4.1.1 |
io.dropwizard.metrics | metrics-jmx | 4.1.1 |
io.dropwizard.metrics | metrics-json | 4.1.1 |
io.dropwizard.metrics | metrics-jvm | 4.1.1 |
io.dropwizard.metrics | metrics-servlets | 4.1.1 |
io.netty | netty-all | 4.1.47.Final |
jakarta.annotation | jakarta.annotation-api | 1.3.5 |
jakarta.validation | jakarta.validation-api | 2.0.2 |
jakarta.ws.rs | jakarta.ws.rs-api | 2.1.6 |
javax.activation | activation | 1.1.1 |
javax.el | javax.el-api | 2.2.4 |
javax.jdo | jdo-api | 3.0.1 |
javax.servlet | javax.servlet-api | 3.1.0 |
javax.servlet.jsp | jsp-api | 2.1 |
javax.transaction | jta | 1.1 |
javax.transaction | transaction-api | 1.1 |
javax.xml.bind | jaxb-api | 2.2.2 |
javax.xml.stream | stax-api | 1.0-2 |
javolution | javolution | 5.5.1 |
jline | jline | 2.14.6 |
joda-time | joda-time | 2.10.5 |
log4j | apache-log4j-extras | 1.2.17 |
log4j | log4j | 1.2.17 |
net.razorvine | pyrolite | 4.30 |
net.sf.jpam | jpam | 1.1 |
net.sf.opencsv | opencsv | 2.3 |
net.sf.supercsv | super-csv | 2.2.0 |
net.snowflake | snowflake-ingest-sdk | 0.9.6 |
net.snowflake | snowflake-jdbc | 3.12.0 |
net.snowflake | spark-snowflake_2.12 | 2.5.9-spark_2.4 |
net.sourceforge.f2j | arpack_combined_all | 0.1 |
org.acplt.remotetea | remotetea-oncrpc | 1.1.2 |
org.antlr | ST4 | 4.0.4 |
org.antlr | antlr-runtime | 3.5.2 |
org.antlr | antlr4-runtime | 4.7.1 |
org.antlr | stringtemplate | 3.2.1 |
org.apache.ant | ant | 1.9.2 |
org.apache.ant | ant-jsch | 1.9.2 |
org.apache.ant | ant-launcher | 1.9.2 |
org.apache.arrow | arrow-format | 0.15.1 |
org.apache.arrow | arrow-memory | 0.15.1 |
org.apache.arrow | arrow-vector | 0.15.1 |
org.apache.avro | avro | 1.8.2 |
org.apache.avro | avro-ipc | 1.8.2 |
org.apache.avro | avro-mapred-hadoop2 | 1.8.2 |
org.apache.commons | commons-compress | 1.8.1 |
org.apache.commons | commons-crypto | 1.0.0 |
org.apache.commons | commons-lang3 | 3.9 |
org.apache.commons | commons-math3 | 3.4.1 |
org.apache.commons | commons-text | 1.6 |
org.apache.curator | curator-client | 2.7.1 |
org.apache.curator | curator-framework | 2.7.1 |
org.apache.curator | curator-recipes | 2.7.1 |
org.apache.derby | derby | 10.12.1.1 |
org.apache.directory.api | api-asn1-api | 1.0.0-M20 |
org.apache.directory.api | api-util | 1.0.0-M20 |
org.apache.directory.server | apacheds-i18n | 2.0.0-M15 |
org.apache.directory.server | apacheds-kerberos-codec | 2.0.0-M15 |
org.apache.hadoop | hadoop-annotations | 2.7.4 |
org.apache.hadoop | hadoop-auth | 2.7.4 |
org.apache.hadoop | hadoop-client | 2.7.4 |
org.apache.hadoop | hadoop-common | 2.7.4 |
org.apache.hadoop | hadoop-hdfs | 2.7.4 |
org.apache.hadoop | hadoop-mapreduce-client-app | 2.7.4 |
org.apache.hadoop | hadoop-mapreduce-client-common | 2.7.4 |
org.apache.hadoop | hadoop-mapreduce-client-core | 2.7.4 |
org.apache.hadoop | hadoop-mapreduce-client-jobclient | 2.7.4 |
org.apache.hadoop | hadoop-mapreduce-client-shuffle | 2.7.4 |
org.apache.hadoop | hadoop-yarn-api | 2.7.4 |
org.apache.hadoop | hadoop-yarn-client | 2.7.4 |
org.apache.hadoop | hadoop-yarn-common | 2.7.4 |
org.apache.hadoop | hadoop-yarn-server-common | 2.7.4 |
org.apache.hive | hive-beeline | 2.3.7 |
org.apache.hive | hive-cli | 2.3.7 |
org.apache.hive | hive-common | 2.3.7 |
org.apache.hive | hive-exec-core | 2.3.7 |
org.apache.hive | hive-jdbc | 2.3.7 |
org.apache.hive | hive-llap-client | 2.3.7 |
org.apache.hive | hive-llap-common | 2.3.7 |
org.apache.hive | hive-metastore | 2.3.7 |
org.apache.hive | hive-serde | 2.3.7 |
org.apache.hive | hive-shims | 2.3.7 |
org.apache.hive | hive-storage-api | 2.7.1 |
org.apache.hive | hive-vector-code-gen | 2.3.7 |
org.apache.hive.shims | hive-shims-0.23 | 2.3.7 |
org.apache.hive.shims | hive-shims-common | 2.3.7 |
org.apache.hive.shims | hive-shims-scheduler | 2.3.7 |
org.apache.htrace | htrace-core | 3.1.0-incubating |
org.apache.httpcomponents | httpclient | 4.5.6 |
org.apache.httpcomponents | httpcore | 4.4.12 |
org.apache.ivy | ivy | 2.4.0 |
org.apache.orc | orc-core | 1.5.10 |
org.apache.orc | orc-mapreduce | 1.5.10 |
org.apache.orc | orc-shims | 1.5.10 |
org.apache.parquet | parquet-column | 1.10.1.2-databricks4 |
org.apache.parquet | parquet-common | 1.10.1.2-databricks4 |
org.apache.parquet | parquet-encoding | 1.10.1.2-databricks4 |
org.apache.parquet | parquet-format | 2.4.0 |
org.apache.parquet | parquet-hadoop | 1.10.1.2-databricks4 |
org.apache.parquet | parquet-jackson | 1.10.1.2-databricks4 |
org.apache.thrift | libfb303 | 0.9.3 |
org.apache.thrift | libthrift | 0.12.0 |
org.apache.velocity | velocity | 1.5 |
org.apache.xbean | xbean-asm7-shaded | 4.15 |
org.apache.yetus | audience-annotations | 0.5.0 |
org.apache.zookeeper | zookeeper | 3.4.14 |
org.codehaus.jackson | jackson-core-asl | 1.9.13 |
org.codehaus.jackson | jackson-jaxrs | 1.9.13 |
org.codehaus.jackson | jackson-mapper-asl | 1.9.13 |
org.codehaus.jackson | jackson-xc | 1.9.13 |
org.codehaus.janino | commons-compiler | 3.0.16 |
org.codehaus.janino | janino | 3.0.16 |
org.datanucleus | datanucleus-api-jdo | 4.2.4 |
org.datanucleus | datanucleus-core | 4.1.17 |
org.datanucleus | datanucleus-rdbms | 4.1.19 |
org.datanucleus | javax.jdo | 3.2.0-m3 |
org.eclipse.jetty | jetty-client | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-continuation | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-http | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-io | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-jndi | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-plus | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-proxy | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-security | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-server | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-servlet | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-servlets | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-util | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-webapp | 9.4.18.v20190429 |
org.eclipse.jetty | jetty-xml | 9.4.18.v20190429 |
org.fusesource.leveldbjni | leveldbjni-all | 1.8 |
org.glassfish.hk2 | hk2-api | 2.6.1 |
org.glassfish.hk2 | hk2-locator | 2.6.1 |
org.glassfish.hk2 | hk2-utils | 2.6.1 |
org.glassfish.hk2 | osgi-resource-locator | 1.0.3 |
org.glassfish.hk2.external | aopalliance-repackaged | 2.6.1 |
org.glassfish.hk2.external | jakarta.inject | 2.6.1 |
org.glassfish.jersey.containers | jersey-container-servlet | 2.30 |
org.glassfish.jersey.containers | jersey-container-servlet-core | 2.30 |
org.glassfish.jersey.core | jersey-client | 2.30 |
org.glassfish.jersey.core | jersey-common | 2.30 |
org.glassfish.jersey.core | jersey-server | 2.30 |
org.glassfish.jersey.inject | jersey-hk2 | 2.30 |
org.glassfish.jersey.media | jersey-media-jaxb | 2.30 |
org.hibernate.validator | hibernate-validator | 6.1.0.Final |
org.javassist | javassist | 3.25.0-GA |
org.jboss.logging | jboss-logging | 3.3.2.Final |
org.jdbi | jdbi | 2.63.1 |
org.joda | joda-convert | 1.7 |
org.jodd | jodd-core | 3.5.2 |
org.json4s | json4s-ast_2.12 | 3.6.6 |
org.json4s | json4s-core_2.12 | 3.6.6 |
org.json4s | json4s-jackson_2.12 | 3.6.6 |
org.json4s | json4s-scalap_2.12 | 3.6.6 |
org.lz4 | lz4-java | 1.7.1 |
org.mariadb.jdbc | mariadb-java-client | 2.1.2 |
org.objenesis | objenesis | 2.5.1 |
org.postgresql | postgresql | 42.1.4 |
org.roaringbitmap | RoaringBitmap | 0.7.45 |
org.roaringbitmap | shims | 0.7.45 |
org.rocksdb | rocksdbjni | 6.2.2 |
org.rosuda.REngine | REngine | 2.1.0 |
org.scala-lang | scala-compiler_2.12 | 2.12.10 |
org.scala-lang | scala-library_2.12 | 2.12.10 |
org.scala-lang | scala-reflect_2.12 | 2.12.10 |
org.scala-lang.modules | scala-collection-compat_2.12 | 2.1.1 |
org.scala-lang.modules | scala-parser-combinators_2.12 | 1.1.2 |
org.scala-lang.modules | scala-xml_2.12 | 1.2.0 |
org.scala-sbt | test-interface | 1.0 |
org.scalacheck | scalacheck_2.12 | 1.14.2 |
org.scalactic | scalactic_2.12 | 3.0.8 |
org.scalanlp | breeze-macros_2.12 | 1.0 |
org.scalanlp | breeze_2.12 | 1.0 |
org.scalatest | scalatest_2.12 | 3.0.8 |
org.slf4j | jcl-over-slf4j | 1.7.30 |
org.slf4j | jul-to-slf4j | 1.7.30 |
org.slf4j | slf4j-api | 1.7.30 |
org.slf4j | slf4j-log4j12 | 1.7.30 |
org.spark-project.spark | unused | 1.0.0 |
org.springframework | spring-core | 4.1.4.RELEASE |
org.springframework | spring-test | 4.1.4.RELEASE |
org.threeten | threeten-extra | 1.5.0 |
org.tukaani | xz | 1.5 |
org.typelevel | algebra_2.12 | 2.0.0-M2 |
org.typelevel | cats-kernel_2.12 | 2.0.0-M4 |
org.typelevel | machinist_2.12 | 0.6.8 |
org.typelevel | macro-compat_2.12 | 1.1.1 |
org.typelevel | spire-macros_2.12 | 0.17.0-M1 |
org.typelevel | spire-platform_2.12 | 0.17.0-M1 |
org.typelevel | spire-util_2.12 | 0.17.0-M1 |
org.typelevel | spire_2.12 | 0.17.0-M1 |
org.xerial | sqlite-jdbc | 3.8.11.2 |
org.xerial.snappy | snappy-java | 1.1.7.5 |
org.yaml | snakeyaml | 1.24 |
oro | oro | 2.0.8 |
pl.edu.icm | JLargeArrays | 1.5 |
software.amazon.ion | ion-java | 1.0.2 |
stax | stax-api | 1.0.1 |
xmlenc | xmlenc | 0.52 |
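The table above records the versions as shipped in the runtime image. As a minimal sketch (not part of the original release notes), you can confirm from a Scala notebook which version of a bundled jar the cluster classpath actually resolves to; joda-time is used here purely as an example, and the same pattern applies to any library in the table whose jar manifest carries an Implementation-Version entry:

```scala
// Minimal sketch: read the version a bundled library reports at runtime.
// getImplementationVersion is read from the jar's MANIFEST.MF; it returns
// null for jars that do not set an Implementation-Version entry.
val jodaVersion = classOf[org.joda.time.DateTime].getPackage.getImplementationVersion
println(s"joda-time on the classpath: $jodaVersion") // expected: 2.10.5, per the table above
```

This check is handy when a library you attach to the cluster pulls in a different version of one of these dependencies and you need to see which one the JVM actually loaded.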