Applies to: Databricks SQL and Databricks Runtime 12.2 and above
Error classes are descriptive, human-readable strings that are unique to the error condition.
You can use error classes to programmatically handle errors in your application without the need to parse the error message.
This is a list of common, named error conditions returned by Azure Databricks.
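For example, in PySpark you can catch an exception and branch on its error class instead of matching the message text. This is a minimal sketch, assuming Databricks Runtime or PySpark 3.4 and above (where pyspark.errors exposes getErrorClass() and getMessageParameters()), an active spark session, and a hypothetical table name:

```python
from pyspark.errors import AnalysisException

try:
    # "main.default.missing_table" is a hypothetical, nonexistent table.
    spark.table("main.default.missing_table").show()
except AnalysisException as e:
    # Branch on the error class rather than parsing the message string.
    if e.getErrorClass() == "TABLE_OR_VIEW_NOT_FOUND":
        print("Table is missing:", e.getMessageParameters())
    else:
        raise
```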
Failed to execute <statementType> command because DEFAULT values are not supported when adding new columns to previously existing target data source with table provider: "<dataSource>".
Non-deterministic expression <sqlExpr> should not appear in the arguments of an aggregate function.
Failed to parse model output when casting to the specified returnType: "<dataType>", response JSON was: "<responseString>". Please update the returnType to match the contents of the type represented by the response JSON and then retry the query again.
The actual model output has more than one column "<responseString>". However, the specified return type ["<dataType>"] has only one column. Please update the returnType to contain the same number of columns as the model output and then retry the query again.
Error occurred while making an HTTP request for function <funcName>: <errorMessage>
Invalid HTTP response for function <funcName>: <errorMessage>
The maximum number of words must be a non-negative integer, but got <maxWords>.
The provided model parameters (<modelParameters>) are invalid in the AI_QUERY function for serving endpoint "<endpointName>".
For more details see AI_FUNCTION_INVALID_MODEL_PARAMETERS
AI function: "<functionName>" requires valid JSON string for responseFormat parameter, but found the following response format: "<invalidResponseFormat>".
Error occurred while parsing the JSON response for function <funcName>: <errorMessage>
Failed to parse the schema for the serving endpoint "<endpointName>": <errorMessage>, response JSON was: "<responseJson>". Set the returnType parameter manually in the AI_QUERY function to override schema resolution.
The function <funcName> is not supported in the current environment. It is only available in Databricks SQL Pro and Serverless.
Failed to evaluate the SQL function "<functionName>" because the provided argument of <invalidValue> has "<invalidDataType>", but only the following types are supported: <supportedDataTypes>. Please update the function call to provide an argument of string type and retry the query again.
AI function: "<functionName>" does not support the type "<invalidResponseFormatType>" of the following response format: "<invalidResponseFormat>". Supported types of the response format are: <supportedResponseFormatTypes>.
AI function: "<functionName>" does not support the following type as return type: "<typeName>". Return type must be a valid SQL type understood by Catalyst and supported by AI function. Current supported types includes: <supportedValues>
Provided value "<argValue>" is not supported by argument "<argName>". Supported values are: <supportedValues>
Expected the serving endpoint task type to be "Chat" for structured output support, but found "<taskType>" for the endpoint "<endpointName>".
Provided "<sqlExpr>" is not supported by the argument returnType.
Conflicting parameters detected for vector_search SQL function: <conflictParamNames>, please specify one from: <parameterNames>.
vector_search SQL function with embedding column type <embeddingColumnType> is not supported.
vector_search SQL function is missing query input parameter, please specify one from: <parameterNames>.
vector_search SQL function with index type <indexType> is not supported.
Failure to materialize vector_search SQL function query from spark type <dataType> to scala-native objects during request-encoding with error: <errorMessage>.
vector_search SQL function with num_results larger than <maxLimit> is not supported. The limit specified was <requestedLimit>. Please try again with num_results <= <maxLimit>
Using name parameterized queries requires all parameters to be named. Parameters missing names: <exprs>.
Cannot use all columns for partition columns.
Cannot alter <scheduleType> on a table without an existing schedule or trigger. Please add a schedule or trigger to the table before attempting to alter it.
ALTER TABLE <type> column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.
Name <name> is ambiguous in nested CTE. Please set <config> to "CORRECTED" so that name defined in inner CTE takes precedence. If set it to "LEGACY", outer CTE definitions will take precedence. See https://spark.apache.org/docs/latest/sql-migration-guide.html#query-engine.
Column or field <name> is ambiguous and has <n> matches.
Column <name> is ambiguous. It's because you joined several DataFrames together, and some of these DataFrames are the same. This column points to one of the DataFrames but Spark is unable to figure out which one. Please alias the DataFrames with different names via DataFrame.alias before joining them, and specify the column using a qualified name, e.g. df.alias("a").join(df.alias("b"), col("a.id") > col("b.id")).
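For instance, a minimal PySpark sketch of the aliasing fix described above (the DataFrame and column names are illustrative):

```python
from pyspark.sql import functions as F

df = spark.range(5)  # single column "id"
# Referencing df["id"] after a self-join is ambiguous; aliasing each side
# and using qualified column names tells Spark which side you mean.
joined = df.alias("a").join(df.alias("b"), F.col("a.id") > F.col("b.id"))
joined.select(F.col("a.id").alias("left_id"), F.col("b.id").alias("right_id")).show()
```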
Ambiguous reference to constraint <constraint>.
Lateral column alias <name> is ambiguous and has <n> matches.
Reference <name> is ambiguous, could be: <referenceNames>.
Ambiguous reference to the field <field>. It appears <count> times in the schema.
The single-pass analyzer cannot process this query or command because the extension choice for <operator> is ambiguous: <extensions>. Please contact Databricks support.
ANALYZE CONSTRAINTS is not supported.
The ANSI SQL configuration <config> cannot be disabled in this product.
AQE thread is interrupted, probably due to query cancellation by user.
The function <functionName> includes a parameter <parameterName> at position <pos> that requires a constant argument. Please compute the argument <sqlExpr> separately and pass the result as a constant.
<message>.<alternative> If necessary set <config> to "false" to bypass this error.
For more details see ARITHMETIC_OVERFLOW
The number of columns or variables assigned or aliased: <numTarget> does not match the number of source expressions: <numExpr>.
Invalid as-of join.
For more details see AS_OF_JOIN
The use of default values is not supported when rescuedDataColumn is enabled. You may be able to remove this check by setting spark.databricks.sql.avro.rescuedDataBlockUserDefinedSchemaDefaultValue to false, but the default values will not apply and null values will still be used.
Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>, however you're trying to read the field as <sqlType>, which would lead to an incorrect answer. To allow reading this field, enable the SQL configuration: "spark.sql.legacy.avro.allowIncompatibleSchema".
The use of positional field matching is not supported when either rescuedDataColumn or failOnUnknownFields is enabled. Remove these options to proceed.
Unable to find batch <batchMetadataFile>.
BigQuery connection credentials must be specified with either the 'GoogleServiceAccountKeyJson' parameter or all of 'projectId', 'OAuthServiceAcctEmail', 'OAuthPvtKey'
<value1> <symbol> <value2> caused overflow. Use <functionName> to ignore overflow problem and return NULL instead.
Boolean statement <invalidStatement> is invalid. Expected single row with a value of the BOOLEAN type, but got an empty row.
<operation> doesn't support built-in catalogs.
The method <methodName> can not be called on streaming Dataset/DataFrame.
ALTER TABLE (ALTER|CHANGE) COLUMN cannot change collation of type/subtypes of bucket columns, but found the bucket column <columnName> in the table <tableName>.
ALTER TABLE (ALTER|CHANGE) COLUMN is not supported for partition columns, but found the partition column <columnName> in the table <tableName>.
Watermark needs to be defined to reassign event time column. Failed to find watermark definition in the streaming query.
Cannot cast <sourceType> to <targetType>.
Cannot convert Protobuf <protobufColumn> to SQL <sqlColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).
Unable to convert <protobufType> of Protobuf to SQL type <toType>.
Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).
Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because <data> is not in defined values for enum: <enumString>.
Cannot copy catalog state like current database and temporary views from Unity Catalog to a legacy catalog.
Failed to create data source table <tableName>:
For more details see CANNOT_CREATE_DATA_SOURCE_TABLE
The provided URL cannot be decoded: <url>. Please ensure that the URL is properly formatted and try again.
System owned <resourceType> cannot be deleted.
Cannot drop the constraint with the name <constraintName> shared by a CHECK constraint and a PRIMARY KEY or FOREIGN KEY constraint. You can drop the PRIMARY KEY or FOREIGN KEY constraint by queries: ALTER TABLE .. DROP PRIMARY KEY or ALTER TABLE .. DROP FOREIGN KEY ..
Cannot establish connection to remote <jdbcDialectName> database. Please check connection information and credentials e.g. host, port, user, password and database options. ** If you believe the information is correct, please check your workspace's network setup and ensure it does not have outbound restrictions to the host. Please also check that the host does not block inbound connections from the network where the workspace's Spark clusters are deployed. ** Detailed error message: <causeErrorMessage>.
Cannot establish connection to remote <jdbcDialectName> database. Please check connection information and credentials e.g. host, port, user, password and database options. ** If you believe the information is correct, please allow inbound traffic from the Internet to your host, as you are using Serverless Compute. If your network policies do not allow inbound Internet traffic, please use non Serverless Compute, or you may reach out to your Databricks representative to learn about Serverless Private Networking. ** Detailed error message: <causeErrorMessage>.
Dataset transformations and actions can only be invoked by the driver, not inside of other Dataset transformations; for example, dataset1.map(x => dataset2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the dataset1.map transformation. For more information, see SPARK-28702.
Cannot load class <className> when registering the function <functionName>, please make sure it is on the classpath.
Could not load Protobuf class with name <protobufClassName>. <explanation>.
An error occurred during loading state.
For more details see CANNOT_LOAD_STATE_STORE
Failed to merge incompatible data types <left> and <right>. Please check the data types of the columns being merged and ensure that they are compatible. If necessary, consider casting the columns to compatible data types before attempting the merge.
Failed merging schemas:
Initial schema:
<left>
Schema that cannot be merged with the initial schema:
<right>.
Cannot modify the value of the Spark config: <key>. See also https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements.
Cannot parse decimal. Please ensure that the input is a valid number with optional decimal point or comma separators.
Unable to parse <intervalString>. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format. If the issue persists, please double check that the input value is not null or empty and try again.
Cannot parse the field name <fieldName> and the value <fieldValue> of the JSON token type <jsonType> to target Spark data type <dataType>.
Error parsing descriptor bytes into Protobuf FileDescriptorSet.
<message>. If necessary set <ansiConfig> to "false" to bypass this error.
Cannot query MV/ST during initialization.
For more details see CANNOT_QUERY_TABLE_DURING_INITIALIZATION
Cannot read file at path <path> because it has been archived. Please adjust your query filters to exclude archived files.
Cannot read <format> file at path: <path>.
For more details see CANNOT_READ_FILE
Cannot read sensitive key '<key>' from secure provider.
Cannot recognize hive type string: <fieldType>, column: <fieldName>. The specified data type for the field cannot be recognized by Spark SQL. Please check the data type of the specified field and ensure that it is a valid Spark SQL data type. Refer to the Spark SQL documentation for a list of valid data types and their format. If the data type is correct, please ensure that you are using a supported version of Spark SQL.
Cannot reference a Unity Catalog <objType> in Hive Metastore objects.
Cannot remove reserved property: <property>.
Renaming a <type> across catalogs is not allowed.
Renaming a <type> across schemas is not allowed.
Cannot resolve dataframe column <name>. It's probably because of illegal references like df1.select(df2.col("a")).
Cannot resolve <targetString>.* given input columns <columns>. Please check that the specified table or struct exists and is accessible in the input columns.
Failed to set permissions on created path <path> back to <permission>.
Cannot shallow-clone tables across Unity Catalog and Hive Metastore.
Cannot shallow-clone a table <table> that is already a shallow clone.
Shallow clone is only supported for the MANAGED table type. The table <table> is not MANAGED table.
Cannot update <table> field <fieldName> type:
For more details see CANNOT_UPDATE_FIELD
Cannot up cast <expression> from <sourceType> to <targetType>.
<details>
Cannot load Kryo serialization codec. Kryo serialization cannot be used in the Spark Connect client. Use Java serialization, provide a custom Codec, or use Spark Classic instead.
Validation of <jdbcDialectName> connection is not supported. Please contact Databricks support for alternative solutions, or set "spark.databricks.testConnectionBeforeCreation" to "false" to skip connection testing before creating a connection object.
Error writing state store files for provider <providerClass>.
For more details see CANNOT_WRITE_STATE_STORE
The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead.
For more details see CAST_INVALID_INPUT
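As a quick illustration of the try_cast remediation mentioned above (a sketch; the literal values are made up):

```python
# CAST('abc' AS INT) fails under ANSI mode with a malformed-input error;
# try_cast returns NULL for the malformed value instead of failing.
spark.sql("SELECT try_cast('abc' AS INT) AS bad, try_cast('42' AS INT) AS ok").show()
```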
The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead.
Fail to assign a value of <sourceType> type to the <targetType> type column or variable <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.
The catalog <catalogName> not found. Consider to set the SQL config <config> to a catalog plugin.
Checkpoint block <rddBlockId> not found!
Either the executor that originally checkpointed this partition is no longer alive, or the original RDD is unpersisted.
If this problem persists, you may consider using rdd.checkpoint() instead, which is slower than local checkpointing but more fault-tolerant.
Cannot have circular references in class, but got the circular reference of class <t>.
<className> must override either <method1> or <method2>.
MapObjects does not support the class <cls> as resulting collection.
Clean Room commands are not supported
Invalid name to reference a <type> inside a Clean Room. Use a <type>'s name inside the clean room following the format of [catalog].[schema].[<type>]. If you are unsure about what name to use, you can run "SHOW ALL IN CLEANROOM [clean_room]" and use the value in the "name" column.
A file notification was received for file: <filePath> but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration <config> to true.
Cloud provider error: <message>
Specified clustering does not match that of the existing table <tableName>.
Specified clustering columns: [<specifiedClusteringString>].
Existing clustering columns: [<existingClusteringString>].
'<operation>' does not support clustering.
Please contact your Databricks representative to enable the cluster-by-auto feature.
Please enable clusteringTable.enableClusteringTableFeature to use CLUSTER BY AUTO.
CLUSTER BY AUTO requires Predictive Optimization to be enabled.
CLUSTER BY AUTO is only supported on UC Managed tables.
The codec <codecName> is not available.
For more details see CODEC_NOT_AVAILABLE
Cannot find a short name for the codec <codecName>.
The value <collationName> does not represent a correct collation name. Suggested valid collation names: [<proposals>].
The value <provider> does not represent a correct collation provider. Supported providers are: [<supportedProviders>].
Could not determine which collation to use for string functions and operators.
For more details see COLLATION_MISMATCH
Can't create array with <numberOfElements> elements which exceeding the array size limit <maxRoundedArrayLength>,
For more details see COLLECTION_SIZE_LIMIT_EXCEEDED
Column aliases are not allowed in <op>.
The column <columnName> already exists. Choose another name or rename the existing column.
Some values in field <pos> are incompatible with the column array type. Expected type <type>.
Creating CHECK constraint on table <tableName> with column mask policies is not supported.
A <statementType> statement attempted to assign a column mask policy to a column which included two or more other referenced columns in the USING COLUMNS list with the same name <columnName>, which is invalid.
Column mask policies for <tableName> are not supported:
For more details see COLUMN_MASKS_FEATURE_NOT_SUPPORTED
Unable to <statementType> <columnName> from table <tableName> because it's referenced in a column mask policy for column <maskedColumn>. The table owner must remove or alter this policy before proceeding.
MERGE INTO operations do not support column mask policies in source table <tableName>.
MERGE INTO operations do not support writing into table <tableName> with column mask policies.
This statement attempted to assign a column mask policy to a column <columnName> with multiple name parts, which is invalid.
This statement attempted to assign a column mask policy to a column and the USING COLUMNS list included the name <columnName> with multiple name parts, which is invalid.
Support for defining column masks is not enabled
Column mask policies are only supported in Unity Catalog.
SHOW PARTITIONS command is not supported for <format> tables with column masks.
<mode> clone from table <tableName> with column mask policies is not supported.
<mode> clone to table <tableName> with column mask policies is not supported.
Using a constant as a parameter in a column mask policy is not supported. Please update your SQL command to remove the constant from the column mask definition and then retry the command again.
Failed to execute <statementType> command because assigning column mask policies is not supported for target data source with table provider: "<provider>".
Cannot perform <operation> for table <tableName> because it contains one or more column mask policies with subquery expression(s), which are not yet supported. Please contact the owner of the table to update the column mask policies in order to continue.
The column <columnName> had the same name as the target column, which is invalid; please remove the column from the USING COLUMNS list and retry the command.
<colType> column <colName> is not defined in table <tableName>, defined table columns are: <tableCols>.
The column <colName> cannot be found. Verify the spelling and correctness of the column name according to the SQL config <caseSensitiveConfig>.
Column ordinal out of bounds. The number of columns in the table is <attributesLength>, but the column ordinal is <ordinal>.
Attributes are the following: <attributes>.
Unexpected ',' before constraint(s) definition. Ensure that the constraint clause does not start with a comma when columns (and expectations) are not defined.
The COMMENT ON CONNECTION command is not implemented yet
The comparator has returned a NULL for a comparison between <firstValue> and <secondValue>.
It should return a positive integer for "greater than", 0 for "equal" and a negative integer for "less than".
To revert to deprecated behavior where NULL is treated as 0 (equal), you must set "spark.sql.legacy.allowNullComparisonResultInArraySort" to "true".
Cannot process input data types for the expression: <expression>.
For more details see COMPLEX_EXPRESSION_UNSUPPORTED_INPUT
Another instance of this query [id: <queryId>] was just started by a concurrent session [existing runId: <existingQueryRunId> new runId: <newQueryRunId>].
Concurrent update to the log. Multiple streaming jobs detected for <batchId>.
Please make sure only one streaming job runs on a specific checkpoint location at a time.
Configuration <config> is not available.
Conflicting directory structures detected.
Suspicious paths:
<discoveredBasePaths>
If provided paths are partition directories, please set "basePath" in the options of the data source to specify the root directory of the table.
If there are multiple root directories, please load them separately and then union them.
Conflicting partition column names detected:
<distinctPartColLists>
For partitioned table directories, data files should only live in leaf directories.
And directories at the same level should have the same partition column name.
Please check the following directories for unexpected files or inconsistent partition column names:
<suspiciousPaths>
The specified provider <provider> is inconsistent with the existing catalog provider <expectedProvider>. Please use 'USING <expectedProvider>' and retry the command.
Generic Spark Connect error.
For more details see CONNECT
Cannot create connection <connectionName> because it already exists.
Choose a different name, drop or replace the existing connection, or add the IF NOT EXISTS clause to tolerate pre-existing connections.
Cannot execute this command because the connection name must be non-empty.
Cannot execute this command because the connection name <connectionName> was not found.
Connections of type '<connectionType>' do not support the following option(s): <optionsNotSupported>. Supported options: <allowedOptions>.
Cannot create connection of type '<connectionType>. Supported connection types: <allowedTypes>.
Table constraints are only supported in Unity Catalog.
The value <str> (<fmt>) cannot be converted to <targetType> because it is malformed. Correct the value as per the syntax, or change its format. Use <suggestion> to tolerate malformed input and return NULL instead.
Cannot write to <tableName>, the reason is
For more details see COPY_INTO_COLUMN_ARITY_MISMATCH
Invalid scheme <scheme>. COPY INTO source credentials currently only supports s3/s3n/s3a/wasbs/abfss.
COPY INTO source credentials must specify <keyList>.
Duplicated files were committed in a concurrent COPY INTO operation. Please try again later.
Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/abfss.
COPY INTO encryption only supports ADLS Gen2, or abfss:// file scheme
COPY INTO source encryption must specify '<key>'.
Invalid encryption option <requiredKey>. COPY INTO source encryption must specify '<requiredKey>' = '<keyValue>'.
The COPY INTO feature '<feature>' is not compatible with '<incompatibleSetting>'.
COPY INTO other than appending data is not allowed to run concurrently with other transactions. Please try again later.
COPY INTO failed to load its state, maximum retries exceeded.
A schema mismatch was detected while copying into the Delta table (Table: <table>).
This may indicate an issue with the incoming data, or the Delta table schema can be evolved automatically according to the incoming data by setting:
COPY_OPTIONS ('mergeSchema' = 'true')
Schema difference:
<schemaDiff>
The format of the source files must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE. Using COPY INTO on Delta tables as the source is not supported as duplicate data may be ingested after OPTIMIZE operations. This check can be turned off by running the SQL command set spark.databricks.delta.copyInto.formatCheck.enabled = false.
The source directory did not contain any parsable files of type <format>. Please check the contents of '<source>'.
The error can be silenced by setting '<config>' to 'false'.
An internal error occurred while processing COPY INTO state.
For more details see COPY_INTO_STATE_INTERNAL_ERROR
Failed to parse the COPY INTO command.
For more details see COPY_INTO_SYNTAX_ERROR
The COPY INTO feature '<feature>' is not supported.
Cannot unload data in format '<formatType>'. Supported formats for <connectionType> are: <allowedFormats>.
The CREATE FOREIGN SCHEMA command is not implemented yet
The CREATE FOREIGN TABLE command is not implemented yet
Cannot CREATE OR REFRESH materialized views or streaming tables with ASYNC specified. Please remove ASYNC from the CREATE OR REFRESH statement or use REFRESH ASYNC to refresh existing materialized views or streaming tables asynchronously.
Not allowed to create the permanent view <name> without explicitly assigning an alias for the expression <attr>.
CREATE TABLE column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.
Cannot create view <viewName>, the reason is
For more details see CREATE_VIEW_COLUMN_ARITY_MISMATCH
Please provide credentials when creating or updating external locations.
The CSV option enforceSchema cannot be set when using rescuedDataColumn or failOnUnknownFields, as columns are read by name rather than ordinal.
Cyclic function reference detected: <path>.
Databricks Delta is not enabled in your account.<hints>
Cannot resolve <sqlExpr> due to data type mismatch:
For more details see DATATYPE_MISMATCH
DataType <type> requires a length parameter, for example <type>(10). Please specify the length.
Write Lineage unsuccessful: missing corresponding relation with policies for CLM/RLS.
Data source '<provider>' already exists. Please choose a different name for the new data source.
Encountered error when saving to external data source.
Data source '<provider>' not found. Please make sure the data source is registered.
Failed to find the data source: <provider>. Make sure the provider name is correct and the package is properly registered and compatible with your Spark version.
Option <option> must not be empty and should not contain invalid characters, query strings, or parameters.
Option <option> is required.
The schema of the data source table does not match the expected schema. If you are using the DataFrameReader.schema API or creating a table, avoid specifying the schema.
Data Source schema: <dsSchema>
Expected schema: <expectedSchema>
JDBC URL is not allowed in data source options, please specify 'host', 'port', and 'database' options instead.
<rangeMessage>. If necessary set <ansiConfig> to "false" to bypass this error.
Datetime operation overflow: <operation>.
You have exceeded the API quota for the data source <sourceName>.
For more details see DC_API_QUOTA_EXCEEDED
Failed to make a connection to the <sourceName> source. Error code: <errorCode>.
For more details see DC_CONNECTION_ERROR
Error happened in Dynamics API calls, errorCode: <errorCode>.
For more details see DC_DYNAMICS_API_ERROR
Error happened in Netsuite JDBC calls, errorCode: <errorCode>.
For more details see DC_NETSUITE_ERROR
A schema change has occurred in table <tableName> of the <sourceName> source.
For more details see DC_SCHEMA_CHANGE_ERROR
Error happened in ServiceNow API calls, errorCode: <errorCode>.
For more details see DC_SERVICENOW_API_ERROR
Ingestion for object <objName> is incomplete because the Salesforce API query job took too long, failed, or was manually cancelled.
To try again, you can either re-run the entire pipeline or refresh this specific destination table. If the error persists, file a ticket. Job ID: <jobId>. Job status: <jobStatus>.
Error happened in Sharepoint API calls, errorCode: <errorCode>.
For more details see DC_SHAREPOINT_API_ERROR
An error occurred in the <sourceName> API call. Source API type: <apiType>. Error code: <errorCode>.
This can sometimes happen when you've reached a <sourceName> API limit. If you haven't exceeded your API limit, try re-running the connector. If the issue persists, please file a ticket.
Unsupported error happened in data source <sourceName>.
For more details see DC_UNSUPPORTED_ERROR
Error happened in Workday RAAS API calls, errorCode: <errorCode>.
For more details see DC_WORKDAY_RAAS_API_ERROR
Decimal precision <precision> exceeds max precision <maxPrecision>.
Default database <defaultDatabase> does not exist, please create it first or change default database to <defaultDatabase>.
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If disk cache is stale or the underlying files have been removed, you can invalidate disk cache manually by restarting the cluster.
A DEFAULT keyword in a MERGE, INSERT, UPDATE, or SET VARIABLE command could not be directly assigned to a target column because it was part of an expression.
For example: UPDATE SET c1 = DEFAULT is allowed, but UPDATE T SET c1 = DEFAULT + 1 is not allowed.
Failed to execute <statementType> command because DEFAULT values are not supported for target data source with table provider: "<dataSource>".
The streaming query was reading from an unexpected Delta table (id = '<newTableId>').
It used to read from another Delta table (id = '<oldTableId>') according to checkpoint.
This may happen when you changed the code to read from a new table or you deleted and re-created a table. Please revert your change or delete your streaming query checkpoint to restart from scratch.
Distinct window functions are not supported: <windowExpr>.
Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead. If necessary set <config> to "false" to bypass this error.
For more details see DIVIDE_BY_ZERO
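For example, a small sketch of the two remediations above, assuming the relevant flag is spark.sql.ansi.enabled as in open source Spark:

```python
# try_divide returns NULL instead of raising when the divisor is 0.
spark.sql("SELECT try_divide(10, 0) AS quotient").show()

# Alternatively, disabling ANSI mode makes plain division return NULL on 0
# (this relaxes other ANSI checks as well, so prefer try_divide).
spark.conf.set("spark.sql.ansi.enabled", "false")
spark.sql("SELECT 10 / 0 AS quotient").show()
```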
Expectations are only supported within a Delta Live Tables pipeline.
MATERIALIZED VIEWs with a CLUSTER BY clause are supported only in a Delta Live Tables pipeline.
<mv> locations are supported only in a Delta Live Tables pipeline.
<mv> schemas with a specified type are supported only in a Delta Live Tables pipeline.
CONSTRAINT clauses in a view are only supported in a Delta Live Tables pipeline.
Cannot drop SCHEDULE on a table without an existing schedule or trigger.
CTE definition can't have duplicate names: <duplicateNames>.
Duplicated field names in Arrow Struct are not allowed, got <fieldNames>.
Duplicate map key <key> was found, please check the input data.
If you want to remove the duplicated keys, you can set <mapKeyDedupPolicy> to "LAST_WIN" so that the key inserted at last takes precedence.
The metric name is not unique: <metricName>. The same name cannot be used for metrics with different results.
However multiple instances of metrics with the same result and name are allowed (e.g. self-joins).
The columns or variables <nameList> appear more than once as assignment targets.
Found duplicate clauses: <clauseName>. Please, remove one of them.
Found duplicate keys <keyColumn>.
Call to routine <routineName> is invalid because it includes multiple argument assignments to the same parameter name <parameterName>.
For more details see DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT
Found duplicate name(s) in the parameter list of the user-defined routine <routineName>: <names>.
Found duplicate column(s) in the RETURNS clause column list of the user-defined routine <routineName>: <columns>.
Previous node emitted a row with eventTime=<emittedRowEventTime> which is older than current_watermark_value=<currentWatermark>.
This can lead to correctness issues in the stateful operators downstream in the execution pipeline.
Please correct the operator logic to emit rows after current global watermark value.
Failed to parse an empty string for data type <dataType>.
Empty local file in staging <operation> query
The <format> datasource does not support writing empty or nested empty schemas. Please make sure the data schema has at least one or more column(s).
Not found an encoder of the type <typeName> to Spark SQL internal representation. Consider to change the input type to one of supported at '<docroot>/sql-ref-datatypes.html'.
End label <endLabel> can not exist without begin label.
Some of partitions in Kafka topic(s) report available offset which is less than end offset during running query with Trigger.AvailableNow. The error could be transient - restart your query, and report if you still see the same issue.
latest offset: <latestOffset>, end offset: <endOffset>
For Kafka data source with Trigger.AvailableNow, end offset should have lower or equal offset per each topic partition than pre-fetched offset. The error could be transient - restart your query, and report if you still see the same issue.
pre-fetched offset: <prefetchedOffset>, end offset: <endOffset>.
Error reading avro data - encountered an unknown fingerprint: <fingerprint>, not sure what schema to use.
This could happen if you registered additional schemas after starting your spark context.
Cannot query event logs from an Assigned or No Isolation Shared cluster, please use a Shared cluster or a Databricks SQL warehouse instead.
No event logs available for <tableOrPipeline>. Please try again later after events are generated
The table type of <tableIdentifier> is <tableType>.
Querying event logs only supports materialized views, streaming tables, or Delta Live Tables pipelines
The event time <eventName> has the invalid type <eventType>, but expected "TIMESTAMP".
Exceeds char/varchar type length limitation: <limit>.
EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.
Columns in an EXCEPT list must be distinct and non-overlapping, but got (<columns>).
EXCEPT columns [<exceptColumns>] were resolved, but do not match any of the columns [<expandedColumns>] from the star expansion.
The column/field name <objectName> in the EXCEPT clause cannot be resolved. Did you mean one of the following: [<objectList>]?
Note: nested columns in the EXCEPT clause may not include qualifiers (table name, parent struct column name, etc.) during a struct expansion; try removing qualifiers if they are used with nested columns.
There is not enough memory to build the broadcast relation <relationClassName>. Relation Size = <relationSize>. Total memory used by this task = <taskMemoryUsage>. Executor Memory Manager Metrics: onHeapExecutionMemoryUsed = <onHeapExecutionMemoryUsed>, offHeapExecutionMemoryUsed = <offHeapExecutionMemoryUsed>, onHeapStorageMemoryUsed = <onHeapStorageMemoryUsed>, offHeapStorageMemoryUsed = <offHeapStorageMemoryUsed>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using 'set spark.sql.autoBroadcastJoinThreshold=-1' or using join hint to force shuffle join.
There is not enough memory to store the broadcast relation <relationClassName>. Relation Size = <relationSize>. StorageLevel = <storageLevel>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using 'set spark.sql.autoBroadcastJoinThreshold=-1' or using join hint to force shuffle join.
The USING clause of this EXECUTE IMMEDIATE command contained multiple arguments with same alias (<aliases>), which is invalid; please update the command to specify unique aliases and then try it again.
'<operation>' expects a permanent view but <viewName> is a temp view.
'<operation>' expects a table but <viewName> is a view.
For more details see EXPECT_TABLE_NOT_VIEW
The table <tableName> does not support <operation>.
For more details see EXPECT_VIEW_NOT_TABLE
Failed to decode a row to a value of the expressions: <expressions>.
Failed to encode a value of the expressions: <expressions> to a row.
Column expression <expr> cannot be sorted because its type <exprType> is not orderable.
External tables don't support the <scheme> scheme.
Error running 'REFRESH FOREIGN <scope> <name>'. Cannot refresh a Fabric <scope> directly, please use 'REFRESH FOREIGN CATALOG <catalogName>' to refresh the Fabric Catalog instead.
User defined function (<functionName>: (<signature>) => <result>) failed due to: <reason>.
Failed preparing of the function <funcName> for call. Please, double check function's arguments.
Failed JDBC <url> on the operation:
For more details see FAILED_JDBC
Failed parsing struct: <raw>.
Error while reading file <path>.
For more details see FAILED_READ_FILE
Failed to register classes with Kryo.
Failed to rename <sourcePath> to <targetPath> as destination already exists.
Failed to rename temp file <srcPath> to <dstPath> as FileSystem.rename returned false.
Failed to convert the row value <value> of the class <class> to the target SQL type <sqlType> in the JSON format.
Failed to load routine <routineName>.
The statement, including potential SQL functions and referenced views, was too complex to parse.
To mitigate this error divide the statement into multiple, less complex chunks.
The feature <featureName> is not enabled. Consider setting the config <configKey> to <configValue> to enable this capability.
<feature> is not supported on Classic SQL warehouses. To use this feature, use a Pro or Serverless SQL warehouse.
<feature> is not supported without Unity Catalog. To use this feature, enable Unity Catalog.
<feature> is not supported in your environment. To use this feature, please contact Databricks Support.
Cannot <op> column, because <fieldNames> already exists in <struct>.
No such struct field <fieldName> in <fields>.
File in staging path <path> already exists but OVERWRITE is not set
An error occurred in the user provided function in flatMapGroupsWithState. Reason: <reason>
The operation <statement> is not allowed on the <objectType>: <objectName>.
An error occurred in the user provided function in foreach batch sink. Reason: <reason>
An error occurred in the user provided function in foreach sink. Reason: <reason>
Foreign key parent columns <parentColumns> do not match primary key child columns <childColumns>.
Cannot execute this command because the foreign <objectType> name must be non-empty.
Detected multiple data sources with the name '<provider>'. Please check the data source isn't simultaneously registered and located in the classpath.
from_json inference encountered conflicting schema updates at: <location>
from_json found columnNameOfCorruptRecord (<columnNameOfCorruptRecord>) present in a JSON object and can no longer proceed. Please configure a different value for the option 'columnNameOfCorruptRecord'.
from_json inference could not read the schema stored at: <location>
from_json was unable to infer the schema. Please provide one instead.
from_json inference is only supported when defining streaming tables
from_json configuration is invalid:
For more details see FROM_JSON_INVALID_CONFIGURATION
from_json could not evolve from <old> to <new>
The function <function> requires named parameters. Parameters missing names: <exprs>. Please update the function call to add names for all parameters, e.g., <function>(param_name => …).
A column cannot have both a default value and a generation expression but column <colName> has default value: (<defaultValue>) and generation expression: (<genExpr>).
Hive 2.2 and lower versions don't support getTablesByType. Please use Hive 2.3 or higher version.
Failed to get warmup tracing. Cause: <cause>.
Function get_warmup_tracing() not allowed.
Invalid Graphite protocol: <protocol>.
Graphite sink requires '<property>' property.
Column of grouping (<grouping>) can't be found in grouping columns <groupingColumns>.
Columns of grouping_id (<groupingIdColumn>) does not match grouping columns (<groupByColumns>).
Grouping sets size cannot be greater than <maxSize>.
Aggregate functions are not allowed in GROUP BY, but found <sqlExpr>.
For more details see GROUP_BY_AGGREGATE
GROUP BY <index> refers to an expression <aggExpr> that contains an aggregate function. Aggregate functions are not allowed in GROUP BY.
GROUP BY position <index> is not in select list (valid range is [1, <size>]).
The expression <sqlExpr> cannot be used as a grouping expression because its data type <dataType> is not an orderable data type.
When attempting to read from HDFS, HTTP request failed.
For more details see HDFS_HTTP_ERROR
Invalid call to <function>; only valid HLL sketch buffers are supported as inputs (such as those produced by the hll_sketch_agg function).
Invalid call to <function>; the lgConfigK value must be between <min> and <max>, inclusive: <value>.
Sketches have different lgConfigK values: <left> and <right>. Set the allowDifferentLgConfigK parameter to true to call <function> with different lgConfigK values.
A failure occurred when attempting to resolve a query or command with both the legacy fixed-point analyzer as well as the single-pass resolver.
For more details see HYBRID_ANALYZER_EXCEPTION
<identifier> is not a valid identifier as it has more than 2 name parts.
Duplicated IDENTITY column sequence generator option: <sequenceGeneratorOption>.
IDENTITY column step cannot be 0.
DataType <dataType> is not supported for IDENTITY columns.
Illegal input for day of week: <string>.
Illegal value provided to the State Store
For more details see ILLEGAL_STATE_STORE_VALUE
Connection can't be created due to inappropriate scheme of URI <uri> provided for the connection option '<option>'.
Allowed scheme(s): <allowedSchemes>.
Please add a scheme if it is not present in the URI, or specify a scheme from the allowed values.
Invalid pivot column <columnName>. Pivot columns must be comparable.
<operator> can only be performed on tables with compatible column types. The <columnOrdinalNumber> column of the <tableOrdinalNumber> table is <dataType1> type which is not compatible with <dataType2> at the same column of the first table.<hint>.
Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: <message>
Cannot write incompatible data for the table <tableName>:
For more details see INCOMPATIBLE_DATA_FOR_TABLE
The join types <joinType1> and <joinType2> are incompatible.
The SQL query of view <viewName> has an incompatible schema change and column <colName> cannot be resolved. Expected <expectedNum> columns named <colName> but got <actualCols>.
Please try to re-create the view by running: <suggestion>.
Incomplete complex type:
For more details see INCOMPLETE_TYPE_DEFINITION
You may get a different result due to the upgrading to
For more details see INCONSISTENT_BEHAVIOR_CROSS_VERSION
<failure>, <functionName> requires at least <minArgs> arguments and at most <maxArgs> arguments.
Max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but 'rampUpTimeSeconds' is <rampUpTimeSeconds>.
Function called requires knowledge of the collation it should apply, but indeterminate collation was found. Use COLLATE function to set the collation explicitly.
Cannot create the index <indexName> on table <tableName> because it already exists.
Cannot find the index <indexName> on table <tableName>.
Trigger type <trigger> is not supported for this cluster type.
Use a different trigger type e.g. AvailableNow, Once.
Cannot write to <tableName>, the reason is
For more details see INSERT_COLUMN_ARITY_MISMATCH
Cannot write to '<tableName>', <reason>:
Table columns: <tableColumns>.
Partition columns with static values: <staticPartCols>.
Data columns: <dataColumns>.
Insufficient privileges:
<report>
User <user> has insufficient privileges for external location <location>.
There is no owner for <securableName>. Ask your administrator to set an owner.
User does not own <securableName>.
User does not have permission <action> on <securableName>.
The owner of <securableName> is different from the owner of <parentSecurableName>.
Storage credential <credentialName> has insufficient privileges.
User cannot <action> on <securableName> because of permissions on underlying securables.
User cannot <action> on <securableName> because of permissions on underlying securables:
<underlyingReport>
Integer overflow while operating with intervals.
For more details see INTERVAL_ARITHMETIC_OVERFLOW
Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead.
The FILTER expression <filterExpr> in an aggregate function is invalid.
For more details see INVALID_AGGREGATE_FILTER
The index <indexValue> is out of bounds. The array has <arraySize> elements. Use the SQL function get() to tolerate accessing element at invalid index and return NULL instead. If necessary set <ansiConfig> to "false" to bypass this error.
For more details see INVALID_ARRAY_INDEX
The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing element at invalid index and return NULL instead. If necessary set <ansiConfig> to "false" to bypass this error.
For more details see INVALID_ARRAY_INDEX_IN_ELEMENT_AT
Syntax error in the attribute name: <name>. Check that backticks appear in pairs, a quoted string is a complete name part and use a backtick only inside quoted name parts.
The 0-indexed bitmap position <bitPosition> is out of bounds. The bitmap has <bitmapNumBits> bits (<bitmapNumBytes> bytes).
Boolean statement is expected in the condition, but <invalidStatement> was found.
The boundary <boundary> is invalid: <invalidValue>.
For more details see INVALID_BOUNDARY
Cannot use <type> for bucket column. Collated data types are not supported for bucketing.
Invalid bucket file: <path>.
The expected format is ByteString, but was <unsupported> (<class>).
The datasource <datasource> cannot save the column <columnName> because its name contains some characters that are not allowed in file paths. Please, use an alias to rename it.
Column or field <name> is of type <type> while it's required to be <expectedType>.
The value '<confValue>' in the config "<confName>" is invalid.
For more details see INVALID_CONF_VALUE
The column <columnName> for corrupt records must have the nullable STRING type, but got <actualType>.
current_recipient function can only be used in the CREATE VIEW statement or the ALTER VIEW statement to define a share only view in Unity Catalog.
The cursor is invalid.
For more details see INVALID_CURSOR
Unrecognized datetime pattern: <pattern>.
For more details see INVALID_DATETIME_PATTERN
Failed to execute <statement> command because the destination column or variable <colName> has a DEFAULT value <defaultValue>,
For more details see INVALID_DEFAULT_VALUE
Invalid value for delimiter.
For more details see INVALID_DELIMITER_VALUE
Destination catalog of the SYNC command must be within Unity Catalog. Found <catalog>.
System memory <systemMemory> must be at least <minSystemMemory>.
Please increase heap size using the --driver-memory option or "<config>" in Spark configuration.
Options passed <option_list> are forbidden for foreign table <table_name>.
The location name cannot be empty string, but <location> was given.
Found an invalid escape string: <invalidEscape>. The escape string must contain only one character.
EscapeChar should be a string literal of length one, but got <sqlExpr>.
Executor memory <executorMemory> must be at least <minSystemMemory>.
Please increase executor memory using the --executor-memory option or "<config>" in Spark configuration.
Found an invalid expression encoder. Expects an instance of ExpressionEncoder but got <encoderType>. For more information consult '<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html'.
The external type <externalType> is not valid for the type <type> at the expression <expr>.
Can't extract a value from <base>. Need a complex type [STRUCT, ARRAY, MAP] but got <other>.
Cannot extract <field> from <expr>.
Field name should be a non-null string literal, but it's <extraction>.
Field name <fieldName> is invalid: <path> is not a struct.
The format is invalid: <format>.
For more details see INVALID_FORMAT
Valid range for seconds is [0, 60] (inclusive), but the provided value is <secAndMicros>. To avoid this error, use try_make_timestamp, which returns NULL on error.
If you do not want to use the session default timestamp version of this function, use try_make_timestamp_ntz or try_make_timestamp_ltz.
The handle <handle> is invalid.
For more details see INVALID_HANDLE
The input parameter: method, value: <paramValue> is not a valid parameter for http_request because it is not a valid HTTP method.
The input parameter: path, value: <paramValue> is not a valid parameter for http_request because path traversal is not allowed.
The unquoted identifier <ident> is invalid and must be back quoted as: <ident>.
Unquoted identifiers can only contain ASCII letters ('a' - 'z', 'A' - 'Z'), digits ('0' - '9'), and underbar ('_').
Unquoted identifiers must also not start with a digit.
Different data sources and meta stores may impose additional restrictions on valid identifiers.
The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).
Invalid inline table.
For more details see INVALID_INLINE_TABLE
Error parsing '<input>' to interval. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format.
For more details see INVALID_INTERVAL_FORMAT
Cannot add an interval to a date because its microseconds part is not 0. If necessary set <ansiConfig> to "false" to bypass this error.
Invalid inverse distribution function <funcName>.
For more details see INVALID_INVERSE_DISTRIBUTION_FUNCTION
<fieldName> is not a valid identifier of Java and cannot be used as field name <walkedTypePath>.
Invalid join type in joinWith: <joinType>.
Failed to convert the JSON string '<invalidType>' to a data type. Please enter a valid data type.
Collations can only be applied to string types, but the JSON data type is <jsonType>.
Detected an invalid type of a JSON record while inferring a common schema in the mode <failFastMode>. Expected a STRUCT type, but found <invalidType>.
Cannot convert JSON root field to target Spark type.
Input schema <jsonSchema> can only contain STRING as a key type for a MAP.
The value of the config "<bufferSizeConfKey>" must be less than 2048 MiB, but got <bufferSizeConfValue> MiB.
The usage of the label <labelName> is invalid.
For more details see INVALID_LABEL_USAGE
Invalid lambda function call.
For more details see INVALID_LAMBDA_FUNCTION_CALL
The <joinType> JOIN with LATERAL correlation is not allowed because an OUTER subquery cannot correlate to its join partner. Remove the LATERAL correlation or use an INNER JOIN, or LEFT OUTER JOIN instead.
The limit like expression <expr> is invalid.
For more details see INVALID_LIMIT_LIKE_EXPRESSION
The provided non absolute path <path> can not be qualified. Please update the path to be a valid dbfs mount location.
The operator expects a deterministic expression, but the actual expression is <sqlExprs>.
Numeric literal <rawStrippedQualifier> is outside the valid range for <typeName> with minimum value of <minValue> and maximum value of <maxValue>. Please adjust the value accordingly.
Invalid observed metrics.
For more details see INVALID_OBSERVED_METRICS
Invalid options:
For more details see INVALID_OPTIONS
The group aggregate pandas UDF <functionList> cannot be invoked together with as other, non-pandas aggregate functions.
An invalid parameter mapping was provided:
For more details see INVALID_PARAMETER_MARKER_VALUE
The value of parameter(s) <parameter> in <functionName> is invalid:
For more details see INVALID_PARAMETER_VALUE
Cannot use <type> for partition column.
The partition command is invalid.
For more details see INVALID_PARTITION_OPERATION
Failed to cast value <value> to data type <dataType> for partition column <columnName>. Ensure the value matches the expected data type for this partition column.
Pipeline id <pipelineId> is not valid.
A pipeline id should be a UUID in the format of 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
Privilege <privilege> is not valid for <securable>.
<key> is an invalid property key, please use quotes, e.g. SET <key>=<value>.
<value> is an invalid property value, please use quotes, e.g. SET <key>=<value>
The column name <columnName> is invalid because it is not qualified with a table name or consists of more than 4 name parts.
Parameterized query must either use positional, or named parameters, but not both.
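For instance, a named-parameter sketch in PySpark (assuming Spark 3.4+ or a recent Databricks Runtime, where spark.sql accepts an args mapping; the query is illustrative):

```python
# All parameter markers are named (:lo, :hi); mixing these with positional
# "?" markers in the same query raises the error above.
spark.sql(
    "SELECT id FROM range(10) WHERE id BETWEEN :lo AND :hi",
    args={"lo": 2, "hi": 5},
).show()
```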
Could not perform regexp_replace for source = "<source>", pattern = "<pattern>", replacement = "<replacement>" and position = <position>.
Expected format is 'RESET' or 'RESET key'. If you want to include special characters in key, please use quotes, e.g., RESET key.
COPY INTO credentials must include AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN.
The specified save mode <mode> is invalid. Valid save modes include "append", "overwrite", "ignore", "error", "errorifexists", and "default".
The input schema <inputSchema> is not a valid schema string.
For more details see INVALID_SCHEMA
<name> is not a valid name for tables/schemas. Valid names only contain alphabet characters, numbers and _.
Unity catalog does not support <name> as the default file scheme.
Invalid secret lookup:
For more details see INVALID_SECRET_LOOKUP
Expected format is 'SET', 'SET key', or 'SET key=value'. If you want to include special characters in key, or include semicolon in value, please use backquotes, e.g., SET key=value.
The <sharedObjectType> alias name must be of the form "schema.name".
The singleVariantColumn option cannot be used if there is also a user specified schema.
Source catalog must not be within Unity Catalog for the SYNC command. Found <catalog>.
The argument <name> of sql() is invalid. Consider to replace it either by a SQL literal or by collection constructor functions such as map(), array(), struct().
Invalid SQL syntax:
For more details see INVALID_SQL_SYNTAX
Invalid staging path in staging <operation> query: <path>
The INTO clause of EXECUTE IMMEDIATE is only valid for queries but the given statement is not a query: <sqlString>.
The statement or clause: <operation> is not valid.
Invalid subquery:
For more details see INVALID_SUBQUERY_EXPRESSION
Cannot create the persistent object <objName> of the type <obj> because it references to the temporary object <tempObjName> of the type <tempObj>. Please make the temporary object <tempObjName> persistent, or make the persistent object <objName> temporary.
The provided timestamp <timestamp> doesn't match the expected syntax <format>.
The timezone: <timeZone> is invalid. The timezone must be either a region-based zone ID or a zone offset. Region IDs must have the form 'area/city', such as 'America/Los_Angeles'. Zone offsets must be in the format '(+|-)HH', '(+|-)HH:mm' or '(+|-)HH:mm:ss', e.g '-08' , '+01:00' or '-13:33:33', and must be in the range from -18:00 to +18:00. 'Z' and 'UTC' are accepted as synonyms for '+00:00'.
Cannot specify both version and timestamp when time travelling the table.
The time travel timestamp expression <expr>
is invalid.
For more details see INVALID_TIME_TRAVEL_TIMESTAMP_EXPR
The value of the typed literal <valueType>
is invalid: <value>
.
Function <funcName>
does not implement a ScalarFunction or AggregateFunction.
<command> <supportedOrNot>
the source table is in Hive Metastore and the destination table is in Unity Catalog.
The url is invalid: <url>
. Use try_parse_url
to tolerate invalid URL and return NULL
instead.
Invalid usage of <elem>
in <prettyName>
.
Invalid UTF8 byte sequence found in string: <str>
.
Input <uuidInput>
is not a valid UUID.
The UUID should be in the format of 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
Please check the format of the UUID.
Invalid variable declaration.
For more details see INVALID_VARIABLE_DECLARATION
Variable type must be string type but got <varType>
.
The variant value <value>
cannot be cast into <dataType>
. Please use try_variant_get
instead.
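For example, a sketch assuming a VARIANT field that cannot be cast to the requested type:
  SELECT variant_get(parse_json('{"a": "abc"}'), '$.a', 'int');      -- raises this error
  SELECT try_variant_get(parse_json('{"a": "abc"}'), '$.a', 'int');  -- returns NULL instead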
Invalid variant.
For more details see INVALID_VARIANT_FROM_PARQUET
The path <path>
is not a valid variant extraction path in <functionName>
.
A valid path should start with $
and is followed by zero or more segments like [123]
, .name
, ['name']
, or ["name"]
.
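For example, with a hypothetical VARIANT column v in table t:
  SELECT variant_get(v, '$.name') FROM t;               -- object field
  SELECT variant_get(v, '$[0]["first name"]') FROM t;   -- array element, then a quoted field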
The schema <schema>
is not a valid variant shredding schema.
The WHERE
condition <condition>
contains invalid expressions: <expressionList>
.
Rewrite the query to avoid window functions, aggregate functions, and generator functions in the WHERE
clause.
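For example, an aggregate predicate belongs in HAVING rather than WHERE (table and column names are hypothetical):
  -- Raises this error
  SELECT dept FROM employees WHERE count(*) > 10 GROUP BY dept;
  -- Rewritten
  SELECT dept FROM employees GROUP BY dept HAVING count(*) > 10;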
Cannot specify ORDER BY
or a window frame for <aggFunc>
.
The data source writer has generated an invalid number of commit messages. Expected exactly one writer commit message from each task, but received <detail>
.
The requested write distribution is invalid.
For more details see INVALID_WRITE_DISTRIBUTION
Failed to execute <command>
.
The join condition <joinCondition>
has the invalid type <conditionType>
, expected "BOOLEAN
".
Some data may have been lost because it is no longer available in Kafka;
either the data was aged out by Kafka or the topic may have been deleted before all the data in the
topic was processed.
If you don't want your streaming query to fail in such cases, set the source option failOnDataLoss to false.
Reason:
For more details see KAFKA_DATA_LOSS
Could not read until the desired sequence number <endSeqNum>
for shard <shardId>
in
kinesis stream <stream>
with consumer mode <consumerMode>
. The query will fail due to
potential data loss. The last read record was at sequence number <lastSeqNum>
.
This can happen if the data with endSeqNum has already been aged out, or the Kinesis stream was
deleted and reconstructed with the same name. The failure behavior can be overridden
by setting spark.databricks.kinesis.failOnDataLoss to false in spark configuration.
For kinesis stream <streamId>
, the previously registered EFO consumer <consumerId>
of the stream has been deleted.
Restart the query so that a new consumer will be registered.
For shard <shard>
, the previous call to the subscribeToShard API was within 5 seconds of the next call.
Restart the query after 5 seconds or more.
The minimum fetched shardId from Kinesis (<fetchedShardId>
)
is less than the minimum tracked shardId (<trackedShardId>
).
This is unexpected and occurs when a Kinesis stream is deleted and recreated with the same name,
and a streaming query using this Kinesis stream is restarted using an existing checkpoint location.
Restart the streaming query with a new checkpoint location, or create a stream with a new name.
Kinesis polling mode is unsupported.
For shard <shard>
, the last record read from Kinesis in previous fetches has sequence number <lastSeqNum>
,
which is greater than the record read in current fetch with sequence number <recordSeqNum>
.
This is unexpected and can happen when the start position of retry or next fetch is incorrectly initialized, and may result in duplicate records downstream.
To read from Kinesis Streams with consumer configurations (consumerName
, consumerNamePrefix
, or registeredConsumerId
), consumerMode
must be efo
.
To read from Kinesis Streams with registered consumers, you must specify both the registeredConsumerId
and registeredConsumerIdType
options.
To read from Kinesis Streams, you must configure either (but not both) of the streamName
or streamARN
options as a comma-separated list of stream names/ARNs.
To read from Kinesis Streams with registered consumers, do not configure consumerName
or consumerNamePrefix
options as they will not take effect.
The number of registered consumer ids should be equal to the number of Kinesis streams but got <numConsumerIds>
consumer ids and <numStreams>
streams.
The registered consumer <consumerId>
provided cannot be found for streamARN <streamARN>
. Verify that you have registered the consumer or do not provide the registeredConsumerId
option.
The registered consumer type <consumerType>
is invalid. It must be either name
or ARN
.
Kryo serialization failed: <exceptionMsg>
. To avoid this, increase "<bufferSizeConfKey>
" value.
Begin label <beginLabel>
does not match the end label <endLabel>
.
The label <label>
already exists. Choose another name or rename the existing label.
LOAD DATA input path does not exist: <path>
.
LOCAL
must be used together with the schema of file
, but got: <actualSchema>
.
Cannot name the managed table as <identifier>
, as its associated location <location>
already exists. Please pick a different table name, or remove the existing location first.
Some of the partitions in the Kafka topic(s) have been lost while running a query with Trigger.AvailableNow. The error could be transient; restart your query and report if you still see the same issue.
topic-partitions for latest offset: <tpsForLatestOffset>
, topic-partitions for end offset: <tpsForEndOffset>
Malformed Avro messages are detected in message deserialization. Parse Mode: <mode>
. To process malformed Avro messages as null results, try setting the option 'mode' to 'PERMISSIVE
'.
Invalid value found when performing <function>
with <charset>
Malformed CSV record: <badRecord>
Malformed records are detected in record parsing: <badRecord>
.
Parse Mode: <failFastMode>
. To process malformed records as null results, try setting the option 'mode' to 'PERMISSIVE
'.
For more details see MALFORMED_RECORD_IN_PARSING
Variant binary is malformed. Please check the data source is valid.
Create managed table with storage credential is not supported.
Cannot <refreshType>
the materialized view because it predates having a pipelineId. To enable <refreshType>
please drop and recreate the materialized view.
The materialized view operation <operation>
is not allowed:
For more details see MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED
Output expression <expression>
in a materialized view must be explicitly aliased.
materialized view <name>
could not be created with a streaming query. Please use CREATE
[OR REFRESH
] <st>
or remove the STREAM
keyword from your FROM
clause to turn this relation into a batch query instead.
Operation <operation>
is not supported on materialized views for this version.
Cannot create the new variable <variableName>
because the number of variables in the session exceeds the maximum allowed number (<maxNumVariables>
).
maxRecordsPerFetch needs to be a positive integer less than or equal to <kinesisRecordLimit>
The ON
search condition of the MERGE
statement matched a single row from the target table with multiple rows of the source table.
This could result in the target row being operated on more than once with an update or delete operation and is not allowed.
There must be at least one WHEN
clause in a MERGE
statement.
METRIC CONSTRAINT
is not enabled.
Provided value "<argValue>
" is not supported by argument "<argName>
" for the METRIC_STORE
table function.
For more details see METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR
Metric Store routine <routineName>
is currently disabled in this environment.
<table>
is not supported for migrating to managed table because it is not a <tableKind>
table.
Kafka data source in Trigger.AvailableNow should provide the same topic partitions in pre-fetched offset to end offset for each microbatch. The error could be transient - restart your query, and report if you still see the same issue.
topic-partitions for pre-fetched offset: <tpsForPrefetched>
, topic-partitions for end offset: <tpsForEndOffset>
.
The non-aggregating expression <expression>
is based on columns which are not participating in the GROUP BY
clause.
Add the columns or the expression to the GROUP BY
, aggregate the expression, or use <expressionAnyValue>
if you do not care which of the values within a group is returned.
For more details see MISSING_AGGREGATION
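For example, with hypothetical tables and columns:
  -- Raises this error: name is neither grouped nor aggregated
  SELECT dept, name, count(*) FROM employees GROUP BY dept;
  -- Possible fixes
  SELECT dept, name, count(*) FROM employees GROUP BY dept, name;
  SELECT dept, any_value(name), count(*) FROM employees GROUP BY dept;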
Missing clause <clauses>
for operation <operation>
. Please add the required clauses.
Connections of type '<connectionType>
' must include the following option(s): <requiredOptions>
.
Database name is not specified in the v1 session catalog. Please ensure to provide a valid database name when interacting with the v1 catalog.
The query does not include a GROUP BY
clause. Add GROUP BY
or turn it into window functions using OVER clauses.
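For example (names are hypothetical):
  -- Raises this error
  SELECT dept, count(*) FROM employees;
  -- Either add GROUP BY ...
  SELECT dept, count(*) FROM employees GROUP BY dept;
  -- ... or use a window function with an OVER clause
  SELECT dept, count(*) OVER (PARTITION BY dept) FROM employees;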
CHECK
constraint must have a name.
Parameter <parameterName>
is required for Kafka, but is not specified in <functionName>
.
Parameter <parameterName>
is required, but is not specified in <functionName>
.
The operation has timed out, but no timeout duration is configured. To set a processing time-based timeout, use 'GroupState.setTimeoutDuration()' in your 'mapGroupsWithState' or 'flatMapGroupsWithState' operation. For event-time-based timeout, use 'GroupState.setTimeoutTimestamp()' and define a watermark using 'Dataset.withWatermark()'.
Window specification is not defined in the WINDOW
clause for <windowName>
. For more information about WINDOW
clauses, please refer to '<docroot>
/sql-ref-syntax-qry-select-window.html'.
Modifying built-in catalog <catalogName>
is not supported.
Databricks Delta does not support multiple input paths in the load() API.
paths: <pathList>
. To build a single DataFrame by loading
multiple paths from the same Delta table, please load the root path of
the Delta table with the corresponding partition filters. If the multiple paths
are from different Delta tables, please use Dataset's union()/unionByName() APIs
to combine the DataFrames generated by separate load() API calls.
Found at least two matching constraints with the given condition.
<clause1>
and <clause2>
cannot coexist in the same SQL pipe operator using '|>'. Please separate the multiple result clauses into separate pipe operators and then retry the query again.
Cannot specify time travel in both the time travel clause and options.
Detected multiple data sources with the name <provider> (<sourceNames>
). Please specify the fully qualified class name or remove <externalSource>
from the classpath.
The expression <expr>
does not support more than one source.
Not allowed to implement multiple UDF interfaces, UDF class <className>
.
Mutually exclusive clauses or options <clauses>
. Please remove one of these clauses.
The input query expects a <expectedType>
, but the underlying table is a <givenType>
.
Named parameters are not supported for function <functionName>
; please retry the query with positional arguments to the function call instead.
Cannot call function <functionName>
because named argument references are not supported. In this case, the named argument reference was <argument>
.
Cannot call function <functionName>
because named argument references are not enabled here.
In this case, the named argument reference was <argument>
.
Set "spark.sql.allowNamedFunctionArguments" to "true" to turn on feature.
Cannot create namespace <nameSpaceName>
because it already exists.
Choose a different name, drop the existing namespace, or add the IF NOT EXISTS
clause to tolerate pre-existing namespace.
Cannot drop a namespace <nameSpaceNameName>
because it contains objects.
Use DROP NAMESPACE
… CASCADE
to drop the namespace and all its objects.
The namespace <nameSpaceName>
cannot be found. Verify the spelling and correctness of the namespace.
If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.
To tolerate the error on drop use DROP NAMESPACE IF EXISTS
.
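For example, a sketch of the suggested remedies with a hypothetical namespace:
  CREATE NAMESPACE IF NOT EXISTS analytics;
  DROP NAMESPACE IF EXISTS analytics CASCADE;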
Native request failed. requestId: <requestId>
, cloud: <cloud>
, operation: <operation>
request: [https: <https>
, method = <method>
, path = <path>
, params = <params>
, host = <host>
, headers = <headers>
, bodyLen = <bodyLen>
],
error: <error>
Native XML Data Source is not enabled in this cluster.
Found a negative value in <frequencyExpression>
: <negativeValue>
, but expected a positive integral value.
It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.
Nested EXECUTE IMMEDIATE
commands are not allowed. Please ensure that the SQL query provided (<sqlString>
) does not contain another EXECUTE IMMEDIATE
command.
Field(s) <nonExistFields>
do(es) not exist. Available fields: <fieldNames>
The function <funcName>
requires the parameter <paramName>
to be a foldable expression of the type <paramType>
, but the actual argument is non-foldable.
When there are more than one MATCHED
clauses in a MERGE
statement, only the last MATCHED
clause can omit the condition.
When there are more than one NOT MATCHED BY SOURCE
clauses in a MERGE
statement, only the last NOT MATCHED BY SOURCE
clause can omit the condition.
When there are more than one NOT MATCHED [BY TARGET
] clauses in a MERGE
statement, only the last NOT MATCHED [BY TARGET
] clause can omit the condition.
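For example, in a sketch with hypothetical tables, every WHEN MATCHED clause except the last one carries a condition:
  MERGE INTO target t
  USING source s
  ON t.id = s.id
  WHEN MATCHED AND s.op = 'delete' THEN DELETE
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *;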
Literal expressions required for pivot values, found <expression>
.
PARTITION
clause cannot contain the non-partition column: <columnName>
.
Window function is not supported in <windowFunc>
(as column <columnName>
) on streaming DataFrames/Datasets.
Structured Streaming only supports time-window aggregation using the WINDOW
function. (window specification: <windowSpec>
)
Not allowed in the FROM
clause:
For more details see NOT_ALLOWED_IN_FROM
Not allowed in the pipe WHERE
clause:
For more details see NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE
The expression <expr>
used for the routine or clause <name>
must be a constant STRING
which is NOT NULL
.
For more details see NOT_A_CONSTANT_STRING
Operation <operation>
is not allowed for <tableIdentWithDB>
because it is not a partitioned table.
<functionName>
appears as a scalar expression here, but the function was defined as a table function. Please update the query to move the function call into the FROM
clause, or redefine <functionName>
as a scalar function instead.
<functionName>
appears as a table function here, but the function was defined as a scalar function. Please update the query to move the function call outside the FROM
clause, or redefine <functionName>
as a table function instead.
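For example, with hypothetical user-defined functions:
  -- A table function belongs in the FROM clause
  SELECT * FROM my_table_func(10);
  -- A scalar function is called as an expression outside the FROM clause
  SELECT my_scalar_func(10);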
NULL
value appeared in non-nullable field: <walkedTypePath>
If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (such as java.lang.Integer instead of int/scala.Int).
Assigning a NULL
is not allowed here.
For more details see NOT_NULL_CONSTRAINT_VIOLATION
ALTER TABLE ALTER
/CHANGE COLUMN
is not supported for changing <table>
's column <originName>
with type <originType>
to <newName>
with type <newType>
.
<cmd>
is not supported for v2 tables.
<cmd>
is not supported, if you want to enable it, please set "spark.sql.catalogImplementation" to "hive".
Not supported command in JDBC catalog:
For more details see NOT_SUPPORTED_IN_JDBC_CATALOG
<operation>
is not supported on a SQL <endpoint>
.
<operation>
is not supported on serverless compute.
Unresolved encoder expected, but <attr>
was found.
Can't determine the default value for <colName>
since it is not nullable and it has no default value.
No handler for UDAF '<functionName>
'. Use sparkSession.udf.register(…) instead.
df.mergeInto needs to be followed by at least one of whenMatched/whenNotMatched/whenNotMatchedBySource.
SQLSTATE: none assigned
No parent external location was found for path '<path>
'. Please create an external location on one of the parent paths and then retry the query or command again.
Cannot find <catalystFieldPath>
in Protobuf schema.
SQLSTATE: none assigned
No storage location was found for table '<tableId>
' when generating table credentials. Please verify the table type and the table location URL and then retry the query or command again.
Catalog '<catalog>
' was not found. Please verify the catalog name and then retry the query or command again.
SQLSTATE: none assigned
The clean room '<cleanroom>
' does not exist. Please verify that the clean room name is spelled correctly and matches the name of a valid existing clean room and then retry the query or command again.
SQLSTATE: none assigned
The external location '<externalLocation>
' does not exist. Please verify that the external location name is correct and then retry the query or command again.
SQLSTATE: none assigned
The metastore was not found. Please ask your account administrator to assign a metastore to the current workspace and then retry the query or command again.
SQLSTATE: none assigned
The share provider '<providerName>
' does not exist. Please verify the share provider name is spelled correctly and matches the name of a valid existing provider name and then retry the query or command again.
SQLSTATE: none assigned
The recipient '<recipient>
' does not exist. Please verify that the recipient name is spelled correctly and matches the name of a valid existing recipient and then retry the query or command again.
SQLSTATE: none assigned
The share '<share>
' does not exist. Please verify that the share name is spelled correctly and matches the name of a valid existing share and then retry the query or command again.
SQLSTATE: none assigned
The storage credential '<storageCredential>
' does not exist. Please verify that the storage credential name is spelled correctly and matches the name of a valid existing storage credential and then retry the query or command again.
SQLSTATE: none assigned
The user '<userName>
' does not exist. Please verify that the user to whom you grant permission or alter ownership is spelled correctly and matches the name of a valid existing user and then retry the query or command again.
UDF class <className>
doesn't implement any UDF interface.
Column or field <name>
is nullable while it's required to be non-nullable.
Row ID attributes cannot be nullable: <nullableRowIdAttrs>
.
Data source read/write option <option>
cannot have null value.
Cannot use null as map key.
Execute immediate requires a non-null variable as the query string, but the provided variable <varName>
is null.
The value <value>
cannot be interpreted as a numeric since it has more than 38 digits.
For more details see NUMERIC_VALUE_OUT_OF_RANGE
<operator>
can only be performed on inputs with the same number of columns, but the first input has <firstNumColumns>
columns and the <invalidOrdinalNum>
input has <invalidNumColumns>
columns.
Number of given aliases does not match number of output columns.
Function name: <funcName>
; number of aliases: <aliasesNum>
; number of output columns: <outColsNum>
.
No custom identity claim was provided.
Calling function <functionName>
is not supported in this <location>
; <supportedFunctions>
supported here.
SQL operation <operation>
is only supported on Databricks SQL connectors with Unity Catalog support.
Operation has been canceled.
Operation <operation>
requires Unity Catalog enabled.
<plan>
is not supported in read-only session mode.
ORDER BY
position <index>
is not in select list (valid range is [1, <size>
]).
Unable to create a Parquet converter for the data type <dataType>
whose Parquet type is <parquetType>
.
For more details see PARQUET_CONVERSION_FAILURE
Illegal Parquet type: <parquetType>
.
Unrecognized Parquet type: <field>
.
Parquet type not yet supported: <parquetType>
.
Syntax error, unexpected empty statement.
The function <funcName>
doesn't support the <mode>
mode. Acceptable modes are PERMISSIVE
and FAILFAST
.
Syntax error at or near <error> <hint>
.
Cannot ADD or RENAME
TO partition(s) <partitionList>
in table <tableName>
because they already exist.
Choose a different name, drop the existing partition, or add the IF NOT EXISTS
clause to tolerate a pre-existing partition.
The partition(s) <partitionList>
cannot be found in table <tableName>
.
Verify the partition specification and table name.
To tolerate the error on drop use ALTER TABLE
… DROP IF EXISTS PARTITION
.
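For example, with a hypothetical partitioned table:
  ALTER TABLE sales ADD IF NOT EXISTS PARTITION (year = 2024);
  ALTER TABLE sales DROP IF EXISTS PARTITION (year = 2020);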
Partition column <column>
not found in schema <schema>
. Please provide the existing column for partitioning.
Partition location <locationPath>
already exists in table <tableName>
.
Failed to execute the ALTER TABLE SET PARTITION LOCATION
statement, because the
partition location <location>
is not under the table directory <table>
.
To fix it, please set the location of partition to a subdirectory of <table>
.
<action>
is not allowed on table <tableName>
since storing partition metadata is not supported in Unity Catalog.
Number of values (<partitionNumber>
) did not match schema size (<partitionSchemaSize>
): values are <partitionValues>
, schema is <partitionSchema>
, file path is <urlEncodedPath>
.
Please re-materialize the table or contact the owner.
The expression <expression>
must be inside 'partitionedBy'.
Path <outputPath>
already exists. Set mode as "overwrite" to overwrite the existing path.
Path does not exist: <path>
.
Deserializing the Photon protobuf plan requires at least <size>
bytes, which exceeds the
limit of <limit>
bytes. This could be due to a very large plan or the presence of a very
wide schema. Try to simplify the query, remove unnecessary columns, or disable Photon.
The serialized Photon protobuf plan has size <size>
bytes, which exceeds the limit of
<limit>
bytes. The serialized size of data types in the plan is <dataTypeSize>
bytes.
This could be due to a very large plan or the presence of a very wide schema.
Consider rewriting the query to remove unwanted operations and columns or disable Photon.
Non-grouping expression <expr>
is provided as an argument to the |> AGGREGATE
pipe operator but does not contain any aggregate function; please update it to include an aggregate function and then retry the query again.
Aggregate function <expr>
is not allowed when using the pipe operator |> <clause>
clause; please use the pipe operator |> AGGREGATE
clause instead.
Invalid pivot value '<value>
': value data type <valueType>
does not match pivot column data type <pivotType>
.
Procedure <procedureName>
expects <expected>
arguments, but <actual>
were provided.
CREATE PROCEDURE
with an empty routine definition is not allowed.
The parameter <parameterName>
is defined with parameter mode <parameterMode>
. OUT and INOUT
parameter cannot be omitted when invoking a routine and therefore do not support a DEFAULT
expression. To proceed, remove the DEFAULT
clause or change the parameter mode to IN
.
Stored procedure is not supported
Stored procedure is not supported with Hive Metastore. Please use Unity Catalog instead.
Could not find dependency: <dependencyName>
.
Error reading Protobuf descriptor file at path: <filePath>
.
Searching for <field>
in Protobuf schema at <protobufSchema>
gave <matchSize>
matches. Candidates: <matches>
.
Found <field>
in Protobuf schema but there is no match in the SQL schema.
Type mismatch encountered for field: <field>
.
Java classes are not supported for <protobufFunction>
. Contact Databricks Support about alternate options.
Unable to locate Message <messageName>
in Descriptor.
Cannot call the <functionName>
SQL function because the Protobuf data source is not loaded.
Please restart your job or session with the 'spark-protobuf' package loaded, such as by using the --packages argument on the command line, and then retry your query or command again.
Protobuf type not yet supported: <protobufType>
.
Task in pubsub fetch stage cannot be retried. Partition <partitionInfo>
in stage <stageInfo>
, TID <taskId>
.
<key>
cannot be an empty string.
Invalid key type for PubSub dedup: <key>
.
The option <key>
is not supported by PubSub. It can only be used in testing.
Invalid type for <key>
. Expected type of <key>
to be type <type>
.
Invalid read limit on PubSub stream: <limit>
.
Invalid UnsafeRow to decode to PubSubMessageMetadata, the desired proto schema is: <protoSchema>
. The input UnsafeRow might be corrupted: <unsafeRow>
.
Failed to find complete PubSub authentication information.
Could not find required option: <key>
.
Failed to move raw data checkpoint files from <src>
to destination directory: <dest>
.
PubSub stream cannot be started as there is more than one failed fetch: <failedEpochs>
.
<key>
must be within the following bounds (<min>
, <max>
) exclusive of both bounds.
Shared clusters do not support authentication with instance profiles. Provide credentials to the stream directly using .option().
The PubSub source connector is only available on clusters with spark.speculation
disabled.
An error occurred while trying to create subscription <subId>
on topic <topicId>
. Please check that there are sufficient permissions to create a subscription and try again.
Unable to parse serialized bytes to generate proto.
getOffset is not supported without supplying a limit.
Failed to <action>
Python data source <type>
: <msg>
Failed when Python streaming data source perform <action>
: <msg>
Unable to access referenced table because a previously assigned column mask is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:
For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY
Unable to access referenced table because a previously assigned row level security policy is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:
For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY
<message>
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE
tableName' command in SQL or by recreating the Dataset/DataFrame involved.
The invocation of function <functionName>
has <parameterName>
and <alternativeName>
set, which are aliases of each other. Please set only one of them.
The function <functionName>
required parameter <parameterName>
must be assigned at position <expectedPos>
without the name.
Only TIMESTAMP
/TIMESTAMP_LTZ
/TIMESTAMP_NTZ
types are supported for recipient expiration timestamp.
Found a recursive reference in the Protobuf schema, which cannot be processed by Spark by default: <fieldDescriptor>
. Try setting the option recursive.fields.max.depth
to a value between 1 and 10. Going beyond 10 levels of recursion is not allowed.
Recursive view <viewIdent>
detected (cycle: <newPath>
).
References to DEFAULT
column values are not allowed within the PARTITION
clause.
Can not build a <relationName>
that is larger than 8G.
The remote HTTP request failed with code <errorCode>
, and error message <errorMessage>
Failed to evaluate the <functionName>
SQL function due to inability to parse the JSON result from the remote HTTP response; the error message is <errorMessage>
. Check API documentation: <docUrl>
. Please fix the problem indicated in the error message and retry the query again.
Failed to evaluate the <functionName>
SQL function due to inability to process the unexpected remote HTTP response; the error message is <errorMessage>
. Check API documentation: <docUrl>
. Please fix the problem indicated in the error message and retry the query again.
The remote request failed after retrying <N>
times; the last failed HTTP error code was <errorCode>
and the message was <errorMessage>
Failed to evaluate the <functionName>
SQL function because <errorMessage>
. Check requirements in <docUrl>
. Please fix the problem indicated in the error message and retry the query again.
Failed to rename as <sourcePath>
was not found.
The <clause>
clause may be used at most once per <operation>
operation.
The routine <routineName>
required parameter <parameterName>
has been assigned at position <positionalIndex>
without the name.
Please update the function call to either remove the named argument with <parameterName>
for this parameter or remove the positional
argument at <positionalIndex>
and then try the query again.
Cannot invoke routine <routineName>
because the parameter named <parameterName>
is required, but the routine call did not supply a value. Please update the routine call to supply an argument value (either positionally at index <index>
or by name) and retry the query again.
<sessionCatalog>
requires a single-part namespace, but got <namespace>
.
The 'rescuedDataColumn' DataFrame API reader option is mutually exclusive with the 'singleVariantColumn' DataFrame API option.
Please remove one of them and then retry the DataFrame operation again.
The write contains reserved columns <columnList>
that are used
internally as metadata for Change Data Feed. To write to the table either rename/drop
these columns or disable Change Data Feed on the table by setting
<config>
to false.
The option <option>
has restricted values on Shared clusters for the <source>
source.
For more details see RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED
Cannot create the <newRoutineType> <routineName>
because a <existingRoutineType>
of that name already exists.
Choose a different name, drop or replace the existing <existingRoutineType>
, or add the IF NOT EXISTS
clause to tolerate a pre-existing <newRoutineType>
.
The routine <routineName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP
… IF EXISTS
.
The routine <routineName>
does not support the parameter <parameterName>
specified at position <pos>
.<suggestion>
The function <routineName>
cannot be created because the specified classname '<className>
' is reserved for system use. Please rename the class and try again.
Creating CHECK
constraint on table <tableName>
with row level security policies is not supported.
A <statementType>
statement attempted to assign a row level security policy to a table, but two or more referenced columns had the same name <columnName>
, which is invalid.
Row level security policies for <tableName>
are not supported:
For more details see ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED
Unable to <statementType> <columnName>
from table <tableName>
because it's referenced in a row level security policy. The table owner must remove or alter this policy before proceeding.
MERGE INTO
operations do not support row level security policies in source table <tableName>
.
MERGE INTO
operations do not support writing into table <tableName>
with row level security policies.
This statement attempted to assign a row level security policy to a table, but referenced column <columnName>
had multiple name parts, which is invalid.
Row level security policies are only supported in Unity Catalog.
SHOW PARTITIONS
command is not supported for <format>
tables with row level security policy.
<mode>
clone from table <tableName>
with row level security policy is not supported.
<mode>
clone to table <tableName>
with row level security policy is not supported.
Using a constant as a parameter in a row level security policy is not supported. Please update your SQL command to remove the constant from the row filter definition and then retry the command again.
Failed to execute <statementType>
command because assigning row level security policy is not supported for target data source with table provider: "<provider>
".
More than one row returned by a subquery used as a row.
Found NULL
in a row at the index <index>
, expected a non-NULL
value.
Could not find an id for the rule name "<ruleName>
". Please modify RuleIdCollection.scala if you are adding a new rule.
Permissions not supported on sample databases/tables.
ScalarFunction <scalarFunc>
does not override the method 'produceResult(InternalRow)' with a custom implementation.
ScalarFunction <scalarFunc>
neither implements nor overrides the method 'produceResult(InternalRow)'.
The correlated scalar subquery '<sqlExpr>
' is neither present in GROUP BY
, nor in an aggregate function.
Add it to GROUP BY
using ordinal position or wrap it in first()
(or first_value
) if you don't care which value you get.
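For example, a sketch with hypothetical tables where the correlated scalar subquery is wrapped in first():
  -- Raises this error
  SELECT e.dept, (SELECT b.cap FROM budgets b WHERE b.dept = e.dept), count(*)
  FROM employees e GROUP BY e.dept;
  -- Accepted when any value within the group will do
  SELECT e.dept, first((SELECT b.cap FROM budgets b WHERE b.dept = e.dept)), count(*)
  FROM employees e GROUP BY e.dept;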
More than one row returned by a subquery used as an expression.
Cannot add <scheduleType>
to a table that already has <existingScheduleType>
. Please drop the existing schedule or use ALTER TABLE
… ALTER <scheduleType>
… to alter it.
The schedule period for <timeUnit>
must be an integer value between 1 and <upperBound>
(inclusive). Received: <actual>
.
Cannot create schema <schemaName>
because it already exists.
Choose a different name, drop the existing schema, or add the IF NOT EXISTS
clause to tolerate pre-existing schema.
Cannot drop a schema <schemaName>
because it contains objects.
Use DROP SCHEMA
… CASCADE
to drop the schema and all its objects.
The schema <schemaName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.
To tolerate the error on drop use DROP SCHEMA IF EXISTS
.
Schema from schema registry could not be initialized. <reason>
.
The second argument of <functionName>
function needs to be an integer.
Cannot execute <commandType>
command with one or more non-encrypted references to the SECRET
function; please encrypt the result of each such function call with AES_ENCRYPT
and try the command again
The seed expression <seedExpr>
of the expression <exprWithSeed>
must be foldable.
The server is busy and could not handle the request. Please wait a moment and try again.
SHOW COLUMNS
with conflicting namespaces: <namespaceA>
!= <namespaceB>
.
sortBy must be used together with bucketBy.
Job <jobId>
cancelled <reason>
A CREATE TABLE
without explicit column list cannot specify bucketing information.
Please use the form with explicit column list and specify bucketing information.
Alternatively, allow bucketing information to be inferred by omitting the clause.
Cannot specify both CLUSTER BY
and CLUSTERED BY INTO BUCKETS
.
Cannot specify both CLUSTER BY
and PARTITIONED BY
.
A CREATE TABLE
without explicit column list cannot specify PARTITIONED BY
.
Please use the form with explicit column list and specify PARTITIONED BY
.
Alternatively, allow partitioning to be inferred by omitting the PARTITION BY
clause.
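For example, pick a single layout clause and give an explicit column list (names are hypothetical):
  CREATE TABLE sales (id INT, sale_year INT) CLUSTER BY (sale_year);
  -- or, alternatively
  CREATE TABLE sales (id INT, sale_year INT) PARTITIONED BY (sale_year);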
The SQL config <sqlConf>
cannot be found. Please verify that the config exists.
Transient error while accessing target staging path <path>
, please try again in a few minutes.
Star (*) is not allowed in a select list when GROUP BY
an ordinal position is used.
Failed to perform stateful processor operation=<operationType>
with invalid handle state=<handleState>
.
Failed to perform stateful processor operation=<operationType>
with invalid timeMode=<timeMode>
State variable with name <stateVarName>
has already been defined in the StatefulProcessor.
Cannot use TTL for state=<stateName>
in timeMode=<timeMode>
, use TimeMode.ProcessingTime() instead.
TTL duration must be greater than zero for State store operation=<operationType>
on state=<stateName>
.
Unknown time mode <timeMode>
. Accepted timeMode values are 'none', 'processingTime', and 'eventTime'.
Failed to create column family with unsupported starting character and name=<colFamilyName>
.
Failed to perform column family operation=<operationName>
with invalid name=<colFamilyName>
. Column family name cannot be empty or include leading/trailing spaces or use the reserved keyword=default
Incompatible schema transformation with column family=<colFamilyName>
, oldSchema=<oldSchema>
, newSchema=<newSchema>
.
The handle has not been initialized for this StatefulProcessor.
Please only use the StatefulProcessor within the transformWithState operator.
Incorrect number of ordering ordinals=<numOrderingCols>
for the range scan encoder. The number of ordering ordinals cannot be zero or greater than the number of schema columns.
Incorrect number of prefix columns=<numPrefixCols>
for the prefix scan encoder. The number of prefix columns cannot be zero or greater than or equal to the number of schema columns.
Cannot change <configName>
from <oldConfig>
to <newConfig>
between restarts. Please set <configName>
to <oldConfig>
, or restart with a new checkpoint directory.
The given State Store Provider <inputClass>
does not extend org.apache.spark.sql.execution.streaming.state.StateStoreProvider.
Cannot change <stateVarName>
to <newType>
between query restarts. Please set <stateVarName>
to <oldType>
, or restart with a new checkpoint directory.
Null type ordering column with name=<fieldName>
at index=<index>
is not supported for range scan encoder.
The given State Store Provider <inputClass>
does not extend org.apache.spark.sql.execution.streaming.state.SupportsFineGrainedReplay.
Therefore, it does not support option snapshotStartBatchId or readChangeFeed in state data source.
State store operation=<operationType>
not supported on missing column family=<colFamilyName>
.
Variable size ordering column with name=<fieldName>
at index=<index>
is not supported for range scan encoder.
Static partition column <staticName>
is also specified in the column list.
No committed batch found, checkpoint location: <checkpointLocation>
. Ensure that the query has run and committed any microbatch before stopping.
The options <options>
cannot be specified together. Please specify only one of them.
Failed to read the operator metadata for checkpointLocation=<checkpointLocation>
and batchId=<batchId>
.
Either the file does not exist, or the file is corrupted.
Rerun the streaming query to construct the operator metadata, and report to the corresponding communities or vendors if the error persists.
Failed to read the state schema. Either the file does not exist, or the file is corrupted. options: <sourceOptions>
.
Rerun the streaming query to construct the state schema, and report to the corresponding communities or vendors if the error persists.
Invalid value for source option '<optionName>
':
For more details see STDS_INVALID_OPTION_VALUE
The state does not have any partition. Please double check that the query points to the valid state. options: <sourceOptions>
The offset log for <batchId>
does not exist, checkpoint location: <checkpointLocation>
.
Please specify a batch ID that is available for querying; you can query the available batch IDs using the state metadata data source.
Metadata is not available for offset log for <batchId>
, checkpoint location: <checkpointLocation>
.
The checkpoint seems to be only run with older Spark version(s). Run the streaming query with the recent Spark version, so that Spark constructs the state metadata.
'<optionName>
' must be specified.
Adaptive Query Execution is not supported for stateful operators in Structured Streaming.
Cannot stream from materialized view <viewName>
. Streaming from materialized views is not supported.
Invalid streaming output mode: <outputMode>
.
For more details see STREAMING_OUTPUT_MODE
Streaming real-time mode has the following limitation:
For more details see STREAMING_REAL_TIME_MODE
The streaming stateful operator name does not match the operator in the state metadata. This is likely to happen when a user adds, removes, or changes the stateful operator of an existing streaming query.
Stateful operators in the metadata: [<OpsInMetadataSeq>
]; Stateful operators in current batch: [<OpsInCurBatchSeq>
].
streaming table <tableName>
needs to be refreshed to execute <operation>
.
If the table is created from DBSQL
, please run REFRESH <st>
.
If the table is created by a pipeline in Delta Live Tables, please run a pipeline update.
streaming tables can only be created and refreshed in Delta Live Tables and Databricks SQL Warehouses.
The operation <operation>
is not allowed:
For more details see STREAMING_TABLE_OPERATION_NOT_ALLOWED
streaming table <tableName>
can only be created from a streaming query. Please add the STREAM
keyword to your FROM
clause to turn this relation into a streaming query.
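For example, a sketch with a hypothetical source table:
  CREATE OR REFRESH STREAMING TABLE daily_events AS
  SELECT * FROM STREAM(raw_events);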
Kinesis stream <streamName>
in <region>
not found.
Please start a new query pointing to the correct stream name.
Input row doesn't have expected number of values required by the schema. <expected>
fields are required while <actual>
values are provided.
The sum of the LIMIT
clause and the OFFSET
clause must not be greater than the maximum 32-bit integer value (2,147,483,647) but found limit = <limit>
, offset = <offset>
.
The repair table sync metadata command is only supported for Delta tables.
Source table name <srcTable>
must be same as destination table name <destTable>
.
Support of the clause or keyword: <clause>
has been discontinued in this context.
For more details see SYNTAX_DISCONTINUED
Cannot create table or view <relationName>
because it already exists.
Choose a different name, drop the existing object, add the IF NOT EXISTS
clause to tolerate pre-existing objects, add the OR REPLACE
clause to replace the existing materialized view, or add the OR REFRESH
clause to refresh the existing streaming table.
The table or view <relationName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS
or DROP TABLE IF EXISTS
.
For more details see TABLE_OR_VIEW_NOT_FOUND
Cannot <action>
SQL user-defined function <functionName>
with TABLE
arguments because this functionality is not yet implemented.
Failed to analyze the Python user defined table function: <msg>
Failed to evaluate the table function <functionName>
because its table metadata <requestedMetadata>
, but the function call <invalidFunctionCallProperty>
.
Failed to evaluate the table function <functionName>
because its table metadata was invalid; <reason>
.
There are too many table arguments for the table-valued function.
Only one table argument is allowed, but got: <num>
.
If you want to allow it, please set "spark.sql.allowMultipleTableArguments.enabled" to "true"
Table with ID <tableId>
cannot be found. Verify the correctness of the UUID.
Task failed while writing rows to <path>
.
Cannot create the temporary view <relationName>
because it already exists.
Choose a different name, drop or replace the existing view, or add the IF NOT EXISTS
clause to tolerate pre-existing views.
CREATE TEMPORARY VIEW
or the corresponding Dataset APIs only accept single-part view names, but got: <actualName>
.
Trailing comma detected in SELECT
clause. Remove the trailing comma before the FROM
clause.
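For example:
  SELECT id, name, FROM users;  -- raises this error
  SELECT id, name FROM users;   -- remove the trailing comma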
The trigger interval must be a positive duration that can be converted into whole seconds. Received: <actual>
seconds.
Bucketed tables are not supported in Unity Catalog.
For Unity Catalog, please specify the catalog name explicitly. E.g. SHOW GRANT your.address@email.com ON CATALOG
main.
The command(s): <commandName>
are not supported in Unity Catalog.
For more details see UC_COMMAND_NOT_SUPPORTED
The command(s): <commandName>
are not supported for Unity Catalog clusters in serverless. Use single user or shared clusters instead.
The command(s): <commandName>
are not supported for Unity Catalog clusters in shared access mode. Use single user access mode instead.
The specified credential kind is not supported.
Data source format <dataSourceFormatName>
is not supported in Unity Catalog.
Data source options are not supported in Unity Catalog.
LOCATION
clause must be present for external volume. Please check the syntax 'CREATE EXTERNAL VOLUME
… LOCATION
…' for creating an external volume.
The query failed because it attempted to refer to table <tableName>
but was unable to do so: <failureReason>
. Please update the table <tableName>
to ensure it is in an Active provisioning state and then retry the query again.
Creating table in Unity Catalog with file scheme <schemeName>
is not supported.
Instead, please create a federated data source connection using the CREATE CONNECTION
command for the same table provider, then create a catalog based on the connection with a CREATE FOREIGN CATALOG
command to reference the tables therein.
Hive Metastore Federation view does not support dependencies across multiple catalogs. View <view>
in Hive Metastore Federation catalog must use dependency from hive_metastore or spark_catalog catalog but its dependency <dependency>
is in another catalog <referencedCatalog>
. Please update the dependencies to satisfy this constraint and then retry your query or command again.
Hive Metastore federation is not enabled on this cluster.
Accessing the catalog <catalogName>
is not supported on this cluster
Dependencies of <viewName>
are recorded as <storedDeps>
while being parsed as <parsedDeps>
. This likely occurred through improper use of a non-SQL API. You can repair dependencies in Databricks Runtime by running ALTER VIEW <viewName>
AS <viewText>
.
Nested or empty namespaces are not supported in Unity Catalog.
Non-Unity-Catalog object <name>
can't be referenced in Unity Catalog objects.
Unity Catalog Lakehouse Federation write support is not enabled for provider <provider>
on this cluster.
Managed volume does not accept LOCATION
clause. Please check the syntax 'CREATE VOLUME
…' for creating a managed volume.
Unity Catalog is not enabled on this cluster.
Unity Catalog Query Federation is not enabled on this cluster.
Service credentials are not enabled on this cluster.
Support for Unity Catalog Volumes is not enabled on this instance.
Support for Volume Sharing is not enabled on this instance.
Volume <name>
does not exist. Please use 'SHOW VOLUMES
' to list available volumes.
SQLSTATE: none assigned
Execution of function <fn>
failed
For more details see UDF_ERROR
One or more UDF limits were breached.
For more details see UDF_LIMITS
Exceeded query-wide UDF limit of <maxNumUdfs>
UDFs (limited during public preview). Found <numUdfs>
. The UDFs were: <udfNames>
.
Python worker exited unexpectedly
For more details see UDF_PYSPARK_ERROR
PySpark UDF <udf> (<eval-type>
) is not supported on clusters in Shared access mode.
Execution failed.
For more details see UDF_PYSPARK_USER_CODE_ERROR
Parameter default value is not supported for user-defined <functionType>
function.
Execution of function <fn>
failed.
For more details see UDF_USER_CODE_ERROR
The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF.
Expected <aliasesSize>
aliases, but got <aliasesNames>
.
Please ensure that the number of aliases provided matches the number of columns output by the UDTF.
Failed to evaluate the user-defined table function because its 'analyze' method returned a requested OrderingColumn whose column name expression included an unnecessary alias <aliasName>
; please remove this alias and then try the query again.
Failed to evaluate the user-defined table function because its 'analyze' method returned a requested 'select' expression (<expression>
) that does not include a corresponding alias; please update the UDTF to specify an alias there and then try the query again.
Unable to acquire <requestedBytes>
bytes of memory, got <receivedBytes>
.
Unable to convert SQL type <toType>
to Protobuf type <protobufType>
.
Unable to fetch tables of Hive database: <dbName>
. Error Class Name: <className>
.
Unable to infer schema for <format>
. It must be specified manually.
Unauthorized access:
<report>
Found the unbound parameter: <name>
. Please fix args
and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as map()
, array()
, struct()
.
Found an unclosed bracketed comment. Please append */ at the end of the comment.
Parameter <paramIndex>
of function <functionName>
requires the <requiredType>
type, however <inputSql>
has the type <inputType>
.
The <namedParamKey>
parameter of function <functionName>
requires the <requiredType>
type, however <inputSql>
has the type <inputType>
.<hint>
Unexpected operator <op>
in the CREATE VIEW
statement as a streaming source.
A streaming view query must consist only of SELECT
, WHERE
, and UNION ALL
operations.
Cannot invoke routine <routineName>
because it contains positional argument(s) following the named argument assigned to <parameterName>
; please rearrange them so the positional arguments come first and then retry the query again.
The class <className>
has an unexpected expression serializer. Expects "STRUCT
" or "IF
" which returns "STRUCT
" but found <expr>
.
Encountered <changeType>
during parsing: <unknownFieldBlob>
, which can be fixed by an automatic retry: <isRetryable>
For more details see UNKNOWN_FIELD_EXCEPTION
The invocation of routine <routineName>
contains an unknown positional argument <sqlExpr>
at position <pos>
. This is invalid.
Unknown primitive type with id <id>
was found in a variant value.
Attempting to treat <descriptorName>
as a Message, but it was <containingType>
.
UNPIVOT
requires all given <given>
expressions to be columns when no <empty>
expressions are given. These are not columns: [<expressions>
].
At least one value column needs to be specified for UNPIVOT
, all columns specified as ids.
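For example, a sketch with a hypothetical table where amount is the value column:
  SELECT * FROM quarterly_sales
  UNPIVOT (amount FOR quarter IN (q1, q2, q3, q4));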
Unpivot value columns must share a least common type; some types do not: [<types>
].
All unpivot value columns must have the same size as there are value column names (<names>
).
Cannot invoke routine <routineName>
because the routine call included a named argument reference for the argument named <argumentName>
, but this routine does not include any signature containing an argument with this name. Did you mean one of the following? [<proposal>
].
Unrecognized SQL type - name: <typeName>
, id: <jdbcType>
.
The statistic <stats>
is not recognized. Valid statistics include count
, count_distinct
, approx_count_distinct
, mean
, stddev
, min
, max
, and percentile values. Percentile must be a numeric value followed by '%', within the range 0% to 100%.
Could not resolve <name>
to a table-valued function.
Please make sure that <name>
is defined as a table-valued function and that all required parameters are provided correctly.
If <name>
is not defined, please create the table-valued function before using it.
For more information about defining table-valued functions, please refer to the Apache Spark documentation.
Cannot infer grouping columns for GROUP BY ALL
based on the select clause. Please explicitly specify the grouping columns.
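For example, if GROUP BY ALL cannot infer the grouping columns for a query, spell them out (names are hypothetical):
  SELECT dept, count(*) FROM employees GROUP BY dept;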
A column, variable, or function parameter with name <objectName>
cannot be resolved.
For more details see UNRESOLVED_COLUMN
A field with name <fieldName>
cannot be resolved with the struct-type column <columnPath>
.
For more details see UNRESOLVED_FIELD
Cannot resolve column <objectName>
as a map key. If the key is a string literal, add the single quotes '' around it.
For more details see UNRESOLVED_MAP_KEY
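For example, with a hypothetical map column:
  SELECT properties[color] FROM products;    -- unquoted key raises this error
  SELECT properties['color'] FROM products;  -- quote string-literal keys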
Cannot resolve routine <routineName>
on search path <searchPath>
.
For more details see UNRESOLVED_ROUTINE
USING
column <colName>
cannot be resolved on the <side>
side of the join. The <side>
-side columns: [<suggestion>
].
Cannot resolve variable <variableName>
on search path <searchPath>
.
Unstructured file format <format>
is not supported. Supported file formats are <supportedFormats>
.
Please update the format
from your <expr>
expression to one of the supported formats and then retry the query again.
Unstructured model <model>
is not supported. Supported models are <supportedModels>
.
Please switch to one of the supported models and then retry the query again.
Adding a file is not supported.
For more details see UNSUPPORTED_ADD_FILE
Unsupported arrow type <typeName>
.
The function <funcName>
does not support batch queries.
Cannot call the method "<methodName>
" of the class "<className>
".
For more details see UNSUPPORTED_CALL
The char/varchar type can't be used in the table schema.
If you want Spark to treat them as string type, the same as in Spark 3.0 and earlier, please set "spark.sql.legacy.charVarcharAsString" to "true".
The <clause>
is not supported for <operation>
.
Collation <collationName>
is not supported for:
For more details see UNSUPPORTED_COLLATION
The common ancestor of source path and sourceArchiveDir should be registered with UC.
If you see this error message, it's likely that you register the source path and sourceArchiveDir in different external locations.
Please put them into a single external location.
Constraint clauses <clauses>
are unsupported.
Unsupported constraint type. Only <supportedConstraintTypes>
are supported
Unsupported data source type for direct query on files: <dataSourceType>
Unsupported data type <typeName>
.
The data source "<source>
" cannot be written in the <createMode>
mode. Please use either the "Append" or "Overwrite" mode instead.
The <format>
datasource doesn't support the column <columnName>
of the type <columnType>
.
Cannot create encoder for <dataType>
. Please use a different output data type for your UDF or DataFrame.
DEFAULT
column values are not supported.
For more details see UNSUPPORTED_DEFAULT_VALUE
The deserializer is not supported:
For more details see UNSUPPORTED_DESERIALIZER
Cannot create generated column <fieldName>
with generation expression <expressionStr>
because <reason>
.
A query operator contains one or more unsupported expressions.
Consider rewriting it to avoid window functions, aggregate functions, and generator functions in the WHERE
clause.
Invalid expressions: [<invalidExprSqls>
]
A query parameter contains unsupported expression.
Parameters can either be variables or literals.
Invalid expression: [<invalidExprSql>
]
Expression <sqlExpr>
not supported within a window function.
The feature is not supported:
For more details see UNSUPPORTED_FEATURE
Unsupported user defined function type: <language>
The generator is not supported:
For more details see UNSUPPORTED_GENERATOR
grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup.
<trigger>
with initial position <initialPosition>
is not supported with the Kinesis source
Can't insert into the target.
For more details see UNSUPPORTED_INSERT
Unsupported join type '<typ>
'. Supported join types include: <supported>
.
Creating a managed table <tableName>
using datasource <dataSource>
is not supported. You need to use datasource DELTA
or create an external table using CREATE EXTERNAL TABLE <tableName>
… USING <dataSource>
…
MERGE
operation contains unsupported <condName>
condition.
For more details see UNSUPPORTED_MERGE_CONDITION
The current metric view usage is not supported.
For more details see UNSUPPORTED_METRIC_VIEW_USAGE
Table <tableName>
has a row level security policy or column mask which indirectly refers to another table with a row level security policy or column mask; this is not supported. Call sequence: <callSequence>
Can't overwrite the target that is also being read from.
For more details see UNSUPPORTED_OVERWRITE
Unsupported partition transform: <transform>
. The supported transforms are identity
, bucket
, and clusterBy
. Ensure your transform expression uses one of these.
The save mode <saveMode>
is not supported for:
For more details see UNSUPPORTED_SAVE_MODE
Unsupported SHOW CREATE TABLE
command.
For more details see UNSUPPORTED_SHOW_CREATE_TABLE
The single-pass analyzer cannot process this query or command because it does not yet support <feature>
.
<outputMode>
output mode not supported for <statefulOperator>
on streaming DataFrames/DataSets without watermark.
Unsupported for streaming a view. Reason:
For more details see UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW
Streaming options <options>
are not supported for data source <source>
on a shared cluster. Please confirm that the options are specified and spelled correctly, and check https://docs.databricks.com/en/compute/access-mode-limitations.html#streaming-limitations-and-requirements-for-unity-catalog-shared-access-mode for limitations.
Data source <sink>
is not supported as a streaming sink on a shared cluster.
Data source <source>
is not supported as a streaming source on a shared cluster.
The function <funcName>
does not support streaming. Please remove the STREAM
keyword
<streamReadLimit>
is not supported with the Kinesis source
Unsupported subquery expression:
For more details see UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY
Creating a primary key with timeseries columns is not supported
Creating a primary key with more than one timeseries column <colSeq>
is not supported
<trigger>
is not supported with the Kinesis source
Literals of the type <unsupportedType>
are not supported. Supported types are <supportedTypes>
.
The function <function>
uses the following feature(s) that require a newer version of Databricks runtime: <features>
. Please consult <docLink>
for details.
You're using untyped Scala UDF, which does not have the input type information.
Spark may blindly pass null to the Scala closure with primitive-type argument, and the closure will see the default value of the Java type for the null argument, e.g. udf((x: Int) => x, IntegerType)
, the result is 0 for null input. To get rid of this error, you could:
- use typed Scala UDF APIs (without return type parameter), e.g.
udf((x: Int) => x)
. - use Java UDF APIs, e.g.
udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType)
, if input types are all non primitive. - set "spark.sql.legacy.allowUntypedScalaUDF" to "true" and use this API with caution.
Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason:
For more details see UPGRADE_NOT_SUPPORTED
User defined function is invalid:
For more details see USER_DEFINED_FUNCTIONS
<errorMessage>
The raise_error()
function was used to raise error class: <errorClass>
which expects parameters: <expectedParms>
.
The provided parameters <providedParms>
do not match the expected parameters.
Please make sure to provide all expected parameters.
The raise_error()
function was used to raise an unknown error class: <errorClass>
Cannot create the variable <variableName>
because it already exists.
Choose a different name, or drop or replace the existing variable.
The variable <variableName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VARIABLE IF EXISTS
.
Cannot construct a Variant larger than 16 MiB. The maximum allowed size of a Variant value is 16 MiB.
Failed to build variant because of a duplicate object key <key>
.
Cannot build variant bigger than <sizeLimit>
in <functionName>
.
Please avoid large input strings to this expression (for example, add function call(s) to check the expression size and convert it to NULL
first if it is too big).
Cannot create view <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS
clause to tolerate pre-existing objects.
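For example, a minimal SQL sketch of both options (the view name sales_summary and its query are illustrative, not part of the error):
-- Either tolerate a pre-existing view (hypothetical names) ...
CREATE VIEW IF NOT EXISTS sales_summary AS SELECT id, amount FROM sales;
-- ... or replace the existing definition.
CREATE OR REPLACE VIEW sales_summary AS SELECT id, amount FROM sales;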
The depth of view <viewName>
exceeds the maximum view resolution depth (<maxNestedDepth>
).
Analysis is aborted to avoid errors. If you want to work around this, please try to increase the value of "spark.sql.view.maxNestedViewDepth".
The view <relationName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS
.
Cannot create volume <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS
clause to tolerate pre-existing objects.
<funcName>
function can only be evaluated in an ordered row-based window frame with a single offset: <windowExpr>
.
Window function <funcName>
requires an OVER clause.
WITH CREDENTIAL
syntax is not supported for <type>
.
writeStream
can be called only on streaming Dataset/DataFrame.
Failed to execute the command because DEFAULT
values are not supported when adding new
columns to previously existing Delta tables; please add the column without a default
value first, then run a second ALTER TABLE ALTER COLUMN SET DEFAULT
command to apply
for future inserted rows instead.
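For example, a minimal sketch of the two-step approach (the table and column names below are illustrative):
-- Hypothetical table and column names; adapt them to your schema.
ALTER TABLE my_delta_table ADD COLUMN status STRING;
ALTER TABLE my_delta_table ALTER COLUMN status SET DEFAULT 'active';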
Failed to execute <commandType>
command because it assigned a column DEFAULT
value,
but the corresponding table feature was not enabled. Please retry the command again
after executing ALTER TABLE
tableName SET
TBLPROPERTIES
('delta.feature.allowColumnDefaults' = 'supported').
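For example, assuming a table named my_delta_table, the feature can be enabled before retrying the original command:
-- Enable the column defaults table feature (table name is illustrative).
ALTER TABLE my_delta_table SET TBLPROPERTIES ('delta.feature.allowColumnDefaults' = 'supported');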
The operation <operation>
requires a <requiredType>
. But <objectName>
is a <foundType>
. Use <alternative>
instead.
The <functionName>
requires <expectedNum>
parameters but the actual number is <actualNum>
.
For more details see WRONG_NUM_ARGS
<rowTag>
option is required for reading files in XML format.
XML doesn't support <innerDataType>
as inner type of <dataType>
. Please wrap the <innerDataType>
within a StructType field when using it inside <dataType>
.
Rescued data and wildcard column cannot be simultaneously enabled. Remove the wildcardColumnName option.
ZOrderBy column <columnName>
doesn't exist.
Could not find active SparkSession
Cannot set a new txn as active when one is already active
Failed to add column <colName>
because the name is reserved.
The current operation attempted to add a deletion vector to a table that does not permit the creation of new deletion vectors. Please file a bug report.
All operations that add deletion vectors should set the tightBounds column in statistics to false. Please file a bug report.
Index <columnIndex>
to add column <columnName>
is lower than 0
Cannot add <columnName>
because its parent is not a StructType. Found <other>
Struct not found at position <position>
Please use ALTER TABLE
ADD CONSTRAINT
to add CHECK
constraints.
Found <sqlExpr>
. A generated column cannot use an aggregate expression
Aggregate functions are not supported in the <operation> <predicate>
.
Failed to change the collation of column <column>
because it has a bloom filter index. Please either retain the existing collation or else drop the bloom filter index and then retry the command again to change the collation.
Failed to change the collation of column <column>
because it is a clustering column. Please either retain the existing collation or else change the column to a non-clustering column with an ALTER TABLE
command and then retry the command again to change the collation.
ALTER TABLE CHANGE COLUMN
is not supported for changing column <currentType>
to <newType>
ALTER TABLE CLUSTER BY
is supported only for Delta table with Liquid clustering.
ALTER TABLE CLUSTER BY
cannot be applied to a partitioned table.
Operation not allowed: ALTER TABLE RENAME
TO is not allowed for managed Delta tables on S3, as eventual consistency on S3 may corrupt the Delta transaction log. If you insist on doing so and are sure that there has never been a Delta table with the new name <newName>
before, you can enable this by setting <key>
to be true.
Cannot enable <tableFeature>
table feature using ALTER TABLE SET TBLPROPERTIES
. Please use CREATE
OR REPLACE TABLE CLUSTER BY
to create a Delta table with clustering.
Cannot change data type of <column>
from <from>
to <to>
. This change contains column removals and additions, therefore they are ambiguous. Please make these changes individually using ALTER TABLE
[ADD | DROP | RENAME
] COLUMN
.
Ambiguous partition column <column>
can be <colMatches>
.
CREATE TABLE
contains two different locations: <identifier>
and <location>
.
You can remove the LOCATION
clause from the CREATE TABLE
statement, or set
<config>
to true to skip this check.
Table <table>
does not contain enough records in non-archived files to satisfy specified LIMIT
of <limit>
records.
Found <numArchivedFiles>
potentially archived file(s) in table <table>
that need to be scanned as part of this query.
Archived files cannot be accessed. The current time until archival is configured as <archivalTime>
.
Please adjust your query filters to exclude any archived files.
Operation "<opName>
" is not allowed when the table has enabled change data feed (CDF) and has undergone schema changes using DROP COLUMN
or RENAME COLUMN
.
Cannot drop bloom filter indices for the following non-existent column(s): <unknownColumns>
OutOfMemoryError occurred while writing bloom filter indices for the following column(s): <columnsWithBloomFilterIndices>
.
You can reduce the memory footprint of bloom filter indices by choosing a smaller value for the 'numItems' option, a larger value for the 'fpp' option, or by indexing fewer columns.
Cannot change data type: <dataType>
Cannot change the 'location' of the Delta table using SET TBLPROPERTIES
. Please use ALTER TABLE SET LOCATION
instead.
'provider' is a reserved table property, and cannot be altered.
Cannot create bloom filter indices for the following non-existent column(s): <unknownCols>
Cannot create <path>
Cannot describe the history of a view.
Cannot drop bloom filter index on a non indexed column: <columnName>
Cannot drop the CHECK
constraints table feature.
The following constraints must be dropped first: <constraints>
.
Cannot drop the collations table feature.
Columns with non-default collations must be altered to using UTF8_BINARY first: <colNames>
.
Cannot evaluate expression: <expression>
Expecting a bucketing Delta table but cannot find the bucket spec in the table
Cannot generate code for expression: <expression>
This table is configured to only allow appends. If you would like to permit updates or deletes, use 'ALTER TABLE
<table_name> SET TBLPROPERTIES (<config>
=false)'.
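As a hedged sketch, assuming the configuration referenced in the error is the delta.appendOnly table property (substitute the exact property reported for <config>):
-- Allow updates and deletes again; the table name and property key are illustrative.
ALTER TABLE my_delta_table SET TBLPROPERTIES ('delta.appendOnly' = 'false');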
<Command>
cannot override or unset in-commit timestamp table properties because coordinated commits is enabled in this table and depends on them. Please remove them ("delta.enableInCommitTimestamps", "delta.inCommitTimestampEnablementVersion", "delta.inCommitTimestampEnablementTimestamp") from the TBLPROPERTIES
clause and then retry the command again.
The Delta table configuration <prop>
cannot be specified by the user
<Command>
cannot override coordinated commits configurations for an existing target table. Please remove them ("delta.coordinatedCommits.commitCoordinator-preview", "delta.coordinatedCommits.commitCoordinatorConf-preview", "delta.coordinatedCommits.tableConf-preview") from the TBLPROPERTIES
clause and then retry the command again.
A uri (<uri>
) which can't be turned into a relative path was found in the transaction log.
A path (<path>
) which can't be relativized with the current input found in the
transaction log. Please re-run this as:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog("<userPath>
", true)
and then also run:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog("<path>
")
Cannot rename <currentPath>
to <newPath>
Table <tableName>
cannot be replaced as it does not exist. Use CREATE
OR REPLACE TABLE
to create the table.
Can't resolve column <columnName>
in <schema>
Cannot restore table to version <version>
. Available versions: [<startVersion>
, <endVersion>
].
Cannot restore table to timestamp (<requestedTimestamp>
) as it is before the earliest version available. Please use a timestamp after (<earliestTimestamp>
).
Cannot restore table to timestamp (<requestedTimestamp>
) as it is after the latest version available. Please use a timestamp before (<latestTimestamp>
)
<Command>
cannot set in-commit timestamp table properties together with coordinated commits, because the latter depends on the former and sets the former internally. Please remove them ("delta.enableInCommitTimestamps", "delta.inCommitTimestampEnablementVersion", "delta.inCommitTimestampEnablementTimestamp") from the TBLPROPERTIES
clause and then retry the command again.
Cannot change the location of a path based table.
Cannot set delta.managedDataSkippingStatsColumns on non-DLT table
ALTER
cannot unset coordinated commits configurations. To downgrade a table from coordinated commits, please try again using ALTER
TABLE [table-name] DROP FEATURE 'coordinatedCommits-preview'
.
Cannot update %1$s field %2$s type: update the element by updating %2$s.element
Cannot update %1$s field %2$s type: update a map by updating %2$s.key or %2$s.value
Cannot update <tableName>
field of type <typeName>
Cannot update <tableName>
field <fieldName>
type: update struct by adding, deleting, or updating its fields
Cannot use all columns for partition columns
VACUUM
LITE cannot delete all eligible files as some files are not referenced by the Delta log. Please run VACUUM FULL
.
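For example (the table name is illustrative):
-- Remove all eligible files, including those not referenced by the Delta log.
VACUUM my_delta_table FULL;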
<table>
is a view. Writes to a view are not supported.
Failed to write a value of <sourceType>
type into the <targetType>
type column <columnName>
due to an overflow.
Use try_cast
on the input value to tolerate overflow and return NULL
instead.
If necessary, set <storeAssignmentPolicyFlag>
to "LEGACY
" to bypass this error or set <updateAndMergeCastingFollowsAnsiEnabledFlag>
to true to revert to the old behaviour and follow <ansiEnabledFlag>
in UPDATE
and MERGE
.
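For example, a minimal sketch of tolerating overflow with try_cast (table and column names are illustrative):
-- Values that overflow the target INT column become NULL instead of failing the write.
INSERT INTO target_table (id, amount)
SELECT id, try_cast(amount AS INT) FROM source_table;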
Configuration delta.enableChangeDataFeed cannot be set. Change data feed from Delta is not yet available.
Retrieving table changes between version <start>
and <end>
failed because of an incompatible data schema.
Your read schema is <readSchema>
at version <readVersion>
, but we found an incompatible data schema at version <incompatibleVersion>
.
If possible, please retrieve the table changes using the end version's schema by setting <config>
to endVersion
, or contact support.
Retrieving table changes between version <start>
and <end>
failed because of an incompatible schema change.
Your read schema is <readSchema>
at version <readVersion>
, but we found an incompatible schema change at version <incompatibleVersion>
.
If possible, please query table changes separately from version <start>
to <incompatibleVersion>
- 1, and from version <incompatibleVersion>
to <end>
.
File <filePath>
referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE
statement. This request appears to be targeting Change Data Feed, if that is the case, this error can occur when the change data file is out of the retention period and has been deleted by the VACUUM
statement. For more information, see <faqPath>
Cannot write to table with delta.enableChangeDataFeed set. Change data feed from Delta is not available.
Cannot checkpoint a non-existing table <path>
. Did you manually delete files in the _delta_log directory?
Two paths were provided as the CLONE
target so it is ambiguous which to use. An external
location for CLONE
was provided at <externalLocation>
at the same time as the path
<targetIdentifier>
.
File (<fileName>
) not copied completely. Expected file size: <expectedSize>
, found: <actualSize>
. To continue with the operation by ignoring the file size check set <config>
to false.
Unsupported <mode>
clone source '<name>
', whose format is <format>
.
The supported formats are 'delta', 'iceberg' and 'parquet'.
CLONE
is not supported for Delta table with Liquid clustering for DBR version < 14.0.
CLUSTER BY
is not supported because the following column(s): <columnsWithDataTypes>
don't support data skipping.
The provided clustering columns do not match the existing table's.
- provided:
<providedClusteringColumns>
- existing:
<existingClusteringColumns>
Liquid clustering requires clustering columns to have stats. Couldn't find clustering column(s) '<columns>
' in stats schema:
<schema>
Creating an external table without liquid clustering from a table directory with liquid clustering is not allowed; path: <path>
.
'<operation>
' does not support clustering.
Cannot finish the <phaseOutType>
of the table with <tableFeatureToAdd>
table feature (reason: <reason>
). Please try the OPTIMIZE
command again.
== Error ==
<error>
REPLACE
a Delta table with Liquid clustering with a partitioned table is not allowed.
SHOW CREATE TABLE
is not supported for Delta table with Liquid clustering without any clustering columns.
Transitioning a Delta table with Liquid clustering to a partitioned table is not allowed for operation: <operation>
, when the existing table has non-empty clustering columns.
Please run ALTER TABLE CLUSTER BY
NONE to remove the clustering columns first.
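For example (the table name is illustrative):
-- Remove the clustering columns before converting the table to a partitioned layout.
ALTER TABLE my_delta_table CLUSTER BY NONE;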
Dynamic partition overwrite mode is not allowed for Delta table with Liquid clustering.
OPTIMIZE
command for Delta table with Liquid clustering doesn't support partition predicates. Please remove the predicates: <predicates>
.
OPTIMIZE
command for Delta table with Liquid clustering cannot specify ZORDER BY
. Please remove ZORDER BY (<zOrderBy>
).
CLUSTER BY
for Liquid clustering supports up to <numColumnsLimit>
clustering columns, but the table has <actualNumColumns>
clustering columns. Please remove the extra clustering columns.
It is not allowed to specify CLUSTER BY
when the schema is not defined. Please define schema for table <tableName>
.
Clustering and bucketing cannot both be specified. Please remove CLUSTERED BY INTO BUCKETS
/ bucketBy if you want to create a Delta table with clustering.
Clustering and partitioning cannot both be specified. Please remove PARTITIONED BY
/ partitionBy / partitionedBy if you want to create a Delta table with clustering.
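For example, a minimal sketch of creating a clustered Delta table without partitioning (the table name and columns are illustrative):
-- Use CLUSTER BY instead of PARTITIONED BY when liquid clustering is desired.
CREATE TABLE events (id BIGINT, event_time TIMESTAMP, payload STRING)
CLUSTER BY (event_time);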
Collations are not supported in Delta Lake.
Data skipping is not supported for partition column '<column>
'.
Data skipping is not supported for column '<column>
' of type <type>
.
The max column id property (<prop>
) is not set on a column mapping enabled table.
The max column id property (<prop>
) on a column mapping enabled table is <tableMax>
, which cannot be smaller than the max column id for all fields (<fieldMax>
).
The data type of the column <colName>
was not provided.
Unable to find the column <columnName>
given [<columnList>
]
Unable to find the column '<targetCol>
' of the target table from the INSERT
columns: <colNames>
. INSERT
clause must specify value for all the columns of the target table.
Couldn't find column <columnName>
in:
<tableSchema>
Expected <columnPath>
to be a nested data type, but found <other>
. Was looking for the
index of <column>
in a nested field.
Schema:
<schema>
Struct column <source>
cannot be inserted into a <targetType>
field <targetField>
in <targetTable>
.
Cannot handle commit of table within redirect table state '<state>
'.
The validation of the compaction of path <compactedPath>
to <newPath>
failed: Please file a bug report.
Found nested NullType in column <columName>
which is of <dataType>
. Delta doesn't support writing NullType in complex types.
ConcurrentAppendException: Files were added to <partition>
by a concurrent update. <retryMsg> <conflictingCommit>
Refer to <docLink>
for more details.
ConcurrentDeleteDeleteException: This transaction attempted to delete one or more files that were deleted (for example <file>
) by a concurrent update. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
ConcurrentDeleteReadException: This transaction attempted to read one or more files that were deleted (for example <file>
) by a concurrent update. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
ConcurrentTransactionException: This error occurs when multiple streaming queries are using the same checkpoint to write into this table. Did you run multiple instances of the same streaming query at the same time?<conflictingCommit>
Refer to <docLink>
for more details.
ConcurrentWriteException: A concurrent transaction has written new data since the current transaction read the table. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
There is a conflict from these SET
columns: <columnList>
.
During <command>
, configuration "<configuration>
" cannot be set from the command. Please remove it from the TBLPROPERTIES
clause and then retry the command again.
During <command>
, configuration "<configuration>
" cannot be set from the SparkSession configurations. Please unset it by running spark.conf.unset("<configuration>")
and then retry the command again.
Constraint '<constraintName>
' already exists. Please delete the old constraint first.
Old constraint:
<oldConstraint>
Column <columnName>
has data type <columnType>
and cannot be altered to data type <dataType>
because this column is referenced by the following check constraint(s):
<constraints>
Cannot alter column <columnName>
because this column is referenced by the following check constraint(s):
<constraints>
Cannot drop nonexistent constraint <constraintName>
from table <tableName>
. To avoid throwing an error, provide the parameter IF EXISTS
or set the SQL session configuration <config>
to <confValue>
.
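For example (the table and constraint names are illustrative):
-- Tolerate a missing constraint instead of raising an error.
ALTER TABLE my_delta_table DROP CONSTRAINT IF EXISTS my_constraint;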
Conversion of Merge-On-Read <format>
table is not supported: <path>
, <hint>
Found no partition information in the catalog for table <tableName>
. Have you run "MSCK REPAIR TABLE
" on your table to discover partitions?
Cannot convert Parquet table with collated partition column <colName>
to Delta.
The configuration '<config>
' cannot be set to <mode>
when using CONVERT
TO DELTA
.
Unsupported schema changes found for <format>
table: <path>
, <hint>
CONVERT
TO DELTA
only supports parquet tables, but you are trying to convert a <sourceName>
source: <tableId>
Cannot enable row tracking without collecting statistics.
If you want to enable row tracking, do the following:
- Enable statistics collection by running the command
SET <statisticsCollectionPropertyKey>
= true
- Run
CONVERT TO DELTA
without the NO STATISTICS option.
If you do not want to collect statistics, disable row tracking:
- Deactivate enabling the table feature by default by running the command:
RESET <rowTrackingTableFeatureDefaultKey>
- Deactivate the table property by default by running:
SET <rowTrackingDefaultPropertyKey>
= false
COPY INTO
target must be a Delta table.
You are trying to create an external table <tableName>
from <path>
using Delta, but the schema is not specified when the
input path is empty.
To learn more about Delta, see <docLink>
You are trying to create an external table <tableName>
from %2$s
using Delta, but there is no transaction log present at
%2$s/_delta_log
. Check the upstream job to make sure that it is writing using
format("delta") and that the path is the root of the table.
To learn more about Delta, see <docLink>
Creating path-based Delta table with a different location isn't supported. Identifier: <identifier>
, Location: <location>
Table name or location has to be specified.
The specified schema does not match the existing schema at <path>
.
== Specified ==
<specifiedSchema>
== Existing ==
<existingSchema>
== Differences ==
<schemaDifferences>
If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
Cannot enable <tableFeature>
table feature using TBLPROPERTIES
. Please use CREATE
OR REPLACE TABLE CLUSTER BY
to create a Delta table with clustering.
The specified clustering columns do not match the existing clustering columns at <path>
.
== Specified ==
<specifiedColumns>
== Existing ==
<existingColumns>
The specified partitioning does not match the existing partitioning at <path>
.
== Specified ==
<specifiedColumns>
== Existing ==
<existingColumns>
The specified properties do not match the existing properties at <path>
.
== Specified ==
<specifiedProperties>
== Existing ==
<existingProperties>
Cannot create table ('<tableId>
'). The associated location ('<tableLocation>
') is not empty and also not a Delta table.
Cannot change table metadata because the 'dataChange' option is set to false. Attempted operation: '<op>
'.
File <filePath>
referenced in the transaction log cannot be found. This parquet file may be deleted under Delta's data retention policy.
Default Delta data retention duration: <logRetentionPeriod>
. Modification time of the parquet file: <modificationTime>
. Deletion time of the parquet file: <deletionTime>
. Deleted on Delta version: <deletionVersion>
.
It is invalid to commit files with deletion vectors that are missing the numRecords statistic.
Detected DomainMetadata action(s) for domains <domainNames>
, but DomainMetadataTableFeature is not enabled.
Index <columnIndex>
to drop column is lower than 0
Cannot drop column from a schema with a single column. Schema:
<schema>
File operation '<actionType>
' for path <path>
was specified several times.
It conflicts with <conflictingPath>
.
It is not valid for multiple file operations with the same path to exist in a single commit.
Found duplicate column(s) <coltype>
: <duplicateCols>
Duplicate column names in INSERT
clause
<message>
Please remove duplicate columns before you update your table.
Duplicated data skipping columns found: <columns>
.
Internal error: two DomainMetadata actions within the same transaction have the same domain <domainName>
Could not deserialize the deleted record counts histogram during table integrity verification.
Dynamic partition overwrite mode is specified by session config or write options, but it is disabled by spark.databricks.delta.dynamicPartitionOverwrite.enabled=false
.
Data used in creating the Delta table doesn't have any columns.
No file found in the directory: <directory>
.
Value "<value>
" exceeds char/varchar type length limitation. Failed check: <expr>
.
Failed to cast partition value <value>
to <dataType>
Could not find <newAttributeName>
among the existing target output <targetOutputColumns>
Failed to infer schema from the given list of files.
Failed to merge schema of file <file>
:
<schema>
Could not read footer for file: <currentFile>
Cannot recognize the predicate '<predicate>
'
Expect a full scan of the latest version of the Delta source, but found a historical scan of version <historicalVersion>
Failed to merge fields '<currentField>
' and '<updateField>
'
Unable to operate on this table because the following table features are enabled in metadata but not listed in protocol: <features>
.
Your table schema requires manual enablement of the following table feature(s): <unsupportedFeatures>
.
To do this, run the following command for each of the features listed above:
ALTER TABLE
table_name SET TBLPROPERTIES
('delta.feature.feature_name' = 'supported')
Replace "table_name" and "feature_name" with real values.
Current supported feature(s): <supportedFeatures>
.
Dropping <featureName>
failed due to a failure in checkpoint creation.
Please try again later. If the issue persists, contact Databricks support.
Cannot drop feature because a concurrent transaction modified the table.
Please try the operation again.
<concurrentCommit>
Cannot drop table feature <feature>
because some other features (<dependentFeatures>
) in this table depend on <feature>
.
Consider dropping them first before dropping this feature.
Cannot drop <feature>
from this table because it is not currently present in the table's protocol.
Cannot drop <feature>
because the Delta log contains historical versions that use the feature.
Please wait until the history retention period (<logRetentionPeriodKey>=<logRetentionPeriod>
)
has passed since the feature was last active.
Alternatively, please wait for the TRUNCATE HISTORY
retention period to expire (<truncateHistoryLogRetentionPeriod>
)
and then run:
ALTER TABLE
table_name DROP FEATURE
feature_name TRUNCATE HISTORY
The particular feature does not require history truncation.
Cannot drop <feature>
because dropping this feature is not supported.
Please contact Databricks support.
Cannot drop <feature>
because it is not supported by this Databricks version.
Consider using Databricks with a higher version.
Dropping <feature>
was partially successful.
The feature is now no longer used in the current version of the table. However, the feature
is still present in historical versions of the table. The table feature cannot be dropped
from the table protocol until these historical versions have expired.
To drop the table feature from the protocol, please wait for the historical versions to
expire, and then repeat this command. The retention period for historical versions is
currently configured as <logRetentionPeriodKey>=<logRetentionPeriod>
.
Alternatively, please wait for the TRUNCATE HISTORY
retention period to expire (<truncateHistoryLogRetentionPeriod>
)
and then run:
ALTER TABLE
table_name DROP FEATURE
feature_name TRUNCATE HISTORY
Unable to enable table feature <feature>
because it requires a higher reader protocol version (current <current>
). Consider upgrading the table's reader protocol version to <required>
, or to a version which supports reader table features. Refer to <docLink>
for more information on table protocol versions.
Unable to enable table feature <feature>
because it requires a higher writer protocol version (current <current>
). Consider upgrading the table's writer protocol version to <required>
, or to a version which supports writer table features. Refer to <docLink>
for more information on table protocol versions.
Existing file path <path>
Cannot specify both file list and pattern string.
File path <path>
File <filePath>
referenced in the transaction log cannot be found. This occurs when data has been manually deleted from the file system rather than using the table DELETE
statement. For more information, see <faqPath>
No such file or directory: <path>
File (<path>
) to be rewritten not found among candidate files:
<pathList>
A MapType was found. In order to access the key or value of a MapType, specify one
of:
<key>
or
<value>
followed by the name of the column (only if that column is a struct type).
e.g. mymap.key.mykey
If the column is a basic type, mymap.key or mymap.value is sufficient.
Schema:
<schema>
Column <columnName>
has data type <columnType>
and cannot be altered to data type <dataType>
because this column is referenced by the following generated column(s):
<generatedColumns>
Cannot alter column <columnName>
because this column is referenced by the following generated column(s):
<generatedColumns>
The expression type of the generated column <columnName>
is <expressionType>
, but the column type is <columnType>
Column <currentName>
is a generated column or a column used by a generated column. The data type is <currentDataType>
and cannot be converted to data type <updateDataType>
The validation of IcebergCompatV<version>
has failed.
For more details see DELTA_ICEBERG_COMPAT_VIOLATION
ALTER TABLE ALTER COLUMN
is not supported for IDENTITY
columns.
ALTER TABLE ALTER COLUMN SYNC IDENTITY
is only supported by Delta.
ALTER TABLE ALTER COLUMN SYNC IDENTITY
cannot be called on non IDENTITY
columns.
Providing values for GENERATED ALWAYS
AS IDENTITY
column <colName>
is not supported.
IDENTITY
column step cannot be 0.
IDENTITY
columns are only supported by Delta.
PARTITIONED BY IDENTITY
column <colName>
is not supported.
ALTER TABLE REPLACE COLUMNS
is not supported for table with IDENTITY
columns.
DataType <dataType>
is not supported for IDENTITY
columns.
UPDATE
on IDENTITY
column <colName>
is not supported.
IDENTITY
column cannot be specified with a generated column expression.
Invalid value '<input>
' for option '<name>
', <explain>
The usage of <option>
is not allowed when <operation>
a Delta table.
BucketSpec on Delta bucketed table does not match BucketSpec from metadata. Expected: <expected>
. Actual: <actual>
.
(<setKeys>
) cannot be set to different values. Please only set one of them, or set them to the same value.
Incorrectly accessing an ArrayType. Use arrayname.element.elementname position to
add to an array.
An ArrayType was found. In order to access elements of an ArrayType, specify
<rightName>
instead of <wrongName>
.
Schema:
<schema>
Use getConf()
instead of conf.getConf().
The error typically occurs when the default LogStore implementation, that
is, HDFSLogStore, is used to write into a Delta table on a non-HDFS storage system.
In order to get the transactional ACID guarantees on table updates, you have to use the
correct implementation of LogStore that is appropriate for your storage system.
See <docLink>
for details.
Index <position>
to drop column is equal to or larger than struct length: <length>
Index <index>
to add column <columnName>
is larger than struct length: <length>
Cannot write to '<tableName>
', <columnName>
; target table has <numColumns>
column(s) but the inserted data has <insertColumns>
column(s)
Column <columnName>
is not specified in INSERT
Invalid auto-compact type: <value>
. Allowed values are: <allowed>
.
Invalid bucket count: <invalidBucketCount>
. Bucket count should be a positive number that is power of 2 and at least 8. You can use <validBucketCount>
instead.
Cannot find the bucket column in the partition columns
Interval cannot be null or blank.
CDC range from start <start>
to end <end>
was invalid. End cannot be before start.
Attribute name "<columnName>
" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.
Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.
Invalid column names: <invalidColumnNames>
.
Please use other characters and try again.
Alternatively, enable Column Mapping to keep using these characters.
The target location for CLONE
needs to be an absolute path or table name. Use an
absolute path instead of <path>
.
Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.
Invalid column names: <invalidColumnNames>
.
Column mapping cannot be removed when there are invalid characters in the column names.
Please rename the columns to remove the invalid characters and execute this command again.
Incompatible format detected.
A transaction log for Delta was found at <deltaRootPath>
/_delta_log,
but you are trying to <operation> <path>
using format("<format>
"). You must use
'format("delta")' when reading and writing to a delta table.
To learn more about Delta, see <docLink>
A generated column cannot use a non-existent column or another generated column
Invalid options for idempotent Dataframe writes: <reason>
<interval>
is not a valid INTERVAL
.
The schema for the specified INVENTORY
does not contain all of the required fields. Required fields are: <expectedSchema>
invalid isolation level '<isolationLevel>
'
(<classConfig>
) and (<schemeConfig>
) cannot be set at the same time. Please set only one group of them.
You are trying to create a managed table <tableName>
using Delta, but the schema is not specified.
To learn more about Delta, see <docLink>
<columnName>
is not a valid partition column in table <tableName>
.
Found partition columns having invalid character(s) among " ,;{}()\n\t=". Please rename your partition columns. This check can be turned off by setting spark.conf.set("spark.databricks.delta.partitionColumnValidity.enabled", false); however, this is not recommended as other features of Delta may not work properly.
Using column <name>
of type <dataType>
as a partition column is not supported.
A partition path fragment should be of the form part1=foo/part2=bar
. The partition path: <path>
Protocol version cannot be downgraded from <oldProtocol>
to <newProtocol>
Unsupported Delta protocol version: table "<tableNameOrPath>
" requires reader version <readerRequired>
and writer version <writerRequired>
, but this version of Databricks supports reader versions <supportedReaders>
and writer versions <supportedWriters>
. Please upgrade to a newer release.
Function <function>
is an unsupported table valued function for CDC reads.
The provided timestamp <timestamp>
does not match the expected syntax <format>
.
A Delta log already exists at <path>
If you never deleted it, it's likely your query is lagging behind. Please delete its checkpoint to restart from scratch. To avoid this happening again, you can update the retention policy of your Delta table.
Materialized <rowTrackingColumn>
column name missing for <tableName>
.
Please use a limit less than Int.MaxValue - 8.
This commit has failed as it has been tried <numAttempts>
times but did not succeed.
This can be caused by the Delta table being committed continuously by many concurrent
commits.
Commit started at version: <startVersion>
Commit failed at version: <failVersion>
Number of actions attempted to commit: <numActions>
Total time spent attempting this commit: <timeSpent>
ms
File list must have at most <maxFileListSize>
entries, had <numFiles>
.
Cannot add column <newColumn>
with type VOID. Please explicitly specify a non-void type.
Failed to merge incompatible data types <currentDataType>
and <updateDataType>
Failed to merge decimal types with incompatible <decimalRanges>
Keeping the source of the MERGE
statement materialized has failed repeatedly.
There must be at least one WHEN
clause in a MERGE
statement.
Resolved attribute(s) <missingAttributes>
missing from <input>
in operator <merge>
Unexpected assignment key: <unexpectedKeyClass>
- <unexpectedKeyObject>
Cannot resolve <sqlExpr>
in <clause>
given <cols>
.
MetadataChangedException: The metadata of the Delta table has been changed by a concurrent update. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
Error getting change data for range [<startVersion>
, <endVersion>
] as change data was not
recorded for version [<version>
]. If you've enabled change data feed on this table,
use DESCRIBE HISTORY
to see when it was first enabled.
Otherwise, to start recording change data, use ALTER TABLE
table_name SET TBLPROPERTIES
(<key>
= true).
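For example, assuming a table named my_delta_table:
-- Check when change data feed was first enabled.
DESCRIBE HISTORY my_delta_table;
-- Start recording change data going forward.
ALTER TABLE my_delta_table SET TBLPROPERTIES (delta.enableChangeDataFeed = true);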
Cannot find <columnName>
in table columns: <columnList>
This table has the feature <featureName>
enabled which requires the presence of the CommitInfo action in every commit. However, the CommitInfo action is missing from commit version <version>
.
This table has the feature <featureName>
enabled which requires the presence of commitTimestamp in the CommitInfo action. However, this field has not been set in commit version <version>
.
<tableName>
is not a Delta table.
Table doesn't exist. Create an empty Delta table first using CREATE TABLE <tableName>
.
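For example (the table name and schema are illustrative):
-- Create an empty Delta table before writing to it.
CREATE TABLE my_delta_table (id BIGINT, name STRING) USING DELTA;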
Iceberg class was not found. Please ensure Delta Iceberg support is installed.
Please refer to <docLink>
for more details.
Column <columnName>
, which has a NOT NULL
constraint, is missing from the data being written into the table.
Partition column <columnName>
not found in schema <columnList>
Couldn't find all part files of the checkpoint version: <version>
CONVERT
TO DELTA
only supports parquet tables. Please rewrite your target as parquet.<path>
if it's a parquet directory.
SET
column <columnName>
not found given columns: <columnList>
.
Incompatible format detected.
You are trying to <operation> <path>
using Delta, but there is no
transaction log present. Check the upstream job to make sure that it is writing
using format("delta") and that you are trying to %1$s the table base path.
To learn more about Delta, see <docLink>
Specified mode '<mode>
' is not supported. Supported modes are: <supportedModes>
Multiple <startingOrEnding>
arguments provided for CDC read. Please provide one of either <startingOrEnding>
Timestamp or <startingOrEnding>
Version.
Multiple bloom filter index configurations passed to command for column: <columnName>
Cannot perform Merge as multiple source rows matched and attempted to modify the same
target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge,
when multiple source rows match on the same target row, the result may be ambiguous
as it is unclear which source row should be used to update or delete the matching
target row. You can preprocess the source table to eliminate the possibility of
multiple matches. Please refer to
<usageReference>
During <command>
, either both coordinated commits configurations ("delta.coordinatedCommits.commitCoordinator-preview", "delta.coordinatedCommits.commitCoordinatorConf-preview") are set in the command or neither of them. Missing: "<configuration>
". Please specify this configuration in the TBLPROPERTIES
clause or remove the other configuration, and then retry the command again.
During <command>
, either both coordinated commits configurations ("coordinatedCommits.commitCoordinator-preview", "coordinatedCommits.commitCoordinatorConf-preview") are set in the SparkSession configurations or neither of them. Missing: "<configuration>
". Please set this configuration in the SparkSession or unset the other configuration, and then retry the command again.
The following column name(s) are reserved for Delta bucketed table internal usage only: <names>
The input schema contains nested fields that are capitalized differently than the target table.
They need to be renamed to avoid the loss of data in these fields while writing to Delta.
Fields:
<fields>
.
Original schema:
<schema>
The <nestType>
type of the field <parent>
contains a NOT NULL
constraint. Delta does not support NOT NULL
constraints nested within arrays or maps. To suppress this error and silently ignore the specified constraints, set <configKey>
= true.
Parsed <nestType>
type:
<nestedPrettyJson>
Nested subquery is not supported in the <operation>
condition.
<numRows>
rows in <tableName>
violate the new CHECK
constraint (<checkConstraint>
)
<numRows>
rows in <tableName>
violate the new NOT NULL
constraint on <colName>
CHECK
constraint '<name>
' (<expr>
) should be a boolean expression.
Found <expr>
. A generated column cannot use a non-deterministic expression.
Non-deterministic functions are not supported in the <operation> <expression>
When there are more than one MATCHED
clauses in a MERGE
statement, only the last MATCHED
clause can omit the condition.
When there are more than one NOT MATCHED BY SOURCE
clauses in a MERGE
statement, only the last NOT MATCHED BY SOURCE
clause can omit the condition.
When there are more than one NOT MATCHED
clauses in a MERGE
statement, only the last NOT MATCHED
clause can omit the condition
Could not parse tag <tag>
.
File tags are: <tagList>
Data written into Delta needs to contain at least one non-partitioned column.<details>
Predicate references non-partition column '<columnName>
'. Only the partition columns may be referenced: [<columnList>
]
Non-partitioning column(s) <columnList>
are specified where only partitioning columns are expected: <fragment>
.
Delta catalog requires a single-part namespace, but <identifier>
is multi-part.
<table>
is not a Delta table. Please drop this table first if you would like to create it with Databricks Delta.
<tableName>
is not a Delta table. Please drop this table first if you would like to recreate it with Delta Lake.
Not nullable column not found in struct: <struct>
NOT NULL
constraint violated for column: <columnName>
.
A non-nullable nested field can't be added to a nullable parent. Please set the nullability of the parent column accordingly.
No commits found at <logPath>
No recreatable commits found at <logPath>
Operation not allowed: <operation>
cannot be performed on a table with redirect feature.
The no-redirect rules are not satisfied: <noRedirectRules>
.
Table <tableIdent>
not found
No startingVersion or startingTimestamp provided for CDC read.
Delta doesn't accept NullTypes in the schema for streaming writes.
Please either provide 'timestampAsOf' or 'versionAsOf' for time travel.
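In SQL, equivalent time travel reads can be expressed with TIMESTAMP AS OF and VERSION AS OF (the table name, timestamp, and version below are illustrative):
-- Read the table as of a timestamp ...
SELECT * FROM my_delta_table TIMESTAMP AS OF '2024-01-01T00:00:00';
-- ... or as of a specific version.
SELECT * FROM my_delta_table VERSION AS OF 10;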
<operation>
is only supported for Delta tables.
Please provide the path or table identifier for <operation>
.
Operation not allowed: <operation>
is not supported for Delta tables
Operation not allowed: <operation>
is not supported for Delta tables: <tableName>
<operation>
is not supported for column <colName>
with non-default collation <collation>
.
<operation>
is not supported for expression <exprText>
because it uses non-default collation.
<operation>
command on a temp view referring to a Delta table that contains generated columns is not supported. Please run the <operation>
command on the Delta table directly
Operation not allowed: <operation>
cannot be performed on a view.
OPTIMIZE FULL
is only supported for clustered tables with non-empty clustering columns.
Copy option overwriteSchema cannot be specified without setting OVERWRITE
= 'true'.
'overwriteSchema' cannot be used in dynamic partition overwrite mode.
Failed to cast value <value>
to <dataType>
for partition column <columnName>
Partition column <columnName>
not found in schema [<schemaMap>
]
Partition schema cannot be specified when converting Iceberg tables. It is automatically inferred.
<path>
doesn't exist, or is not a Delta table.
Cannot write to already existent path <path>
without setting OVERWRITE
= 'true'.
Committing to the Delta table version <version>
succeeded but error while executing post-commit hook <name> <message>
ProtocolChangedException: The protocol version of the Delta table has been changed by a concurrent update. <additionalInfo> <conflictingCommit>
Refer to <docLink>
for more details.
Protocol property <key>
needs to be an integer. Found <value>
Unable to upgrade only the reader protocol version to use table features. Writer protocol version must be at least <writerVersion>
to proceed. Refer to <docLink>
for more information on table protocol versions.
You are trying to read a Delta table <tableName>
that does not have any columns.
Write some new data with the option mergeSchema = true
to be able to read the table.
Please recheck your syntax for '<regExpOption>
'
You can't use replaceWhere in conjunction with an overwrite by filter
Written data does not conform to partial table overwrite condition or constraint '<replaceWhere>
'.
<message>
A 'replaceWhere' expression and 'partitionOverwriteMode'='dynamic' cannot both be set in the DataFrameWriter options.
'replaceWhere' cannot be used with data filters when 'dataChange' is set to false. Filters: <dataFilters>
Cannot assign row IDs without row count statistics.
Collect statistics for the table by running the following code in a Scala notebook and retry:
import com.databricks.sql.transaction.tahoe.DeltaLog
import com.databricks.sql.transaction.tahoe.stats.StatisticsCollection
import org.apache.spark.sql.catalyst.TableIdentifier
val log = DeltaLog.forTable(spark, TableIdentifier(table_name))
StatisticsCollection.recompute(spark, log)
Detected schema change:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory.
Detected schema change in version <version>
:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory. If the issue persists after
changing to a new checkpoint directory, you may need to change the existing
'startingVersion' or 'startingTimestamp' option to start from a version newer than
<version>
with a new checkpoint directory.
Detected schema change in version <version>
:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory.
The schema of your Delta table has changed in an incompatible way since your DataFrame
or DeltaTable object was created. Please redefine your DataFrame or DeltaTable object.
Changes:
<schemaDiff> <legacyFlagMessage>
Table schema is not provided. Please provide the schema (column definition) of the table when using REPLACE
table and an AS SELECT
query is not provided.
Table schema is not set. Write data into it or use CREATE TABLE
to set the schema.
The schema of the new Delta location is different than the current table schema.
original schema:
<original>
destination schema:
<destination>
If this is an intended change, you may turn this check off by running:
%%sql set <config>
= true
File <filePath>
referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE
statement. This table appears to be a shallow clone, if that is the case, this error can occur when the original table from which this table was cloned has deleted a file that the clone is still using. If you want any clones to be independent of the original table, use a DEEP clone instead.
Pre-defined properties that start with <prefix>
cannot be modified.
The data is restricted by recipient property <property>
that does not apply to the current recipient in the session.
For more details see DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED
<operation>
cannot be used in Delta Sharing views that are shared cross account.
Illegal authentication type <authenticationType>
for provider <provider>
.
Illegal authentication type <authenticationType>
for recipient <recipient>
.
Invalid name to reference a <type>
inside a Share. You can either use <type>
's name inside the share following the format of [schema].[<type>
], or you can also use the table's original full name following the format of [catalog].[schema].[<type>].
If you are unsure about what name to use, you can run "SHOW ALL IN SHARE
[share]" and find the name of the <type>
to remove: column "name" is the <type>
's name inside the share and column "shared_object" is the <type>
's original full name.
There are more than two tokens for recipient <recipient>
.
Recipient property <property>
does not exist.
Recipient tokens are missing for recipient <recipient>
.
Non-partitioning column(s) <badCols>
are specified for SHOW PARTITIONS
SHOW PARTITIONS
is not allowed on a table that is not partitioned: <tableName>
Detected deleted data (for example <removedFile>
) from streaming source at version <version>
. This is currently not supported. If you'd like to ignore deletes, set the option 'ignoreDeletes' to 'true'. The source table can be found at path <dataPath>
.
Detected a data update (for example <file>
) in the source table at version <version>
. This is currently not supported. If this is going to happen regularly and you are okay to skip changes, set the option 'skipChangeCommits' to 'true'. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using DLT. If you need to handle these changes, please switch to MVs. The source table can be found at path <dataPath>
.
Please either provide '<version>
' or '<timestamp>
'
<statsType>
stats not found for column in Parquet metadata: <columnPath>
.
We've detected one or more non-additive schema change(s) (<opType>
) between Delta version <previousSchemaChangeVersion>
and <currentSchemaChangeVersion>
in the Delta streaming source.
Please check if you want to manually propagate the schema change(s) to the sink table before we proceed with stream processing using the finalized schema at <currentSchemaChangeVersion>
.
Once you have fixed the schema of the sink table or have decided there is no need to fix, you can set (one of) the following SQL configurations to unblock the non-additive schema change(s) and continue stream processing.
To unblock for this particular stream just for this series of schema change(s): set <allowCkptVerKey> = <allowCkptVerValue>
.
To unblock for this particular stream: set <allowCkptKey> = <allowCkptValue>
To unblock for all streams: set <allowAllKey> = <allowAllValue>
.
Alternatively if applicable, you may replace the <allowAllMode>
with <opSpecificMode>
in the SQL conf to unblock stream for just this schema change type.
Failed to obtain Delta log snapshot for the start version when checking column mapping schema changes. Please choose a different start version, or force enable streaming read at your own risk by setting '<config>
' to 'true'.
Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename or drop or datatype changes).
For further information and possible next steps to resolve this issue, please review the documentation at <docLink>
Read schema: <readSchema>
. Incompatible data schema: <incompatibleSchema>
.
Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename or drop or datatype changes).
Please provide a 'schemaTrackingLocation' to enable non-additive schema evolution for Delta stream processing.
See <docLink>
for more details.
Read schema: <readSchema>
. Incompatible data schema: <incompatibleSchema>
.
The schema, table configuration or protocol of your Delta table has changed during streaming.
The schema or metadata tracking log has been updated.
Please restart the stream to continue processing using the updated metadata.
Updated schema: <schema>
.
Updated table configurations: <config>
.
Updated table protocol: <protocol>
Streaming from source table <tableId>
with schema tracking does not support row filters or column masks.
Please drop the row filters or column masks, or disable schema tracking.
Detected conflicting schema location '<loc>
' while streaming from table or table located at '<table>
'.
Another stream may be reusing the same schema location, which is not allowed.
Please provide a new unique schemaTrackingLocation
path or streamingSourceTrackingId
as a reader option for one of the streams from this table.
Schema location '<schemaTrackingLocation>
' must be placed under checkpoint location '<checkpointLocation>
'.
Incomplete log file in the Delta streaming source schema log at '<location>
'.
The schema log may have been corrupted. Please pick a new schema location.
Detected incompatible Delta table id when trying to read Delta stream.
Persisted table id: <persistedId>
, Table id: <tableId>
The schema log might have been reused. Please pick a new schema location.
Detected incompatible partition schema when trying to read Delta stream.
Persisted schema: <persistedSchema>
, Delta partition schema: <partitionSchema>
Please pick a new schema location to reinitialize the schema log if you have manually changed the table's partition schema recently.
We could not initialize the Delta streaming source schema log because
we detected an incompatible schema or protocol change while serving a streaming batch from table version <a>
to <b>
.
Failed to parse the schema from the Delta streaming source schema log.
The schema log may have been corrupted. Please pick a new schema location.
Unable to enable Change Data Capture on the table. The table already contains
reserved columns <columnList>
that will
be used internally as metadata for the table's Change Data Feed. To enable
Change Data Feed on the table, rename/drop these columns.
Table <tableName>
already exists.
Currently DeltaTable.forPath only supports hadoop configuration keys starting with <allowedPrefixes>
but got <unsupportedOptions>
The Delta table at <tableLocation>
has been replaced while this command was using the table.
Table id was <oldId>
but is now <newId>
.
Please retry the current command to ensure it reads a consistent view of the table.
The location of the existing table <tableName>
is <existingTableLocation>
. It doesn't match the specified location <tableLocation>
.
Delta table <tableName>
doesn't exist.
Table is not supported in <operation>
. Please use a path instead.
<tableName>
is not a Delta table. <operation>
is only supported for Delta tables.
Target table final schema is empty.
The provided timestamp (<providedTimestamp>
) is after the latest version available to this
table (<tableName>
). Please use a timestamp before or at <maximumTimestamp>
.
The provided timestamp (<expr>
) cannot be converted to a valid timestamp.
<timeTravelKey>
needs to be a valid begin value.
<path>
: Unable to reconstruct state at version <version>
as the transaction log has been truncated due to manual deletion or the log retention policy (<logRetentionKey>=<logRetention>
) and checkpoint retention policy (<checkpointRetentionKey>=<checkpointRetention>
)
Operation not allowed: TRUNCATE TABLE
on Delta tables does not support partition predicates; use DELETE
to delete specific partitions or rows.
Found <udfExpr>
. A generated column cannot use a user-defined function
Unexpected action expression <expression>
.
Expecting <expectedColsSize>
partition column(s): <expectedCols>
, but found <parsedColsSize>
partition column(s): <parsedCols>
from parsing the file name: <path>
Expect a full scan of Delta sources, but found a partial scan. path:<path>
Expecting partition column <expectedCol>
, but found partition column <parsedCol>
from parsing the file name: <path>
CONVERT
TO DELTA
was called with a partition schema different from the partition schema inferred from the catalog, please avoid providing the schema so that the partition schema can be chosen from the catalog.
catalog partition schema:
<catalogPartitionSchema>
provided partition schema:
<userPartitionSchema>
delta.universalFormat.compatibility.location cannot be changed.
delta.universalFormat.compatibility.location is not registered in the catalog.
Missing or invalid location for Uniform compatibility format. Please set an empty directory for delta.universalFormat.compatibility.location.
Failed reason:
For more details see DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION
Read Iceberg with Delta Uniform has failed.
For more details see DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION
Create or Refresh Uniform ingress table is not supported.
Format <fileFormat>
is not supported. Only iceberg as original file format is supported.
Universal Format is only supported on Unity Catalog tables.
REFRESH
identifier SYNC UNIFORM
is not supported for reason:
For more details see DELTA_UNIFORM_REFRESH_NOT_SUPPORTED
REFRESH TABLE
with METADATA_PATH
is not supported for managed Iceberg tables
Failed to convert the table version <version>
to the universal format <format>
. <message>
The validation of Universal Format (<format>
) has failed: <violation>
Unknown configuration was specified: <config>
Unknown privilege: <privilege>
Unknown ReadLimit: <limit>
Unrecognized column change <otherClass>
. You may be running an out-of-date Delta Lake version.
Unrecognized invariant. Please upgrade your Spark version.
Unrecognized log file <fileName>
Attempted to unset non-existent property '<property>
' in table <tableName>
<path>
does not support adding files with an absolute path
ALTER TABLE CHANGE COLUMN
is not supported for changing column <fieldPath>
from <oldField>
to <newField>
Unsupported ALTER TABLE REPLACE COLUMNS
operation. Reason: <details>
Failed to change schema from:
<oldSchema>
to:
<newSchema>
You tried to REPLACE
an existing table (<tableName>
) with CLONE
. This operation is
unsupported. Try a different target for CLONE
or delete the table at the current target.
Changing column mapping mode from '<oldMode>
' to '<newMode>
' is not supported.
Your current table protocol version does not support changing column mapping modes
using <config>
.
Required Delta protocol version for column mapping:
<requiredVersion>
Your table's current Delta protocol version:
<currentVersion>
<advice>
Schema change is detected:
old schema:
<oldTableSchema>
new schema:
<newTableSchema>
Schema changes are not allowed during the change of column mapping mode.
Writing data with column mapping mode is not supported.
Creating a bloom filter index on a column with type <dataType>
is unsupported: <columnName>
Can't add a comment to <fieldPath>
. Adding a comment to a map key/value or array element is not supported.
Found columns using unsupported data types: <dataTypeList>
. You can set '<config>
' to 'false' to disable the type check. Disabling this type check may allow users to create unsupported Delta tables and should only be used when trying to read/write legacy tables.
<dataType>
cannot be the result of a generated column
Deep clone is not supported for this Delta version.
<view>
is a view. DESCRIBE DETAIL
is only supported for tables.
Dropping clustering columns (<columnList>
) is not allowed.
DROP COLUMN
is not supported for your Delta table. <advice>
Can only drop nested columns from StructType. Found <struct>
Dropping partition columns (<columnList>
) is not allowed.
Unsupported expression type(<expType>
) for <causedBy>
. The supported types are [<supportedTypes>
].
<expression>
cannot be used in a generated column
Unsupported Delta read feature: table "<tableNameOrPath>
" requires reader table feature(s) that are unsupported by this version of Databricks: <unsupported>
. Please refer to <link>
for more information on Delta Lake feature compatibility.
Unsupported Delta write feature: table "<tableNameOrPath>
" requires writer table feature(s) that are unsupported by this version of Databricks: <unsupported>
. Please refer to <link>
for more information on Delta Lake feature compatibility.
Table feature(s) configured in the following Spark configs or Delta table properties are not recognized by this version of Databricks: <configs>
.
Expecting the status for table feature <feature>
to be "supported", but got "<status>
".
Updating nested fields is only supported for StructType, but you are trying to update a field of <columnName>
, which is of type: <dataType>
.
The 'FSCK REPAIR TABLE
' command is not supported on table versions with missing deletion vector files.
Please contact support.
The 'GENERATE
symlink_format_manifest' command is not supported on table versions with deletion vectors.
In order to produce a version of the table without deletion vectors, run 'REORG TABLE
table APPLY (PURGE
)'. Then re-run the 'GENERATE
' command.
Make sure that no concurrent transactions are adding deletion vectors again between REORG
and GENERATE
.
If you need to generate manifests regularly, or you cannot prevent concurrent transactions, consider disabling deletion vectors on this table using 'ALTER TABLE
table SET TBLPROPERTIES
(delta.enableDeletionVectors = false)'.
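A minimal PySpark sketch of the remediation steps described in the message above; the table name main.default.events is hypothetical, and whether to disable deletion vectors permanently depends on your setup.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
table = "main.default.events"  # hypothetical table name, for illustration only

# Rewrite data files so the current table version carries no deletion vectors.
spark.sql(f"REORG TABLE {table} APPLY (PURGE)")

# Re-run manifest generation once the purge has committed.
spark.sql(f"GENERATE symlink_format_manifest FOR TABLE {table}")

# Optional: if manifests are generated regularly, stop new deletion vectors from being created.
spark.sql(f"ALTER TABLE {table} SET TBLPROPERTIES (delta.enableDeletionVectors = false)")
```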
Invariants on nested fields other than StructTypes are not supported.
In subquery is not supported in the <operation>
condition.
listKeywithPrefix not available
Manifest generation is not supported for tables that leverage column mapping, as external readers cannot read these Delta tables. See Delta documentation for more details.
MERGE INTO
operations with schema evolution do not currently support writing CDC output.
Multi-column In predicates are not supported in the <operation>
condition.
Creating a bloom filter index on a nested column is currently unsupported: <columnName>
Nested field is not supported in the <operation>
(field = <fieldName>
).
The clone destination table is non-empty. Please TRUNCATE
or DELETE FROM
the table before running CLONE
.
Data source <dataSource>
does not support <mode>
output mode
Creating a bloom filter index on a partitioning column is unsupported: <columnName>
Column rename is not supported for your Delta table. <advice>
Delta does not support specifying the schema at read time.
SORTED BY
is not supported for Delta bucketed tables
<operation>
destination only supports Delta sources.
<plan>
Specifying static partitions in the partition spec is currently not supported during inserts
Unsupported strategy name: <strategy>
Subqueries are not supported in the <operation>
(condition = <cond>
).
Subquery is not supported in partition predicates.
Cannot specify time travel in multiple formats.
Cannot time travel views, subqueries, streams or change data feed queries.
Truncating sample tables is not supported
Unable to operate on this table because an unsupported type change was applied. Field <fieldName>
was changed from <fromType>
to <toType>
.
Please provide the base path (<baseDeltaPath>
) when Vacuuming Delta tables. Vacuuming specific partitions is currently not supported.
Table implementation does not support writes: <tableName>
You are trying to perform writes on a table which has been registered with the commit coordinator <coordinatorName>
. However, no implementation of this coordinator is available in the current environment and writes without coordinators are not allowed.
Writing to sample tables is not supported
Cannot cast <fromCatalog>
to <toCatalog>
. All nested columns must match.
VACUUM
on data files succeeded, but COPY INTO
state garbage collection failed.
Versions (<versionList>
) are not contiguous.
For more details see DELTA_VERSIONS_NOT_CONTIGUOUS
CHECK
constraint <constraintName> <expression>
violated by row with values:
<values>
The validation of the properties of table <table>
has been violated:
For more details see DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED
<viewIdentifier>
is a view. You may not write data into a view.
Z-Ordering column <columnName>
does not exist in data schema.
Z-Ordering on <cols>
will be
ineffective, because we currently do not collect stats for these columns. Please refer to
<link>
for more information on data skipping and z-ordering. You can disable
this check by setting
'%%sql set <zorderColStatKey>
= false'
<colName>
is a partition column. Z-Ordering can only be performed on data columns
SQLSTATE: none assigned
Activation nonce not found. The activation link you used is invalid or has expired. Regenerate the activation link and try again.
SQLSTATE: none assigned
Sharing between <regionHint>
regions and regions outside of it is not supported.
SQLSTATE: none assigned
The view defined with the current_recipient
function is for sharing only and can only be queried from the data recipient side. The provided securable with id <securableId>
is not a Delta Sharing View.
SQLSTATE: none assigned
The provided securable kind <securableKind>
does not support mutability in Delta Sharing.
SQLSTATE: none assigned
The provided securable kind <securableKind>
does not support rotate token action initiated by Marketplace service.
SQLSTATE: none assigned
<dsError>
: Authentication type not available in provider entity <providerEntity>
.
SQLSTATE: none assigned
<dsError>
: Unable to access change data feed for <tableName>
. CDF is not enabled on the original delta table. Please contact your data provider.
SQLSTATE: none assigned
<dsError>
: Unable to access change data feed for <tableName>
. CDF is not shared on the table. Please contact your data provider.
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: Cloud vendor is temporarily unavailable for <rpcName>
, please retry.<traceId>
SQLSTATE: none assigned
<dsError>
: Data materialization task run <runId>
from org <orgId>
failed at command <command>
SQLSTATE: none assigned
<dsError>
: Data materialization task run <runId>
from org <orgId>
does not support command <command>
SQLSTATE: none assigned
<dsError>
: Could not find valid namespace to create materialization for <tableName>
. Please contact your data provider to fix this.
SQLSTATE: none assigned
<dsError>
: Data materialization task run <runId>
from org <orgId>
does not exist
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: Couldn't find all part files of the checkpoint at version: <version>
. <suggestion>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: The provided securable kind <securableKind>
does not support expire token action initiated by Marketplace service.
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <storage>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: Network connection is flaky for <rpcName>
, please retry.<traceId>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <key>
is not set by the caller.
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: Invalid Azure path: <path>
.
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: Query failed for <schema>
.<table>
from Share <share>
.
SQLSTATE: none assigned
<dsError>
: Query timed out for <schema>
.<table>
from Share <share>
after <timeoutInSec>
seconds.
SQLSTATE: none assigned
<dsError>
: Idempotency key is required when querying <schema>
.<table>
from Share <share>
asynchronously.
SQLSTATE: none assigned
<dsError>
: Please only provide one of: <parameters>
.
SQLSTATE: none assigned
<dsError>
: No metastore assigned for the current workspace (workspaceId: <workspaceId>
).
SQLSTATE: none assigned
<dsError>
: Pagination or query arguments mismatch.
SQLSTATE: none assigned
<dsError>
: Partition column [<renamedColumns>
] renamed on the shared table. Please contact your data provider to fix this.
SQLSTATE: none assigned
<dsError>
: You can only query table data since version <startVersion>
.
SQLSTATE: none assigned
<dsError>
: A timeout occurred when processing <queryType>
on <tableName>
after <numActions>
updates across <numIter>
iterations.<progressUpdate> <suggestion> <traceId>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: The <resource>
exceeded limit: [<limitSize>
]<suggestion>
.<traceId>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
Cannot grant privileges on <securableType>
to system generated group <principal>
.
SQLSTATE: none assigned
<dsError>
: Time travel query is not permitted unless history is shared on <tableName>
. Please contact your data provider.
SQLSTATE: none assigned
<dsError>
: Unauthorized.
SQLSTATE: none assigned
<dsError>
: Unauthorized D2O OIDC Recipient: <message>
.
SQLSTATE: none assigned
<dsError>
: <traceId>
SQLSTATE: none assigned
<dsError>
: Unknown query id <queryID>
for <schema>
.<table>
from Share <share>
.
SQLSTATE: none assigned
<dsError>
: Unknown query status for query id <queryID>
for <schema>
.<table>
from Share <share>
.
SQLSTATE: none assigned
<dsError>
: Unknown rpc <rpcName>
.
SQLSTATE: none assigned
<dsError>
: Delta protocol reader version <tableReaderVersion>
is higher than <supportedReaderVersion>
and cannot be supported in the delta sharing server.
SQLSTATE: none assigned
<dsError>
: Table features <tableFeatures>
are found in table<versionStr> <historySharingStatusStr> <optionStr>
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: Unsupported storage scheme: <scheme>
.
SQLSTATE: none assigned
<dsError>
: Could not retrieve <schema>
.<table>
from Share <share>
because table with type [<tableType>
] is currently unsupported in Delta Sharing protocol.
SQLSTATE: none assigned
<dsError>
: <message>
SQLSTATE: none assigned
<dsError>
: The following function(s): <functions>
are not allowed in the view sharing query.
SQLSTATE: none assigned
<dsError>
: Workspace <workspaceId>
domain is not set.
SQLSTATE: none assigned
<dsError>
: Workspace <workspaceId>
was not found.
Schema evolution mode <addNewColumnsMode>
is not supported when the schema is specified. To use this mode, you can provide the schema through cloudFiles.schemaHints
instead.
Found notification-setup authentication options for the (default) directory
listing mode:
<options>
If you wish to use the file notification mode, please explicitly set:
.option("cloudFiles.<useNotificationsKey>
", "true")
Alternatively, if you want to skip the validation of your options and ignore these
authentication options, you can set:
.option("cloudFiles.ValidateOptionsKey>", "false")
Incremental listing mode (cloudFiles.<useIncrementalListingKey>
)
and file notification (cloudFiles.<useNotificationsKey>
)
have been enabled at the same time.
Please make sure that you select only one.
Require adlsBlobSuffix and adlsDfsSuffix for Azure
The <storeType>
in the file event <fileEvent>
is different from expected by the source: <source>
.
Cannot evolve schema when the schema log is empty. Schema log location: <logPath>
Cannot parse the following queue message: <message>
Cannot resolve container name from path: <path>
, Resolved uri: <uri>
Cannot run directory listing when there is an async backfill thread running
Cannot turn on cloudFiles.cleanSource and cloudFiles.allowOverwrites at the same time.
Auto Loader cannot delete processed files because it does not have write permissions to the source directory.
<reason>
To fix you can either:
- Grant write permissions to the source directory OR
- Set cleanSource to 'OFF'
You could also unblock your stream by setting the SQLConf spark.databricks.cloudFiles.cleanSource.disabledDueToAuthorizationErrors to 'true'.
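A minimal sketch of the unblocking options above, using the option and configuration names taken from the message itself; the stream definition and source path are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Option A: turn off source clean-up for this stream.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.cleanSource", "OFF")
      .load("/Volumes/main/default/landing"))  # hypothetical source directory

# Option B: acknowledge the authorization error so clean-up is skipped and the stream continues.
spark.conf.set(
    "spark.databricks.cloudFiles.cleanSource.disabledDueToAuthorizationErrors", "true")
```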
There was an error when trying to infer the partition schema of your table. You have the same column duplicated in your data and partition paths. To ignore the partition value, please provide your partition columns explicitly by using: .option("cloudFiles.<partitionColumnsKey>
", "{comma-separated-list}")
Cannot infer schema when the input path <path>
is empty. Please try to start the stream when there are files in the input path, or specify the schema.
Failed to create an Event Grid subscription. Please make sure that your service
principal has <permissionType>
Event Grid Subscriptions. See more details at:
<docLink>
Failed to create event grid subscription. Please ensure that Microsoft.EventGrid is
registered as resource provider in your subscription. See more details at:
<docLink>
Failed to create an Event Grid subscription. Please make sure that your storage
account (<storageAccount>
) is under your resource group (<resourceGroup>
) and that
the storage account is a "StorageV2 (general purpose v2)" account. See more details at:
<docLink>
Auto Loader event notification mode is not supported for <cloudStore>
.
Failed to check if the stream is new
Failed to create subscription: <subscriptionName>
. A subscription with the same name already exists and is associated with another topic: <otherTopicName>
. The desired topic is <proposedTopicName>
. Either delete the existing subscription or create a subscription with a new resource suffix.
Failed to create topic: <topicName>
. A topic with the same name already exists.<reason>
Remove the existing topic or try again with another resource suffix
Failed to delete notification with id <notificationId>
on bucket <bucketName>
for topic <topicName>
. Please retry or manually remove the notification through the GCP console.
Failed to deserialize persisted schema from string: '<jsonSchema>
'
Cannot evolve schema without a schema log.
Failed to find provider for <fileFormatInput>
Failed to infer schema for format <fileFormatInput>
from existing files in input path <path>
.
For more details see CF_FAILED_TO_INFER_SCHEMA
Failed to write to the schema log at location <path>
.
Could not find required option: cloudFiles.format.
Found multiple (<num>
) subscriptions with the Auto Loader prefix for topic <topicName>
:
<subscriptionList>
There should only be one subscription per topic. Please manually ensure that your topic does not have multiple subscriptions.
Please either provide all of the following: <clientEmail>
, <client>
,
<privateKey>
, and <privateKeyId>
or provide none of them in order to use the default
GCP credential provider chain for authenticating with GCP resources.
Received too many labels (<num>
) for GCP resource. The maximum label count per resource is <maxNum>
.
Received too many resource tags (<num>
) for GCP resource. The maximum resource tag count per resource is <maxNum>
, as resource tags are stored as GCP labels on resources, and Databricks specific tags consume some of this label quota.
Incomplete log file in the schema log at path <path>
Incomplete metadata file in the Auto Loader checkpoint
CloudFiles is a streaming source. Please use spark.readStream instead of spark.read. To disable this check, set <cloudFilesFormatValidationEnabled>
to false.
The cloud_files method accepts two required string parameters: the path to load from, and the file format. File reader options must be provided in a string key-value map. e.g. cloud_files("path", "json", map("option1", "value1")). Received: <params>
To use 'cloudFiles' as a streaming source, please provide the file format with the option 'cloudFiles.format', and use .load() to create your DataFrame. To disable this check, set <cloudFilesFormatValidationEnabled>
to false.
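A minimal sketch of the streaming usage these messages expect: spark.readStream with the cloudFiles format and an explicit cloudFiles.format option. The input path, checkpoint location, and target table below are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Use spark.readStream (not spark.read) and provide the file format explicitly.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .load("s3://my-bucket/raw/events"))  # hypothetical input path

(df.writeStream
   .option("checkpointLocation", "s3://my-bucket/chk/events")  # hypothetical checkpoint
   .toTable("main.default.events"))  # hypothetical target table
```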
Internal error.
For more details see CF_INTERNAL_ERROR
Invalid ARN: <arn>
The private key provided with option cloudFiles.certificate cannot be parsed. Please provide a valid public key in PEM format.
The private key provided with option cloudFiles.certificatePrivateKey cannot be parsed. Please provide a valid private key in PEM format.
This checkpoint is not a valid CloudFiles source
Invalid mode for clean source option <value>
.
Invalid resource tag key for GCP resource: <key>
. Keys must start with a lowercase letter, be within 1 to 63 characters long, and contain only lowercase letters, numbers, underscores (_), and hyphens (-).
Invalid resource tag value for GCP resource: <value>
. Values must be within 0 to 63 characters long and must contain only lowercase letters, numbers, underscores (_), and hyphens (-).
Auto Loader does not support the following options when used with managed file events:
<optionList>
We recommend that you remove these options and then restart the stream.
Invalid response from managed file events service. Please contact Databricks support for assistance.
For more details see CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE
cloudFiles.<schemaEvolutionModeKey>
must be one of {
"<addNewColumns>
"
"<failOnNewColumns>
"
"<rescue>
"
"<noEvolution>
"}
Schema hints can only specify a particular column once.
In this case, redefining column: <columnName>
multiple times in schemaHints:
<schemaHints>
Schema hints can not be used to override maps' and arrays' nested types.
Conflicted column: <columnName>
latestOffset should be called with a ReadLimit on this source.
Log file was malformed: failed to read correct log version from <fileName>
.
You have requested Auto Loader to ignore existing files in your external location by setting includeExistingFiles to false. However, the managed file events service is still discovering existing files in your external location. Please try again after managed file events has completed discovering all files in your external location.
You are using Auto Loader with managed file events, but it appears that the external location for your input path '<path>
' does not have file events enabled or the input path is invalid. Please request your Databricks Administrator to enable file events on the external location for your input path.
You are using Auto Loader with managed file events, but you do not have access to the external location or volume for input path '<path>
' or the input path is invalid. Please request your Databricks Administrator to grant read permissions for the external location or volume or provide a valid input path within an existing external location or volume.
Auto Loader with managed file events is only available on Databricks serverless. To continue, please move this workload to Databricks serverless or turn off the cloudFiles.useManagedFileEvents option.
max must be positive
Multiple streaming queries are concurrently using <metadataFile>
The metadata file in the streaming source checkpoint directory is missing. This metadata
file contains important default options for the stream, so the stream cannot be restarted
right now. Please contact Databricks support for assistance.
Partition column <columnName>
does not exist in the provided schema:
<schema>
Please specify a schema using .schema() if a path is not provided to the CloudFiles source while using file notification mode. Alternatively, to have Auto Loader infer the schema, please provide a base path in .load().
Found existing notifications for topic <topicName>
on bucket <bucketName>
:
notification,id
<notificationList>
To avoid polluting the subscriber with unintended events, please delete the above notifications and retry.
New partition columns were inferred from your files: [<filesList>
]. Please provide all partition columns in your schema or provide a list of partition columns which you would like to extract values for by using: .option("cloudFiles.partitionColumns", "{comma-separated-list|empty-string}")
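A minimal sketch of supplying the partition columns explicitly via the cloudFiles.partitionColumns option named in the message above; the column names and input path are hypothetical, and an empty string can be passed instead to ignore partition values entirely.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "parquet")
      .option("cloudFiles.partitionColumns", "year,month,day")  # hypothetical column names
      .load("gs://my-bucket/events"))  # hypothetical input path
```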
There was an error when trying to infer the partition schema of the current batch of files. Please provide your partition columns explicitly by using: .option("cloudFiles.<partitionColumnOption>
", "{comma-separated-list}")
Cannot read files when the input path <path>
does not exist. Please make sure the input path exists and re-try.
Periodic backfill is not supported if asynchronous backfill is disabled. You can enable asynchronous backfill/directory listing by setting spark.databricks.cloudFiles.asyncDirListing
to true
Found mismatched event: key <key>
doesn't have the prefix: <prefix>
<message>
If you don't need to make any other changes to your code, then please set the SQL
configuration: '<sourceProtocolVersionKey> = <value>
'
to resume your stream. Please refer to:
<docLink>
for more details.
Could not get default AWS Region. Please specify a region using the cloudFiles.region option.
Failed to create notification services: the resource suffix cannot be empty.
Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-) and underscores (_).
Failed to create notification services: the resource suffix can only have lowercase letters, numbers, and dashes (-).
Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-), underscores (_), periods (.), tildes (~), plus signs (+), and percent signs (<percentSign>
).
Failed to create notification services: the resource suffix cannot have more than <limit>
characters.
Failed to create notification services: the resource suffix must be between <lowerLimit>
and <upperLimit>
characters.
Found restricted GCP resource tag key (<key>
). The following GCP resource tag keys are restricted for Auto Loader: [<restrictedKeys>
]
cloudFiles.cleanSource.retentionDuration cannot be greater than cloudFiles.maxFileAge.
Failed to create notification for topic: <topic>
with prefix: <prefix>
. There is already a topic with the same name with another prefix: <oldPrefix>
. Try using a different resource suffix for setup or delete the existing setup.
Please provide the source directory path with option path
The cloud files source only supports S3, Azure Blob Storage (wasb/wasbs) and Azure Data Lake Gen1 (adl) and Gen2 (abfs/abfss) paths right now. path: '<path>
', resolved uri: '<uri>
'
The cloud_files_state function accepts a string parameter representing the checkpoint directory of a cloudFiles stream or a multi-part tableName identifying a streaming table, and an optional second integer parameter representing the checkpoint version to load state for. The second parameter may also be 'latest' to read the latest checkpoint. Received: <params>
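A minimal sketch of querying Auto Loader state by checkpoint directory as described above; the checkpoint path is hypothetical, and per the message a streaming table name or a checkpoint version (or 'latest') can be passed instead.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Inspect the files discovered by an Auto Loader stream from its checkpoint.
state = spark.sql(
    "SELECT * FROM cloud_files_state('/Volumes/main/default/chk/ingest')")  # hypothetical path
state.show()
```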
The input checkpoint path <path>
is invalid. Either the path does not exist or there are no cloud_files sources found.
The specified version <version>
does not exist, or was removed during analysis.
<threadName>
thread is dead.
Unable to derive the stream checkpoint location from the source checkpoint location: <checkPointLocation>
Unable to detect the source file format from <fileSize>
sampled file(s), found <formats>
. Please specify the format.
Unable to extract bucket information. Path: '<path>
', resolved uri: '<uri>
'.
Unable to extract key information. Path: '<path>
', resolved uri: '<uri>
'.
Unable to extract storage account information; path: '<path>
', resolved uri: '<uri>
'
Received a directory rename event for the path <path>
, but we are unable to list this directory efficiently. In order for the stream to continue, set the option 'cloudFiles.ignoreDirRenames' to true, and consider enabling regular backfills with cloudFiles.backfillInterval for this data to be processed.
Unexpected ReadLimit: <readLimit>
Found unknown option keys:
<optionList>
Please make sure that all provided option keys are correct. If you want to skip the
validation of your options and ignore these unknown options, you can set:
.option("cloudFiles.<validateOptions>
", "false")
Unknown ReadLimit: <readLimit>
The SQL function 'cloud_files' to create an Auto Loader streaming source is supported only in a Delta Live Tables pipeline. See more details at:
<docLink>
Schema inference is not supported for format: <format>
. Please specify the schema.
UnsupportedLogVersion: maximum supported log version is v<maxVersion>, but encountered v<version>
. The log file was produced by a newer version of DBR and cannot be read by this version. Please upgrade.
Schema evolution mode <mode>
is not supported for format: <format>
. Please set the schema evolution mode to 'none'.
Reading from a Delta table is not supported with this syntax. If you would like to consume data from Delta, please refer to the docs: read a Delta table (<deltaDocLink>
), or read a Delta table as a stream source (<streamDeltaDocLink>
). The streaming source from Delta is already optimized for incremental consumption of data.
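A minimal sketch of the two Delta read patterns the message above points to, with a hypothetical table name.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Batch read of a Delta table.
batch_df = spark.read.table("main.default.events")  # hypothetical table name

# Incremental consumption: read the same Delta table as a streaming source.
stream_df = spark.readStream.table("main.default.events")
```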
Error parsing EWKB: <parseError>
at position <pos>
Error parsing GeoJSON: <parseError>
at position <pos>
For more details see GEOJSON_PARSE_ERROR
<h3Cell>
is not a valid H3 cell ID
For more details see H3_INVALID_CELL_ID
H3 grid distance <k>
must be non-negative
For more details see H3_INVALID_GRID_DISTANCE_VALUE
H3 resolution <r>
must be between <minR>
and <maxR>
, inclusive
For more details see H3_INVALID_RESOLUTION_VALUE
<h3Expression>
is disabled or unsupported. Consider enabling Photon or switching to a tier that supports H3 expressions
For more details see H3_NOT_ENABLED
A pentagon was encountered while computing the hex ring of <h3Cell>
with grid distance <k>
H3 grid distance between <h3Cell1>
and <h3Cell2>
is undefined
Arguments to "<sqlFunction>
" must have the same SRID value. SRID values found: <srid1>
, <srid2>
"<sqlFunction>
": <reason>
Argument to "<sqlFunction>
" must be of type <validTypes>
<sqlFunction>
: Invalid or unsupported CRS transformation from SRID <srcSrid>
to SRID <trgSrid>
Endianness '<e>
' must be either 'NDR' (little-endian) or 'XDR' (big-endian)
<sqlFunction>
: Invalid geohash value: '<geohash>
'. Geohash values must be valid lowercase base32 strings.
Precision <p>
must be between <minP>
and <maxP>
, inclusive
Invalid or unsupported SRID <srid>
<stExpression>
is disabled or unsupported. Consider enabling Photon or switching to a tier that supports ST expressions
The GEOGRAPHY
and GEOMETRY
data types cannot be returned in queries. Use one of the following SQL expressions to convert them to standard interchange formats: <projectionExprs>
.
Error parsing WKB: <parseError>
at position <pos>
For more details see WKB_PARSE_ERROR
Error parsing WKT: <parseError>
at position <pos>
For more details see WKT_PARSE_ERROR
Column <columnName>
conflicts with another column with the same name but with/without trailing whitespace (for example, an existing column named <columnName>
). Please rename the column to a different name.
SQLSTATE: none assigned
Invalid request to get connection-level credentials for connection of type <connectionType>
. Such credentials are only available for managed PostgreSQL connections.
SQLSTATE: none assigned
Connection type '<connectionType>
' is not enabled. Please enable the connection to use it.
SQLSTATE: none assigned
There is already a Recipient object '<existingRecipientName>
' with the same sharing identifier '<existingMetastoreId>
'.
SQLSTATE: none assigned
Data of a Delta Sharing Securable Kind <securableKindName>
are read-only and cannot be created, modified, or deleted.
SQLSTATE: none assigned
Credential vending is rejected for non Databricks Compute environment due to External Data Access being disabled for metastore <metastoreName>
. Please contact your metastore admin to enable 'External Data Access' configuration on the metastore.
SQLSTATE: none assigned
Table with id <tableId>
cannot be accessed from outside of Databricks Compute Environment due to its kind being <securableKind>
. Only 'TABLE_EXTERNAL
', 'TABLE_DELTA_EXTERNAL
' and 'TABLE_DELTA
' table kinds can be accessed externally.
SQLSTATE: none assigned
Privilege EXTERNAL
USE SCHEMA
is not applicable to this entity <assignedSecurableType>
and can only be assigned to a schema or catalog. Please remove the privilege from the <assignedSecurableType>
object and assign it to a schema or catalog instead.
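A minimal sketch of assigning the privilege at a supported level, as the message above suggests; the schema and principal names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grant EXTERNAL USE SCHEMA on a schema (or a catalog) rather than on an unsupported object.
spark.sql("GRANT EXTERNAL USE SCHEMA ON SCHEMA main.default TO `analysts`")  # hypothetical names
```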
SQLSTATE: none assigned
Table with id <tableId>
cannot be written from outside of Databricks Compute Environment due to its kind being <securableKind>
. Only 'TABLE_EXTERNAL
' and 'TABLE_DELTA_EXTERNAL
' table kinds can be written externally.
SQLSTATE: none assigned
The storage location for a foreign catalog of type <catalogType>
will be used for unloading data and can not be read-only.
SQLSTATE: none assigned
The number of <resourceType>
s for input path <url>
exceeds the allowed limit (<overlapLimit>
) for overlapping HMS <resourceType>
s.
SQLSTATE: none assigned
Delta Sharing requests are not supported using resource names
SQLSTATE: none assigned
The provided resource name references entity type <provided>
but expected <expected>
SQLSTATE: none assigned
The provided resource name references a metastore that is not in scope for the current request
SQLSTATE: none assigned
Input path url '<path>
' overlaps with <overlappingLocation>
within '<caller>
' call. <conflictingSummary>
.
SQLSTATE: none assigned
The storage root for Redshift foreign catalog has to be AWS S3.
SQLSTATE: none assigned
Securable with kind <securableKind>
does not support Lakehouse Federation.
SQLSTATE: none assigned
Securable kind '<securableKind>
' is not enabled. If this is a securable kind associated with a preview feature, please enable it in workspace settings.
SQLSTATE: none assigned
Securable with type <securableType>
does not support Lakehouse Federation.
SQLSTATE: none assigned
The source table has more than <columnCount>
columns. Please reduce the number of columns to <columnLimitation>
or fewer.
SQLSTATE: none assigned
Exchanged AAD token lifetime is <lifetime>
which is configured too short. Please check your Azure AD settings to make sure the temporary access token has at least a one-hour lifetime. See https://docs.azure.cn/active-directory/develop/active-directory-configurable-token-lifetimes
SQLSTATE: none assigned
Authorizing <actionName>
is not supported; please check that the RPC invoked is implemented for this resource type
SQLSTATE: none assigned
Cannot create a connection for a builtin hive metastore because user: <userId>
is not the admin of the workspace: <workspaceId>
SQLSTATE: none assigned
Attempt to modify a restricted field in built-in HMS connection '<connectionName>
'. Only 'warehouse_directory' can be updated.
SQLSTATE: none assigned
Failed to rename table column <originalLogicalColumn>
because it's used for partition filtering in <sharedTableName>
. To proceed, you can remove the table from the share, rename the column, and then share it again with the desired partition filtering columns. Note that this may break the streaming query for your recipient.
SQLSTATE: none assigned
Cannot create <securableType>
'<securable>
' under <parentSecurableType>
'<parentSecurable>
' because the request is not from a UC cluster.
SQLSTATE: none assigned
Failed to access cloud storage: <errMsg>
exceptionTraceId=<exceptionTraceId>
SQLSTATE: none assigned
Cannot create a connection with both username/password and oauth authentication options. Please choose one.
SQLSTATE: none assigned
Credential '<credentialName>
' has one or more dependent connections. You may use force option to continue to update or delete the credential, but the connections using this credential may not work anymore.
SQLSTATE: none assigned
The refresh token associated with the connection is expired. Please update the connection to restart the OAuth flow to retrieve a fresh token.
SQLSTATE: none assigned
The connection is in the FAILED
state. Please update the connection with valid credentials to reactivate it.
SQLSTATE: none assigned
There is no refresh token associated with the connection. Please update the OAuth client integration in your identity provider to return refresh tokens, and update or recreate the connection to restart the OAuth flow and retrieve the necessary tokens.
SQLSTATE: none assigned
The OAuth token exchange failed with HTTP status code <httpStatus>
. The returned server response or exception message is: <response>
SQLSTATE: none assigned
Supports for coordinated commits is not enabled. Please contact Databricks support.
SQLSTATE: none assigned
Cannot create <securableType>
'<securableName>
' because it is under a <parentSecurableType>
'<parentSecurableName>
' that is not active. Please delete the parent securable and recreate the parent.
SQLSTATE: none assigned
Failed to parse the provided access connector ID: <accessConnectorId>
. Please verify its formatting and try again.
SQLSTATE: none assigned
Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.
SQLSTATE: none assigned
Registering a credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>
. Please contact your account admin.
SQLSTATE: none assigned
Credential type '<credentialType>
' is not supported for purpose '<credentialPurpose>
'
SQLSTATE: none assigned
Only the account admin can create or update a credential with type <storageCredentialType>
.
SQLSTATE: none assigned
The trust policy of the IAM role to allow Databricks Account to assume the role should require an external id. Please contact your account admin to add the external id condition. This behavior is to guard against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).
SQLSTATE: none assigned
Service principals cannot use the CREATE_STORAGE_CREDENTIAL
privilege to register managed identities. To register a managed identity, please assign the service principal the account admin role.
SQLSTATE: none assigned
Creating or updating a credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.
SQLSTATE: none assigned
Unable to parse Iceberg table version from metadata location <metadataLocation>
.
SQLSTATE: none assigned
A concurrent update to the same iceberg metadata version was detected.
SQLSTATE: none assigned
The committed metadata location <metadataLocation>
is invalid. It is not a subdirectory of the table's root directory <tableRoot>
.
SQLSTATE: none assigned
The provided delta iceberg format conversion information is missing required fields.
SQLSTATE: none assigned
Setting delta iceberg format information on create is unsupported.
SQLSTATE: none assigned
The provided delta iceberg format conversion information is too large.
SQLSTATE: none assigned
Uniform metadata can only be updated on Delta tables with uniform enabled.
SQLSTATE: none assigned
<resourceType>
'<ref>
' depth exceeds limit (or has a circular reference).
SQLSTATE: none assigned
<resourceType>
'<ref>
' is invalid because one of the underlying resources does not exist. <cause>
SQLSTATE: none assigned
<resourceType>
'<ref>
' does not have sufficient privilege to execute because the owner of one of the underlying resources failed an authorization check. <cause>
SQLSTATE: none assigned
A connection: '<connectionName>
' with the URL '<url>
' already exists.
SQLSTATE: none assigned
Attempted to create a Fabric catalog with url '<storageLocation>
' that matches an existing catalog, which is not allowed.
SQLSTATE: none assigned
Tag assignment with tag key <tagKey>
already exists
SQLSTATE: none assigned
Entity <securableType> <entityId>
does not have a corresponding online cluster.
SQLSTATE: none assigned
There are more than <maxFileResults>
files. Please specify [max_results] to limit the number of files returned.
SQLSTATE: none assigned
Cannot <opName> <extLoc> <reason>
. <suggestion>
.
SQLSTATE: none assigned
<featureName>
is currently disabled in UC.
SQLSTATE: none assigned
Creation of a foreign catalog for connection type '<connectionType>
' is not supported. This connection type can only be used to create managed ingestion pipelines. Please reference Databricks documentation for more information.
SQLSTATE: none assigned
Only READ credentials can be retrieved for foreign tables.
SQLSTATE: none assigned
Foreign key <constraintName>
child columns and parent columns are of different sizes.
SQLSTATE: none assigned
The foreign key parent columns do not match the referenced primary key child columns. Foreign key parent columns are (<parentColumns>
) and primary key child columns are (<primaryKeyChildColumns>
).
SQLSTATE: none assigned
The foreign key child column type does not match the parent column type. Foreign key child column <childColumnName>
has type <childColumnType>
and parent column <parentColumnName>
has type <parentColumnType>
.
SQLSTATE: none assigned
Access denied. Cause: service account private key is invalid.
SQLSTATE: none assigned
Google Service Account OAuth Private Key has to be a valid JSON object with required fields. Please make sure to provide the full JSON file generated from the 'KEYS' section of the service account details page.
SQLSTATE: none assigned
Google Service Account OAuth Private Key has to be a valid JSON object with required fields. Please make sure to provide the full JSON file generated from the 'KEYS' section of the service account details page. Missing fields are <missingFields>
SQLSTATE: none assigned
The IAM role for this storage credential was found to be non self-assuming. Please check your role's trust and IAM policies to ensure that your IAM role can assume itself according to the Unity Catalog storage credential documentation.
SQLSTATE: none assigned
Cannot commit <tableName>
: metadata location <baseMetadataLocation>
has changed from <catalogMetadataLocation>
.
SQLSTATE: none assigned
Cannot perform Managed Iceberg commit to a non Managed Iceberg table: <tableName>
.
SQLSTATE: none assigned
The provided Managed Iceberg commit information is missing required fields.
SQLSTATE: none assigned
The <type> <name>
does not have ID <wrongId>
. Please retry the operation.
SQLSTATE: none assigned
Invalid access of <securableType> <securableName>
in the federated catalog <catalogName>
. <reason>
SQLSTATE: none assigned
Invalid Cloudflare account ID.
SQLSTATE: none assigned
Invalid credential cloud provider '<cloud>
'. Allowed cloud provider '<allowedCloud>
'.
SQLSTATE: none assigned
Invalid value '<value>
' for credential's 'purpose'. Allowed values '<allowedValues>
'.
SQLSTATE: none assigned
Cannot update a connection from <startingCredentialType>
to <endingCredentialType>
. The only valid transition is from a username/password based connection to an OAuth token based connection.
SQLSTATE: none assigned
Invalid cron string. Found: '<cronString>
' with parse exception: '<message>
'
SQLSTATE: none assigned
Invalid direct access managed table <tableName>
. Make sure source table & pipeline definition are not defined.
SQLSTATE: none assigned
Unexpected empty storage location for <securableType>
'<securableName>
' in catalog '<catalogName>
'. In order to fix this error, please run DESCRIBE SCHEMA <catalogName>
.<securableName>
and refresh this page.
SQLSTATE: none assigned
Invalid options provided for update. Invalid options: <invalidOptions>
. Allowed options: <allowedOptions>
.
SQLSTATE: none assigned
Invalid value '<value>
' for '<option>
'. Allowed values '<allowedValues>
'.
SQLSTATE: none assigned
'<option>
' cannot be empty. Please enter a non-empty value.
SQLSTATE: none assigned
Invalid R2 access key ID.
SQLSTATE: none assigned
Invalid R2 secret access key.
SQLSTATE: none assigned
Invalid condition in rule '<ruleName>
'. Compilation error with message '<message>
'.
SQLSTATE: none assigned
Cannot update <securableType>
'<securableName>
' as it's owned by an internal group. Please contact Databricks support for additional details.
SQLSTATE: none assigned
Provided Storage Credential <storageCredentialName>
is not associated with DBFS Root, creation of wasbs External Location is prohibited.
SQLSTATE: none assigned
Storage location has invalid URI scheme: <scheme>
.
SQLSTATE: none assigned
The response from the token server was missing the field <missingField>
. The returned server response is: <response>
SQLSTATE: none assigned
'<metastoreAssignmentStatus>
' cannot be assigned. Only MANUALLY_ASSIGNABLE
and AUTO_ASSIGNMENT_ENABLED
are supported.
SQLSTATE: none assigned
Metastore certification is not enabled.
SQLSTATE: none assigned
Failed to retrieve a metastore to database shard mapping for Metastore ID <metastoreId>
due to an internal error. Please contact Databricks support.
SQLSTATE: none assigned
The metastore <metastoreId>
has <numberManagedOnlineCatalogs>
managed online catalog(s). Please explicitly delete them, then retry the metastore deletion.
SQLSTATE: none assigned
Metastore root credential cannot be defined when updating the metastore root location. The credential will be fetched from the metastore parent external location.
SQLSTATE: none assigned
Deletion of metastore storage root location failed. <reason>
SQLSTATE: none assigned
The root <securableType>
for a metastore cannot be read-only.
SQLSTATE: none assigned
Metastore storage root cannot be updated once it is set.
SQLSTATE: none assigned
Cannot generate temporary '<opName>
' credentials for model version <modelVersion>
with status <modelVersionStatus>
. '<opName>
' credentials can only be generated for model versions with status <validStatus>
SQLSTATE: none assigned
Attempted to access org ID (or workspace ID), but context has none.
SQLSTATE: none assigned
The <rpcName>
request updates <fieldName>
. Use the online store compute tab to modify anything other than comment, owner and isolationMode of an online catalog.
SQLSTATE: none assigned
Cannot create more than <quota>
online stores in the metastore and there is already <currentCount>
. You may not have access to any existing online stores. Contact your metastore admin to be granted access or for further instructions.
SQLSTATE: none assigned
online index catalogs must be <action>
via the /vector-search API.
SQLSTATE: none assigned
The <rpcName>
request updates <fieldName>
. Use the /vector-search API to modify anything other than comment, owner and isolationMode of an online index catalog.
SQLSTATE: none assigned
Cannot create more than <quota>
online index catalogs in the metastore and there is already <currentCount>
. You may not have access to any existing online index catalogs. Contact your metastore admin to be granted access or for further instructions.
SQLSTATE: none assigned
online indexes must be <action>
via the /vector-search API.
SQLSTATE: none assigned
online stores must be <action>
via the online store compute tab.
SQLSTATE: none assigned
The source table column name <columnName>
is too long. The maximum length is <maxLength>
characters.
SQLSTATE: none assigned
Column <columnName>
cannot be used as a primary key column of the online table because it is not part of the existing PRIMARY KEY
constraint of the source table. For details, please see <docLink>
SQLSTATE: none assigned
Column <columnName>
cannot be used as a timeseries key of the online table because it is not a timeseries column of the existing PRIMARY KEY
constraint of the source table. For details, please see <docLink>
SQLSTATE: none assigned
Cannot create more than <quota>
online table(s) per source table.
SQLSTATE: none assigned
Accessing resource <resourceName>
requires use of a Serverless SQL warehouse. Please ensure the warehouse being used to execute a query or view a database catalog in the UI is serverless. For details, please see <docLink>
SQLSTATE: none assigned
Cannot create more than <quota>
continuous online views in the online store, and there is already <currentCount>
. You may not have access to any existing online views. Contact your online store admin to be granted access or for further instructions.
SQLSTATE: none assigned
<tableKind>
cannot be created under a storage location with Databricks Managed Keys. Please choose a different schema/catalog in a storage location without Databricks Managed Keys encryption.
SQLSTATE: none assigned
Invalid catalog <catalogName>
with kind <catalogKind>
to create <tableKind>
within. <tableKind>
can only be created under catalogs of kinds: <validCatalogKinds>
.
SQLSTATE: none assigned
Invalid schema <schemaName>
with kind <schemaKind>
to create <tableKind>
within. <tableKind>
can only be created under schemas of kinds: <validSchemaKinds>
.
SQLSTATE: none assigned
Column <columnName>
of type <columnType>
cannot be used as a TTL time column. Allowed types are <supportedTypes>
.
SQLSTATE: none assigned
Authorized Path Error. The <securableType>
location <location>
is not defined within the authorized paths for catalog: <catalogName>
.
SQLSTATE: none assigned
The 'authorized_paths' option contains overlapping paths: <overlappingPaths>
. Ensure each path is unique and does not intersect with others in the list.
SQLSTATE: none assigned
The query argument '<arg>
' is set to '<received>
' which is different to the value used in the first pagination call ('<expected>
')
SQLSTATE: none assigned
Too many requests to database from metastore <metastoreId>
. Please try again later.
SQLSTATE: none assigned
Cannot create the primary key <constraintName>
because its child column(s) <childColumnNames>
is nullable. Please change the column nullability and retry.
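A minimal sketch of the fix described above, with hypothetical table and column names: make the child column non-nullable, then declare the primary key.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Make the key column non-nullable first.
spark.sql("ALTER TABLE main.default.customers ALTER COLUMN customer_id SET NOT NULL")

# Then add the primary key constraint.
spark.sql(
    "ALTER TABLE main.default.customers ADD CONSTRAINT customers_pk PRIMARY KEY (customer_id)")
```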
SQLSTATE: none assigned
This operation took too long.
SQLSTATE: none assigned
Root storage S3 bucket name containing dots is not supported by Unity Catalog: <uri>
SQLSTATE: none assigned
Unexpected empty storage location for schema '<schemaName>
' in catalog '<catalogName>
'. Please make sure the schema uses a path scheme of <validPathSchemesListStr>
.
SQLSTATE: none assigned
We're experiencing a temporary issue while processing your request. Please try again in a few moments. If the problem persists, please reach out to support.
SQLSTATE: none assigned
Failed to parse the provided access connector ID: <accessConnectorId>
. Please verify its formatting and try again.
SQLSTATE: none assigned
Cannot create a storage credential for DBFS root because user: <userId>
is not the admin of the workspace: <workspaceId>
SQLSTATE: none assigned
Location <location>
is not inside the DBFS root <dbfsRootLocation>
SQLSTATE: none assigned
DBFS root storage credential is not yet supported for workspaces with Firewall-enabled DBFS
SQLSTATE: none assigned
DBFS root storage credential for current workspace is not yet supported
SQLSTATE: none assigned
DBFS root is not enabled for workspace <workspaceId>
SQLSTATE: none assigned
Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.
SQLSTATE: none assigned
Registering a storage credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>
. Please contact your account admin.
SQLSTATE: none assigned
Only the account admin can create or update a storage credential with type <storageCredentialType>
.
SQLSTATE: none assigned
Missing validation token for service principal. Please provide a valid ARM-scoped Entra ID token in the 'X-Databricks-Azure-SP-Management-Token' request header and retry. For details, check https://docs.databricks.com/api/workspace/storagecredentials
SQLSTATE: none assigned
The trust policy of the IAM role to allow Databricks Account to assume the role should require an external id. Please contact your account admin to add the external id condition. This behavior is to guard against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).
SQLSTATE: none assigned
Service principals cannot use the CREATE_STORAGE_CREDENTIAL
privilege to register managed identities. To register a managed identity, please assign the service principal the account admin role.
SQLSTATE: none assigned
Location <location>
is not inside the DBFS root, so we cannot create a storage credential <storageCredentialName>
SQLSTATE: none assigned
Creating or updating a storage credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.
SQLSTATE: none assigned
Cannot grant privileges on <securableType>
to system generated group <principal>
.
SQLSTATE: none assigned
Tag assignment with tag key <tagKey>
does not exist
SQLSTATE: none assigned
Invalid base path provided, base path should be something like /api/resources/v1. Unsupported path: <path>
SQLSTATE: none assigned
Invalid host name provided, host name should be something like https://www.databricks.com without a path suffix. Unsupported host: <host>
SQLSTATE: none assigned
Only basic Latin/Latin-1 ASCII
chars are supported in external location/volume/table paths. Unsupported path: <path>
SQLSTATE: none assigned
Cannot update <securableType>
'<securableName>
' because it is being provisioned.
SQLSTATE: none assigned
The <type> <name>
has been modified by another request. Please retry the operation.
SQLSTATE: none assigned
Request to perform commit/getCommits for table '<tableId>
' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.
SQLSTATE: none assigned
Request to create staging table '<tableFullName>
' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.
SQLSTATE: none assigned
Request to create non-external table '<tableFullName>
' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.
SQLSTATE: none assigned
Request to generate access credential for path '<path>
' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.
SQLSTATE: none assigned
Request to generate access credential for table '<tableId>
' from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.
SQLSTATE: none assigned
Request to get foreign credentials for securables from outside of Databricks Unity Catalog enabled compute environment is denied for security.
SQLSTATE: none assigned
Request to update metadata snapshots from outside of Databricks Unity Catalog enabled compute environment is denied for security.
SQLSTATE: none assigned
Invalid request to get write credentials for managed online table in an online catalog.
SQLSTATE: none assigned
<api_name> API is not enabled
SQLSTATE: none assigned
Requested method of Files API is not supported for cloud paths
SQLSTATE: none assigned
Access to the storage bucket is denied by AWS.
SQLSTATE: none assigned
All access to the storage bucket has been disabled in AWS.
SQLSTATE: none assigned
The storage bucket does not exist in AWS.
SQLSTATE: none assigned
Access to the storage bucket is forbidden by AWS.
SQLSTATE: none assigned
The workspace is misconfigured: it must be in the same region as the AWS workspace root storage bucket.
SQLSTATE: none assigned
The storage bucket name is invalid.
SQLSTATE: none assigned
The configured KMS keys to access the storage bucket are disabled in AWS.
SQLSTATE: none assigned
Access to AWS resource is unauthorized.
SQLSTATE: none assigned
The storage account is disabled in Azure.
SQLSTATE: none assigned
The Azure container does not exist.
SQLSTATE: none assigned
Access to the storage container is forbidden by Azure.
SQLSTATE: none assigned
Azure responded that there is currently a lease on the resource. Try again later.
SQLSTATE: none assigned
The account being accessed does not have sufficient permissions to execute this operation.
SQLSTATE: none assigned
Cannot access storage account in Azure: invalid storage account name.
SQLSTATE: none assigned
The key vault is not found in Azure. Check your customer-managed keys settings.
SQLSTATE: none assigned
The Azure key vault key is not found in Azure. Check your customer-managed keys settings.
SQLSTATE: none assigned
The key vault is not found in Azure. Check your customer-managed keys settings.
SQLSTATE: none assigned
Azure Managed Identity Credential with Access Connector not found. This could be because IP access controls rejected your request.
SQLSTATE: none assigned
The requested path is not valid for Azure.
SQLSTATE: none assigned
The requested path is immutable.
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<message>
SQLSTATE: none assigned
the ':' character is not supported in paths
SQLSTATE: none assigned
Databricks Control plane network zone not allowed.
SQLSTATE: none assigned
A body was provided but directories cannot have a file body
SQLSTATE: none assigned
The directory is not empty. This operation is not supported on non-empty directories.
SQLSTATE: none assigned
The directory being accessed is not found.
SQLSTATE: none assigned
The request contained multiple copies of a header that is only allowed once.
SQLSTATE: none assigned
Query parameter '<parameter_name>' must be present exactly once but was provided multiple times.
SQLSTATE: none assigned
The DBFS bucket name is empty.
SQLSTATE: none assigned
expiration time must be present
SQLSTATE: none assigned
ExpireTime must be in the future
SQLSTATE: none assigned
Requested TTL is longer than supported (1 hour)
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
The file being created already exists.
SQLSTATE: none assigned
The file being accessed is not found.
SQLSTATE: none assigned
Files or directories ending in the '.' character are not supported.
SQLSTATE: none assigned
File size shouldn't exceed <max_download_size_in_bytes> bytes, but <size_in_bytes> bytes were found.
SQLSTATE: none assigned
Access to the storage bucket has been disabled in GCP.
SQLSTATE: none assigned
The storage bucket does not exist in GCP.
SQLSTATE: none assigned
Access to the bucket is forbidden by GCP.
SQLSTATE: none assigned
The customer-managed encryption key configured for that location is either disabled or destroyed.
SQLSTATE: none assigned
The GCP requests to the bucket are prohibited by policy, check the VPC service controls.
SQLSTATE: none assigned
Cloud provider host is temporarily not available; please try again later.
SQLSTATE: none assigned
The provided page token is not valid.
SQLSTATE: none assigned
invalid page token
SQLSTATE: none assigned
Invalid path: <validation_error>
SQLSTATE: none assigned
The range header is invalid.
SQLSTATE: none assigned
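The range-header error above (and the not-satisfiable variant listed later) can surface when fetching file contents over HTTP. The sketch below uses the Files API download endpoint with a standard Range header; it is illustrative only, the workspace host and volume path are placeholders, and it assumes the endpoint validates Range headers as these errors suggest.

```python
import os
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder workspace URL
path = "/Volumes/main/default/my_volume/big_file.bin"         # placeholder volume path
token = os.environ["DATABRICKS_TOKEN"]

# Request only the first 1 MiB. A malformed Range header yields
# "The range header is invalid"; asking for bytes past the end of the
# file yields "The range requested is not satisfiable".
resp = requests.get(
    f"{host}/api/2.0/fs/files{path}",
    headers={
        "Authorization": f"Bearer {token}",
        "Range": "bytes=0-1048575",
    },
)
resp.raise_for_status()
print(len(resp.content))
```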
<unity_catalog_error_message>
SQLSTATE: none assigned
Invalid session token
SQLSTATE: none assigned
Invalid session token type. Expected '<expected>' but got '<actual>'.
SQLSTATE: none assigned
The timestamp is invalid.
SQLSTATE: none assigned
Invalid upload type. Expected '<expected>' but got '<actual>'.
SQLSTATE: none assigned
The URL passed as a parameter is invalid.
SQLSTATE: none assigned
Query parameter 'overwrite' must be one of: true, false, but was: <got_values>.
SQLSTATE: none assigned
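As a hedged sketch of where this parameter appears: the Files API upload endpoint takes an overwrite query parameter that must be literally 'true' or 'false'. The workspace URL and volume path below are placeholders.

```python
import os
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder workspace URL
path = "/Volumes/main/default/my_volume/hello.txt"            # placeholder volume path
token = os.environ["DATABRICKS_TOKEN"]

# PUT the raw file bytes; 'overwrite' must be exactly 'true' or 'false',
# and supplying the parameter more than once triggers the duplicate-parameter error.
resp = requests.put(
    f"{host}/api/2.0/fs/files{path}",
    params={"overwrite": "true"},
    headers={"Authorization": f"Bearer {token}"},
    data=b"hello, world\n",
)
resp.raise_for_status()
```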
Query parameter '<parameter_name>' must be one of: <expected>, but was: <actual>.
SQLSTATE: none assigned
Malformed request body
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
Requested method of Files API is not supported for Jobs Background Compute Artifact Storage.
SQLSTATE: none assigned
The content-length header is required in the request.
SQLSTATE: none assigned
Query parameter '<parameter_name>' is required but is missing from the request.
SQLSTATE: none assigned
The request is missing a required parameter.
SQLSTATE: none assigned
Model version is not ready yet
SQLSTATE: none assigned
Files API for <place> is not enabled for this workspace/account.
SQLSTATE: none assigned
Requested method of Files API is not supported for Internal Workspace Storage
SQLSTATE: none assigned
operation must be present
SQLSTATE: none assigned
page_size must be greater than or equal to 0.
SQLSTATE: none assigned
Paths ending in the '/' character represent directories. This API does not support operations on directories.
SQLSTATE: none assigned
The given path points to an existing directory. This API does not support operations on directories.
SQLSTATE: none assigned
The given path points to an existing file. This API does not support operations on files.
SQLSTATE: none assigned
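The directory-versus-file errors above arise because files and directories are addressed through separate Files API endpoints. A minimal sketch of the distinction follows; the host and paths are placeholders.

```python
import os
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder workspace URL
token = os.environ["DATABRICKS_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}

file_path = "/Volumes/main/default/my_volume/report.csv"      # placeholder file path
dir_path = "/Volumes/main/default/my_volume/reports"          # placeholder directory path

# File operations go to /api/2.0/fs/files/<path>; pointing this endpoint at a
# directory (or at a path ending in '/') produces the errors listed above.
file_meta = requests.head(f"{host}/api/2.0/fs/files{file_path}", headers=headers)

# Directory operations go to /api/2.0/fs/directories/<path>; pointing it at an
# existing file fails for the same reason in the other direction.
dir_listing = requests.get(f"{host}/api/2.0/fs/directories{dir_path}", headers=headers)

print(file_meta.status_code, dir_listing.status_code)
```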
The given path was not a valid UTF-8-encoded URL.
SQLSTATE: none assigned
The given path is not enabled for data plane proxy.
SQLSTATE: none assigned
path must be present
SQLSTATE: none assigned
<rejection_message>
SQLSTATE: none assigned
Provided file path is too long.
SQLSTATE: none assigned
The request failed due to a precondition.
SQLSTATE: none assigned
The Files API for presigned URLs for models is not supported at the moment.
SQLSTATE: none assigned
R2 is unsupported at the moment.
SQLSTATE: none assigned
The range requested is not satisfiable.
SQLSTATE: none assigned
Recursively listing files is not supported.
SQLSTATE: none assigned
The request was routed incorrectly.
SQLSTATE: none assigned
Request must include account information
SQLSTATE: none assigned
Request must include user information
SQLSTATE: none assigned
Request must include workspace information
SQLSTATE: none assigned
Resource is read-only.
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
The URL can't be accessed.
SQLSTATE: none assigned
The signature verification failed.
SQLSTATE: none assigned
Storage configuration for this workspace is not accessible.
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
Files API is not supported for <table_type>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<message>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
<unity_catalog_error_message>
SQLSTATE: none assigned
Unexpected error when parsing the URI
SQLSTATE: none assigned
Unexpected query parameters: <unexpected_query_parameters>
SQLSTATE: none assigned
Unknown method <method>
SQLSTATE: none assigned
Unknown server error.
SQLSTATE: none assigned
The URL host is unknown.
SQLSTATE: none assigned
The provided path is not supported by the Files API. Make sure the provided path does not contain instances of '../' or './' sequences. Make sure the provided path does not use multiple consecutive slashes (e.g. '///').
SQLSTATE: none assigned
Presigned URL generation is not enabled for <cloud>.
SQLSTATE: none assigned
Files API is not supported for <volume_type>.
SQLSTATE: none assigned
The workspace has been canceled.
SQLSTATE: none assigned
Storage configuration for this workspace is not accessible.
SQLSTATE: none assigned
Query on table <tableFullName> with row filter or column mask assigned through policy rules isn't supported on assigned clusters.
SQLSTATE: none assigned
Azure Entra (aka Azure Active Directory) credentials missing.
Ensure you are either logged in with your Entra account or have set up an Azure DevOps personal access token (PAT) in User Settings > Git Integration.
If you are not using a PAT and are using Azure DevOps with the Repos API, you must use an Azure Entra access token.
SQLSTATE: none assigned
Encountered an error with your Azure Entra (Azure Active Directory) credentials. Please try logging out of Entra (https://portal.azure.cn) and logging back in.
Alternatively, you may also visit User Settings > Git Integration to set up an Azure DevOps personal access token.
SQLSTATE: none assigned
Encountered an error with your Azure Active Directory credentials. Please try logging out of Azure Active Directory (https://portal.azure.cn) and logging back in.
SQLSTATE: none assigned
Credential generation for clean room delta sharing securable cannot be requested.
SQLSTATE: none assigned
Securable <securableName> with type <securableType> and kind <securableKind> is clean room system managed; the user does not have access.
SQLSTATE: none assigned
Constraint with name <constraintName> already exists; choose a different name.
SQLSTATE: none assigned
Constraint <constraintName> does not exist.
SQLSTATE: none assigned
Could not read remote repository (<repoUrl>).
Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.
Please ensure that:
- Your remote Git repo URL is valid.
- Your personal access token or app password has the correct repo access.
Error from Git: <gitErrorMessage>
SQLSTATE: none assigned
Could not resolve host for <repoUrl>.
SQLSTATE: none assigned
Parameter beginning_of_time cannot be true.
SQLSTATE: none assigned
Requested objects could not be found for the continuation token.
SQLSTATE: none assigned
Both 'beginning_of_time=true' and a 'continuation_token' were provided. When 'beginning_of_time' is set to 'true', 'continuation_token' should not be provided.
SQLSTATE: none assigned
Continuation token invalid. Cause: <msg>
SQLSTATE: none assigned
Invalid value <value> for parameter max_objects; expected a value in [<minValue>, <maxValue>].
SQLSTATE: none assigned
Invalid URI format. Expected a volume (e.g. "/Volumes/catalog/schema/volume") or a cloud storage path (e.g. "s3://some-uri").
SQLSTATE: none assigned
Failed to list objects. There are problems with the location that need to be resolved. Details: <msg>
SQLSTATE: none assigned
No location found for URI <path>
SQLSTATE: none assigned
Unable to determine a metastore for the request.
SQLSTATE: none assigned
Service is disabled
SQLSTATE: none assigned
Unity Catalog entity not found. Ensure that the catalog, schema, volume and/or external location exists.
SQLSTATE: none assigned
Unity Catalog external location does not exist.
SQLSTATE: none assigned
URI overlaps with other volumes
SQLSTATE: none assigned
Unable to determine a metastore for the request. Metastore does not exist
SQLSTATE: none assigned
Permission denied
SQLSTATE: none assigned
Unity Catalog table does not exist.
SQLSTATE: none assigned
Unity Catalog volume does not exist.
SQLSTATE: none assigned
Must provide a URI.
SQLSTATE: none assigned
Provided URI is too long. Maximum permitted length is <maxLength>.
SQLSTATE: none assigned
Databricks Default Storage cannot be accessed using Classic Compute. Please use Serverless compute to access data in Default Storage.
SQLSTATE: none assigned
Operation failed because linked GitHub app credentials could not be refreshed.
Please try again or go to User Settings > Git Integration and try relinking your Git provider account.
If the problem persists, please file a support ticket.
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
- An admin of the repository must go to https://github.com/apps/databricks/installations/new and install the Databricks GitHub app on the repository.
Alternatively, a GitHub account owner can install the app on the account to give access to the account's repositories.
- If the app is already installed, have an admin ensure that if they are using scoped access with the 'Only select repositories' option, they have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
SQLSTATE: none assigned
Linked GitHub app credentials expired after 6 months of inactivity.
Go to User Settings > Git Integration and try relinking your credentials.
If the problem persists, please file a support ticket.
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
- GitHub user <gitCredentialUsername> should go to https://github.com/apps/databricks/installations/new and install the app on the account <gitCredentialUsername> to allow access.
- If user <gitCredentialUsername> already installed the app and they are using scoped access with the 'Only select repositories' option, they should ensure they have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
- An owner of the GitHub organization <organizationName> should go to https://github.com/apps/databricks/installations/new and install the app on the organization <organizationName> to allow access.
- If the app is already installed on GitHub organization <organizationName>, have an owner of this organization ensure that if using scoped access with the 'Only select repositories' option, they have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
- Go to https://github.com/apps/databricks/installations/new and install the app on your account <gitCredentialUsername> to allow access.
- If the app is already installed, and you are using scoped access with the 'Only select repositories' option, ensure that you have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
SQLSTATE: none assigned
Invalid Git provider credentials for repository URL <repoUrl>.
Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.
Go to User Settings > Git Integration to view your credential.
Please go to your remote Git provider to ensure that:
- You have entered the correct Git user email or username with your Git provider credentials.
- Your token or app password has the correct repo access.
- Your token has not expired.
- If you have SSO enabled with your Git provider, be sure to authorize your token.
SQLSTATE: none assigned
Invalid Git provider Personal Access Token credentials for repository URL <repoUrl>.
Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.
Go to User Settings > Git Integration to view your credential.
Please go to your remote Git provider to ensure that:
- You have entered the correct Git user email or username with your Git provider credentials.
- Your token or app password has the correct repo access.
- Your token has not expired.
- If you have SSO enabled with your Git provider, be sure to authorize your token.
SQLSTATE: none assigned
No Git credential configured, but a credential is required for this repository (<repoUrl>).
Go to User Settings > Git Integration to set up your Git credentials.
SQLSTATE: none assigned
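Besides the User Settings UI, a Git credential can be registered through the Git credentials REST API. The sketch below is illustrative only; the workspace URL, username, and provider value are placeholders, and the personal access token is read from an environment variable.

```python
import os
import requests

host = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder workspace URL
token = os.environ["DATABRICKS_TOKEN"]

# Register a personal access token for a Git provider. 'gitHub' and the
# username below are placeholders; substitute your provider and account.
resp = requests.post(
    f"{host}/api/2.0/git-credentials",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "git_provider": "gitHub",
        "git_username": "my-git-username",
        "personal_access_token": os.environ["GIT_PROVIDER_PAT"],
    },
)
resp.raise_for_status()
```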
Write access to <gitCredentialProvider> repository (<repoUrl>) not granted.
Make sure you (<gitCredentialUsername>) have write access to this remote repository.
SQLSTATE: none assigned
Incorrect Git credential provider for repository.
Your current Git credential's provider (<gitCredentialProvider>) does not match that of the repository's Git provider <repoUrl>.
Try a different repository or go to User Settings > Git Integration to update your Git credentials.
SQLSTATE: none assigned
The Azure storage account does not have hierarchical namespace enabled.
SQLSTATE: none assigned
<rpcName> <fieldName> too long. Maximum length is <maxLength> characters.
SQLSTATE: none assigned
<msg>
For more details see INVALID_PARAMETER_VALUE
SQLSTATE: none assigned
Task Framework: Task Run Output for Task with runId <runId> and orgId <orgId> could not be found.
SQLSTATE: none assigned
Task Framework: Task Run State for Task with runId <runId> and orgId <orgId> could not be found.
SQLSTATE: none assigned
RPC ClientConfig for Task with ID <taskId> does not exist.
SQLSTATE: none assigned
Task with ID <taskId> does not exist.
SQLSTATE: none assigned
Task Registry: Unsupported or unknown JobTask with class <taskClassName>.
SQLSTATE: none assigned
Path-based access to external shallow clone table <tableFullName> is not supported. Please use table names to access the shallow clone instead.
SQLSTATE: none assigned
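As a minimal sketch of the recommended alternative, read the shallow clone through its Unity Catalog name instead of its storage path; the table name below is a placeholder, and spark refers to the SparkSession available in a Databricks notebook.

```python
# Read through the table name rather than the storage location.
df = spark.read.table("main.default.my_shallow_clone")   # placeholder table name
df.show()

# Path-based reads of the clone's storage location, e.g.
# spark.read.format("delta").load("abfss://..."), are what trigger the error above.
```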
Fabric table located at URL '<url>' is not found. Please use the REFRESH FOREIGN CATALOG command to populate Fabric tables.
SQLSTATE: none assigned
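A minimal sketch of the suggested fix, run from a notebook; the catalog name is a placeholder.

```python
# Refresh the federated catalog so newly created Fabric tables become visible.
spark.sql("REFRESH FOREIGN CATALOG my_fabric_catalog")   # placeholder catalog name
```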
Path-based access to table <tableFullName> with row filter or column mask is not supported.
SQLSTATE: none assigned
User does not have <msg> on <resourceType> '<resourceName>'.
SQLSTATE: none assigned
Unable to parse delete object request: <invalidInputMsg>
SQLSTATE: none assigned
Unable to delete object <resourceName> that is not in trash.
SQLSTATE: none assigned
Could not find resource <resourceId>, or you do not have permission to access it.
SQLSTATE: none assigned
Unable to find the resource from query ID <queryId>.
SQLSTATE: none assigned
Unable to create new query snippet
SQLSTATE: none assigned
The quota for the number of query snippets has been reached. The current quota is <quota>.
SQLSTATE: none assigned
The specified trigger <trigger> is already in use by another query snippet in this workspace.
SQLSTATE: none assigned
The requested resource <resourceName> does not exist.
SQLSTATE: none assigned
Unable to parse delete object request: <invalidInputMsg>
SQLSTATE: none assigned
Unable to restore object <resourceName> that is not in trash.
SQLSTATE: none assigned
Unable to trash already-trashed object <resourceName>
SQLSTATE: none assigned
Could not generate resource name from id <id>
SQLSTATE: none assigned
Unable to create new visualization
SQLSTATE: none assigned
Could not find visualization <visualizationId>
SQLSTATE: none assigned
The quota for the number of visualizations on query <query_id> has been reached. The current quota is <quota>.
SQLSTATE: none assigned
Remote repository (<repoUrl>) not found.
Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.
Please ensure that:
- Your remote Git repo URL is valid.
- Your personal access token or app password has the correct repo access.
SQLSTATE: none assigned
<resourceType> '<resourceIdentifier>' already exists.
SQLSTATE: none assigned
<resourceType> '<resourceIdentifier>' does not exist.
SQLSTATE: none assigned
Query on table <tableFullName> with row filter or column mask is not supported on assigned clusters.
SQLSTATE: none assigned
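Conditions like this one can be handled programmatically rather than by parsing the message text. A hedged sketch follows; recent PySpark versions expose the error class and message parameters on the exception, and the table name is a placeholder.

```python
from pyspark.errors import PySparkException

try:
    spark.table("main.default.masked_table").collect()   # placeholder table name
except PySparkException as e:
    # Branch on the error class instead of parsing the human-readable message.
    print(e.getErrorClass())
    print(e.getMessageParameters())
```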
Table <tableFullName> is being shared with Delta Sharing, and cannot use row/column security.
SQLSTATE: none assigned
The <serviceName> service is temporarily under maintenance. Please try again later.
SQLSTATE: none assigned
Table <tableFullName> cannot have both row/column security and online materialized views.
SQLSTATE: none assigned
Too many rows to update, aborting update.