---
layout: Conceptual
title: array_except - Azure Databricks | Azure Docs
canonicalUrl: https://docs.azure.cn/en-us/databricks/pyspark/reference/functions/array_except
breadcrumb_path: /bread/toc.json
uhfHeaderId: mooncake
recommendations: false
author: rockboyfor
ms.author: v-edwardchen
ms.service: azure-databricks
ms.topic: reference
ms.reviewer: jasonh
ms.custom: databricksmigration
origin.date: 2026-01-26T00:00:00.0000000Z
ms.date: 2026-02-09T00:00:00.0000000Z
description: Learn how to use the array_except function with PySpark
locale: en-us
document_id: be8fcb02-cf23-9d59-dd54-a6446994c4e3
document_version_independent_id: d4bffc33-0bd3-a96a-6df2-7ac0addf9bb8
updated_at: 2026-02-28T09:47:00.0000000Z
original_content_git_url: https://github.com/MicrosoftDocs/mc-docs-pr/blob/live/articles/databricks/pyspark/reference/functions/array_except.md
gitcommit: https://github.com/MicrosoftDocs/mc-docs-pr/blob/cb69c45676bbfeaf08a5792059754038cd57a299/articles/databricks/pyspark/reference/functions/array_except.md
git_commit_id: cb69c45676bbfeaf08a5792059754038cd57a299
site_name: DocsAzureCN
depot_name: Azure.mooncake-docs
page_type: conceptual
toc_rel: ../../../toc.json
feedback_system: None
feedback_product_url: ''
feedback_help_link_type: ''
feedback_help_link_url: ''
word_count: 239
asset_id: databricks/pyspark/reference/functions/array_except
moniker_range_name: 
monikers: []
item_type: Content
source_path: articles/databricks/pyspark/reference/functions/array_except.md
cmProducts:
- https://authoring-docs-microsoft.poolparty.biz/devrel/545d40c6-c50c-444b-b422-1c707eeab28e
- https://authoring-docs-microsoft.poolparty.biz/devrel/540ac133-a371-4dbb-8f94-28d6cc77a70b
- https://authoring-docs-microsoft.poolparty.biz/devrel/68ec7f3a-2bc6-459f-b959-19beb729907d
spProducts:
- https://authoring-docs-microsoft.poolparty.biz/devrel/b908d601-32e8-445a-b044-a507b5d1689e
- https://authoring-docs-microsoft.poolparty.biz/devrel/60bfc045-f127-4841-9d00-ea35495a5800
- https://authoring-docs-microsoft.poolparty.biz/devrel/90370425-aca4-4a39-9533-d52e5e002a5d
platformId: b6724377-f455-6adf-196a-44f624ced6d1
---

# array_except - Azure Databricks | Azure Docs

Returns a new array containing the elements present in `col1` but not in `col2`, without duplicates.

## Syntax

```python
from pyspark.sql import functions as sf

sf.array_except(col1, col2)
```

## Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `col1` | `pyspark.sql.Column` or str | The column containing the first array, or the name of that column. |
| `col2` | `pyspark.sql.Column` or str | The column containing the second array, or the name of that column. |

## Returns

`pyspark.sql.Column`: A new array containing the elements present in `col1` but not in `col2`, without duplicates.

## Examples

**Example 1**: Basic usage

```python
from pyspark.sql import Row, functions as sf
df = spark.createDataFrame([Row(c1=["b", "a", "c"], c2=["c", "d", "a", "f"])])
df.select(sf.array_except(df.c1, df.c2)).show()
```

```Output
+--------------------+
|array_except(c1, c2)|
+--------------------+
|                 [b]|
+--------------------+
```

**Example 2**: Except with no common elements

```python
from pyspark.sql import Row, functions as sf
df = spark.createDataFrame([Row(c1=["b", "a", "c"], c2=["d", "e", "f"])])
df.select(sf.sort_array(sf.array_except(df.c1, df.c2))).show()
```

```Output
+--------------------------------------+
|sort_array(array_except(c1, c2), true)|
+--------------------------------------+
|                             [a, b, c]|
+--------------------------------------+
```

**Example 3**: Except with all common elements

```python
from pyspark.sql import Row, functions as sf
df = spark.createDataFrame([Row(c1=["a", "b", "c"], c2=["a", "b", "c"])])
df.select(sf.array_except(df.c1, df.c2)).show()
```

```Output
+--------------------+
|array_except(c1, c2)|
+--------------------+
|                  []|
+--------------------+
```

**Example 4**: Except with null values

```python
from pyspark.sql import Row, functions as sf
df = spark.createDataFrame([Row(c1=["a", "b", None], c2=["a", None, "c"])])
df.select(sf.array_except(df.c1, df.c2)).show()
```

```Output
+--------------------+
|array_except(c1, c2)|
+--------------------+
|                 [b]|
+--------------------+
```

**Example 5**: Except with empty arrays

```python
from pyspark.sql import Row, functions as sf
from pyspark.sql.types import ArrayType, StringType, StructField, StructType
data = [Row(c1=[], c2=["a", "b", "c"])]
schema = StructType([
  StructField("c1", ArrayType(StringType()), True),
  StructField("c2", ArrayType(StringType()), True)
])
df = spark.createDataFrame(data, schema)
df.select(sf.array_except(df.c1, df.c2)).show()
```

```Output
+--------------------+
|array_except(c1, c2)|
+--------------------+
|                  []|
+--------------------+
```