
How to create a Java app that uses Azure Cosmos DB for NoSQL and the change feed processor

APPLIES TO: NoSQL

Azure Cosmos DB is a fully managed NoSQL database service provided by Azure. It enables you to build globally distributed, highly scalable applications with ease. This how-to guide walks you through creating a Java application that uses an Azure Cosmos DB for NoSQL database and implements the change feed processor for real-time data processing. The Java application communicates with Azure Cosmos DB for NoSQL by using Azure Cosmos DB Java SDK v4.

Important

This tutorial is for Azure Cosmos DB Java SDK v4 only. For more information, see the Azure Cosmos DB Java SDK v4 release notes, the Maven repository, Change feed processor in Azure Cosmos DB, and the Azure Cosmos DB Java SDK v4 troubleshooting guide. If you're currently using a version older than v4, see the Migrate to Azure Cosmos DB Java SDK v4 guide for help upgrading to v4.

Prerequisites

Background

The Azure Cosmos DB change feed provides an event-driven interface to trigger actions in response to document insertion, and it has many uses.

The work of managing change feed events is largely taken care of by the change feed processor library that's built into the SDK. This library is powerful enough to distribute change feed events among multiple workers, if desired. All you have to do is provide the change feed library with a callback.

This simple Java application sample demonstrates real-time data processing with Azure Cosmos DB and the change feed processor. The application inserts sample documents into a "feed container" to simulate a data stream. The change feed processor, bound to the feed container, processes the incoming changes and logs the document content. The processor automatically manages leases for parallel processing.

Source code

You can clone the SDK samples repository and find this sample in SampleChangeFeedProcessor.java:

git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples.git
cd azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/

Walkthrough

  1. Configure ChangeFeedProcessorOptions in a Java application using Azure Cosmos DB and Azure Cosmos DB Java SDK v4. ChangeFeedProcessorOptions provides essential settings that control the behavior of the change feed processor during data processing. A few additional option settings are sketched after the listing below.
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
package com.azure.cosmos.examples.changefeed;

import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosAsyncDatabase;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosException;
import com.azure.cosmos.examples.common.AccountSettings;
import com.azure.cosmos.examples.common.CustomPOJO2;
import com.azure.cosmos.implementation.Utils;
import com.azure.cosmos.models.ChangeFeedProcessorOptions;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.CosmosContainerRequestOptions;
import com.azure.cosmos.models.CosmosContainerResponse;
import com.azure.cosmos.models.CosmosDatabaseResponse;
import com.azure.cosmos.models.ThroughputProperties;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.scheduler.Schedulers;

import java.time.Duration;
import java.util.List;
import java.util.UUID;
import java.util.function.Consumer;

/**
 * Sample for Change Feed Processor.
 * This sample models an application where documents are being inserted into one container (the "feed container"),
 * and meanwhile another worker thread or worker application is pulling inserted documents from the feed container's Change Feed
 * and operating on them in some way. For one or more workers to process the Change Feed of a container, the workers must first contact the server
 * and "lease" access to monitor one or more partitions of the feed container. The Change Feed Processor Library
 * handles leasing automatically for you, however you must create a separate "lease container" where the Change Feed
 * Processor Library can store and track leases container partitions.
 */
public class SampleChangeFeedProcessor {

    public static int WAIT_FOR_WORK = 60000;
    public static final String DATABASE_NAME = "db_" + UUID.randomUUID();
    public static final String COLLECTION_NAME = "coll_" + UUID.randomUUID();
    private static final ObjectMapper OBJECT_MAPPER = Utils.getSimpleObjectMapper();
    protected static Logger logger = LoggerFactory.getLogger(SampleChangeFeedProcessor.class);

    private static boolean isWorkCompleted = false;

    private static ChangeFeedProcessorOptions options;



    public static void main(String[] args) {
        logger.info("Begin Sample");
        try {

            // <ChangeFeedProcessorOptions>
            options = new ChangeFeedProcessorOptions();
            options.setStartFromBeginning(false);
            options.setLeasePrefix("myChangeFeedDeploymentUnit");
            // </ChangeFeedProcessorOptions>

            //Summary of the next four commands:
            //-Create an asynchronous Azure Cosmos DB client and database so that we can issue async requests to the DB
            //-Create a "feed container" and a "lease container" in the DB
            logger.info("Create CosmosClient");
            CosmosAsyncClient client = getCosmosClient();

            logger.info("Create sample's database: " + DATABASE_NAME);
            CosmosAsyncDatabase cosmosDatabase = createNewDatabase(client, DATABASE_NAME);

            logger.info("Create container for documents: " + COLLECTION_NAME);
            CosmosAsyncContainer feedContainer = createNewCollection(client, DATABASE_NAME, COLLECTION_NAME);

            logger.info("Create container for lease: " + COLLECTION_NAME + "-leases");
            CosmosAsyncContainer leaseContainer = createNewLeaseCollection(client, DATABASE_NAME, COLLECTION_NAME + "-leases");

            //Model of a worker thread or application which leases access to monitor one or more feed container
            //partitions via the Change Feed. In a real-world application you might deploy this code in an Azure function.
            //The next line causes the worker to create and start an instance of the Change Feed Processor. See the implementation of getChangeFeedProcessor() for guidance
            //on creating a handler for Change Feed events. In this stream, we also trigger the insertion of 10 documents on a separate
            //thread.
            // <StartChangeFeedProcessor>
            logger.info("Start Change Feed Processor on worker (handles changes asynchronously)");
            ChangeFeedProcessor changeFeedProcessorInstance = new ChangeFeedProcessorBuilder()
                .hostName("SampleHost_1")
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .options(options) // apply the ChangeFeedProcessorOptions configured above
                .handleChanges(handleChanges())
                .buildChangeFeedProcessor();
            changeFeedProcessorInstance.start()
                                       .subscribeOn(Schedulers.boundedElastic())
                                       .subscribe();
            // </StartChangeFeedProcessor>

            //These two lines model an application which is inserting ten documents into the feed container
            logger.info("Start application that inserts documents into feed container");
            createNewDocumentsCustomPOJO(feedContainer, 10, Duration.ofSeconds(3));

            //This loop models the Worker main loop, which spins while its Change Feed Processor instance asynchronously
            //handles incoming Change Feed events from the feed container. In this sample, isWorkCompleted is set to
            //true inside handleChanges() (defined below) once the inserted documents have been received, so this
            //polling loop simply waits for that flag.
            //But conceptually the worker is part of a different thread or application than the one which is inserting
            //into the feed container; so this code illustrates the worker waiting and listening for changes to the feed container
            long remainingWork = WAIT_FOR_WORK;
            while (!isWorkCompleted && remainingWork > 0) {
                Thread.sleep(100);
                remainingWork -= 100;
            }

            //When all documents have been processed, clean up
            if (isWorkCompleted) {
                changeFeedProcessorInstance.stop().subscribe();
            } else {
                throw new RuntimeException("The change feed processor initialization and automatic create document feeding process did not complete in the expected time");
            }

            logger.info("Delete sample's database: " + DATABASE_NAME);
            //deleteDatabase(cosmosDatabase);

            Thread.sleep(500);
        } catch (Exception e) {
            e.printStackTrace();
        }
        logger.info("End Sample");
    }

    // <Delegate>
    private static Consumer<List<JsonNode>> handleChanges() {
        return (List<JsonNode> docs) -> {
            logger.info("Start handleChanges()");

            for (JsonNode document : docs) {
                try {
                    //Change Feed hands the document to you in the form of a JsonNode
                    //As a developer you have two options for handling the JsonNode document provided to you by Change Feed
                    //One option is to operate on the document in the form of a JsonNode, as shown below. This is great
                    //especially if you do not have a single uniform data model for all documents.
                    logger.info("Document received: " + OBJECT_MAPPER.writerWithDefaultPrettyPrinter()
                            .writeValueAsString(document));

                    //You can also transform the JsonNode to a POJO having the same structure as the JsonNode,
                    //as shown below. Then you can operate on the POJO.
                    CustomPOJO2 pojo_doc = OBJECT_MAPPER.treeToValue(document, CustomPOJO2.class);
                    logger.info("id: " + pojo_doc.getId());

                } catch (JsonProcessingException e) {
                    e.printStackTrace();
                }
            }
            isWorkCompleted = true;
            logger.info("End handleChanges()");

        };
    }
    // </Delegate>

    public static CosmosAsyncClient getCosmosClient() {

        return new CosmosClientBuilder()
                .endpoint(AccountSettings.HOST)
                .key(AccountSettings.MASTER_KEY)
                .contentResponseOnWriteEnabled(true)
                .consistencyLevel(ConsistencyLevel.SESSION)
                .buildAsyncClient();
    }

    public static CosmosAsyncDatabase createNewDatabase(CosmosAsyncClient client, String databaseName) {
        CosmosDatabaseResponse databaseResponse = client.createDatabaseIfNotExists(databaseName).block();
        return client.getDatabase(databaseResponse.getProperties().getId());
    }

    public static void deleteDatabase(CosmosAsyncDatabase cosmosDatabase) {
        cosmosDatabase.delete().block();
    }

    public static CosmosAsyncContainer createNewCollection(CosmosAsyncClient client, String databaseName, String collectionName) {
        CosmosAsyncDatabase databaseLink = client.getDatabase(databaseName);
        CosmosAsyncContainer collectionLink = databaseLink.getContainer(collectionName);
        CosmosContainerResponse containerResponse = null;

        try {
            containerResponse = collectionLink.read().block();

            if (containerResponse != null) {
                throw new IllegalArgumentException(String.format("Collection %s already exists in database %s.", collectionName, databaseName));
            }
        } catch (RuntimeException ex) {
            if (ex instanceof CosmosException) {
                CosmosException cosmosException = (CosmosException) ex;

                if (cosmosException.getStatusCode() != 404) {
                    throw ex;
                }
            } else {
                throw ex;
            }
        }

        CosmosContainerProperties containerSettings = new CosmosContainerProperties(collectionName, "/pk");
        CosmosContainerRequestOptions requestOptions = new CosmosContainerRequestOptions();

        ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(10000);

        containerResponse = databaseLink.createContainer(containerSettings, throughputProperties, requestOptions).block();

        if (containerResponse == null) {
            throw new RuntimeException(String.format("Failed to create collection %s in database %s.", collectionName, databaseName));
        }

        return databaseLink.getContainer(containerResponse.getProperties().getId());
    }

    public static CosmosAsyncContainer createNewLeaseCollection(CosmosAsyncClient client, String databaseName, String leaseCollectionName) {
        CosmosAsyncDatabase databaseLink = client.getDatabase(databaseName);
        CosmosAsyncContainer leaseCollectionLink = databaseLink.getContainer(leaseCollectionName);
        CosmosContainerResponse leaseContainerResponse = null;

        try {
            leaseContainerResponse = leaseCollectionLink.read().block();

            if (leaseContainerResponse != null) {
                leaseCollectionLink.delete().block();

                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ex) {
                    ex.printStackTrace();
                }
            }
        } catch (RuntimeException ex) {
            if (ex instanceof CosmosException) {
                CosmosException cosmosException = (CosmosException) ex;

                if (cosmosException.getStatusCode() != 404) {
                    throw ex;
                }
            } else {
                throw ex;
            }
        }

        CosmosContainerProperties containerSettings = new CosmosContainerProperties(leaseCollectionName, "/id");
        CosmosContainerRequestOptions requestOptions = new CosmosContainerRequestOptions();

        ThroughputProperties throughputProperties = ThroughputProperties.createManualThroughput(400);

        leaseContainerResponse = databaseLink.createContainer(containerSettings, throughputProperties, requestOptions).block();

        if (leaseContainerResponse == null) {
            throw new RuntimeException(String.format("Failed to create collection %s in database %s.", leaseCollectionName, databaseName));
        }

        return databaseLink.getContainer(leaseContainerResponse.getProperties().getId());
    }

    public static void createNewDocumentsCustomPOJO(CosmosAsyncContainer containerClient, int count, Duration delay) {
        String suffix = UUID.randomUUID().toString();
        for (int i = 0; i < count; i++) {
            CustomPOJO2 document = new CustomPOJO2();
            document.setId(String.format("0%d-%s", i, suffix));
            document.setPk(document.getId()); // This is a very simple example, so we'll just have a partition key (/pk) field that we set equal to id

            containerClient.createItem(document).subscribe(doc -> {
                logger.info("Document write: " + doc);
            });

            long remainingWork = delay.toMillis();
            try {
                while (remainingWork > 0) {
                    Thread.sleep(100);
                    remainingWork -= 100;
                }
            } catch (InterruptedException iex) {
                // exception caught
                break;
            }
        }
    }
}
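
The snippet in the <ChangeFeedProcessorOptions> region above sets only the starting position and the lease prefix. ChangeFeedProcessorOptions exposes several other tuning settings; the following is a minimal, illustrative sketch of a few commonly adjusted ones, reusing the imports already present in the sample. The values are examples, not recommendations.

// Illustrative only: additional ChangeFeedProcessorOptions settings beyond the two used in the sample.
ChangeFeedProcessorOptions tunedOptions = new ChangeFeedProcessorOptions();
tunedOptions.setStartFromBeginning(false);                  // process only changes that arrive after startup
tunedOptions.setLeasePrefix("myChangeFeedDeploymentUnit");  // isolate this deployment unit's leases
tunedOptions.setFeedPollDelay(Duration.ofMillis(500));      // delay between polls when no changes are found
tunedOptions.setMaxItemCount(100);                          // maximum number of items returned per change feed request
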
  2. Initialize ChangeFeedProcessor with the relevant configuration, including the host name, the feed container, the lease container, and the data-handling logic. The start() method begins data processing, enabling concurrent, real-time processing of incoming data changes from the feed container. A sketch showing how a second worker could share the same lease container follows the snippet below.
// <StartChangeFeedProcessor>
logger.info("Start Change Feed Processor on worker (handles changes asynchronously)");
ChangeFeedProcessor changeFeedProcessorInstance = new ChangeFeedProcessorBuilder()
    .hostName("SampleHost_1")
    .feedContainer(feedContainer)
    .leaseContainer(leaseContainer)
    .options(options)
    .handleChanges(handleChanges())
    .buildChangeFeedProcessor();
changeFeedProcessorInstance.start()
                           .subscribeOn(Schedulers.boundedElastic())
                           .subscribe();
// </StartChangeFeedProcessor>
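
Because lease ownership is tracked in the lease container, more than one worker can process the same feed container's change feed. The following sketch assumes the feedContainer, leaseContainer, options, and handleChanges() from the sample above are in scope, and shows how a hypothetical second worker could join under a different hostName; the processor then balances leases (partitions) across the hosts.

// Hypothetical second worker; "SampleHost_2" is an arbitrary name distinguishing it from SampleHost_1.
ChangeFeedProcessor secondWorker = new ChangeFeedProcessorBuilder()
    .hostName("SampleHost_2")
    .feedContainer(feedContainer)
    .leaseContainer(leaseContainer)
    .options(options)
    .handleChanges(handleChanges())
    .buildChangeFeedProcessor();

// Start asynchronously, exactly as the sample does for the first worker.
secondWorker.start()
            .subscribeOn(Schedulers.boundedElastic())
            .subscribe();

The two instances can run in the same process or in separate processes; the shared lease container is what coordinates which host owns which partition.
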
  3. Specify the delegate that handles incoming data changes by using the handleChanges() method. The method processes the JsonNode documents it receives from the change feed. As a developer, you have two options for handling the JsonNode document that the change feed provides to you. One option is to operate on the document as a JsonNode, which is useful especially if you don't have a single uniform data model for all documents. The second option is to transform the JsonNode to a POJO that has the same structure as the JsonNode, and then operate on the POJO. A sketch of a compatible POJO follows the snippet below.
// <Delegate>
private static Consumer<List<JsonNode>> handleChanges() {
    return (List<JsonNode> docs) -> {
        logger.info("Start handleChanges()");

        for (JsonNode document : docs) {
            try {
                //Change Feed hands the document to you in the form of a JsonNode
                //As a developer you have two options for handling the JsonNode document provided to you by Change Feed
                //One option is to operate on the document in the form of a JsonNode, as shown below. This is great
                //especially if you do not have a single uniform data model for all documents.
                logger.info("Document received: " + OBJECT_MAPPER.writerWithDefaultPrettyPrinter()
                        .writeValueAsString(document));

                //You can also transform the JsonNode to a POJO having the same structure as the JsonNode,
                //as shown below. Then you can operate on the POJO.
                CustomPOJO2 pojo_doc = OBJECT_MAPPER.treeToValue(document, CustomPOJO2.class);
                logger.info("id: " + pojo_doc.getId());

            } catch (JsonProcessingException e) {
                e.printStackTrace();
            }
        }
        isWorkCompleted = true;
        logger.info("End handleChanges()");
    };
}
// </Delegate>
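
CustomPOJO2 comes from the sample repository's com.azure.cosmos.examples.common package and isn't reproduced in this article. A minimal sketch of a compatible POJO, assuming only the id and pk fields that this sample reads and writes (the real class may define more), looks like this:

// Minimal sketch of a POJO compatible with the sample's usage; the actual CustomPOJO2 may differ.
public class CustomPOJO2 {
    private String id;
    private String pk; // matches the feed container's partition key path /pk

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getPk() { return pk; }
    public void setPk(String pk) { this.pk = pk; }
}
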
  4. Build and run the Java application; a typical Maven command line is shown below. The application starts the change feed processor, inserts sample documents into the feed container, and processes the incoming changes.
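
Assuming a standard Maven build of the cloned sample repository, the build-and-run step might look like the following. The credential variable names (ACCOUNT_HOST, ACCOUNT_KEY) are what the samples' AccountSettings class is expected to read; verify them against your copy of the repository.

# Run from the root of the cloned azure-cosmos-java-sql-api-samples repository.
# ACCOUNT_HOST / ACCOUNT_KEY are assumed names; confirm them in AccountSettings.java.
export ACCOUNT_HOST="https://<your-account>.documents.azure.com:443/"
export ACCOUNT_KEY="<your-account-key>"

mvn clean package
mvn exec:java -Dexec.mainClass="com.azure.cosmos.examples.changefeed.SampleChangeFeedProcessor"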

Conclusion

In this guide, you learned how to create a Java application with Azure Cosmos DB Java SDK v4 that uses an Azure Cosmos DB for NoSQL database and uses the change feed processor for real-time data processing. You can extend this application to handle more complex use cases and build robust, scalable, globally distributed applications with Azure Cosmos DB.

Additional resources

Next steps

You can now proceed to learn more about the change feed estimator in the following article: