Java Code Examples for io.debezium.connector.base.ChangeEventQueue

The following code examples demonstrate how to use io.debezium.connector.base.ChangeEventQueue from Debezium. These examples are extracted from highly rated open source projects; you can use the snippets directly or view their full linked source code. They provide contextual information about how this class is used in real-world code and illustrate good practices for working with io.debezium.connector.base.ChangeEventQueue, along with various implementations that build on it.

    // Creates an error handler that forwards producer failures to the given queue
    // and runs the onThrowable callback on a dedicated single-threaded executor.
    public ErrorHandler(Class<? extends SourceConnector> connectorType, String logicalName, ChangeEventQueue<?> queue, Runnable onThrowable) {
        this.queue = queue;
        this.onThrowable = onThrowable;
        this.executor = Threads.newSingleThreadExecutor(connectorType, logicalName, "error-handler");
        this.producerThrowable = new AtomicReference<>();
    } 


    public void start(Configuration config) {
        if (running.get()) {
            return;
        }

        PostgresConnectorConfig connectorConfig = new PostgresConnectorConfig(config);

        TypeRegistry typeRegistry;
        try (final PostgresConnection connection = new PostgresConnection(connectorConfig.jdbcConfig())) {
            typeRegistry = connection.getTypeRegistry();
        }

        TopicSelector topicSelector = TopicSelector.create(connectorConfig);
        PostgresSchema schema = new PostgresSchema(connectorConfig, typeRegistry, topicSelector);
        this.taskContext = new PostgresTaskContext(connectorConfig, schema, topicSelector);

        SourceInfo sourceInfo = new SourceInfo(connectorConfig.serverName());
        Map<String, Object> existingOffset = context.offsetStorageReader().offset(sourceInfo.partition());
        LoggingContext.PreviousContext previousContext = taskContext.configureLoggingContext(CONTEXT_NAME);
        try {
            try (PostgresConnection connection = taskContext.createConnection()) {
                logger.info(connection.serverInfo().toString());
            }

            if (existingOffset == null) {
                logger.info("No previous offset found");
                if (connectorConfig.snapshotNeverAllowed()) {
                    logger.info("Snapshots are not allowed as per configuration, starting streaming logical changes only");
                    producer = new RecordsStreamProducer(taskContext, sourceInfo);
                } else {
                    createSnapshotProducer(taskContext, sourceInfo, connectorConfig.initialOnlySnapshot());
                }
            } else {
                sourceInfo.load(existingOffset);
                logger.info("Found previous offset {}", sourceInfo);
                if (sourceInfo.isSnapshotInEffect()) {
                    if (connectorConfig.snapshotNeverAllowed()) {
                        String msg = "The connector previously stopped while taking a snapshot, but now the connector is configured "
                                     + "to never allow snapshots. Reconfigure the connector to use snapshots initially or when needed.";
                        throw new ConnectException(msg);
                    } else {
                        logger.info("Found previous incomplete snapshot");
                        createSnapshotProducer(taskContext, sourceInfo, connectorConfig.initialOnlySnapshot());
                    }
                } else if (connectorConfig.alwaysTakeSnapshot()) {
                    logger.info("Taking a new snapshot as per configuration");
                    producer = new RecordsSnapshotProducer(taskContext, sourceInfo, true);
                } else {
                    logger.info(
                            "Previous snapshot has completed successfully, streaming logical changes from last known position");
                    producer = new RecordsStreamProducer(taskContext, sourceInfo);
                }
            }

            changeEventQueue = new ChangeEventQueue.Builder<ChangeEvent>()
                .pollInterval(connectorConfig.getPollInterval())
                .maxBatchSize(connectorConfig.getMaxBatchSize())
                .maxQueueSize(connectorConfig.getMaxQueueSize())
                .loggingContextSupplier(() -> taskContext.configureLoggingContext(CONTEXT_NAME))
                .build();

            producer.start(changeEventQueue::enqueue, changeEventQueue::producerFailure);
            running.compareAndSet(false, true);
        }  catch (SQLException e) {
            throw new ConnectException(e);
        } finally {
            previousContext.restore();
        }
    } 

Javadoc
A queue which serves as handover point between producer threads (e.g. MySQL's binlog reader thread) and the Kafka Connect polling loop.

The queue is configurable in different aspects, e.g. its maximum size and the time to sleep (block) between two subsequent poll calls. See the Builder for the different options. The queue applies back-pressure semantics, i.e. if it holds the maximum number of elements, subsequent calls to #enqueue(Object) will block until elements have been removed from the queue.
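The blocking back-pressure behavior described above can be illustrated with a plain java.util.concurrent.BlockingQueue. This is a hedged analogy, not Debezium's actual implementation: a bounded queue whose put() blocks when full plays the role of ChangeEventQueue#enqueue, and a batched drain plays the role of the poll loop (the class and method names here are illustrative only).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackPressureDemo {

    // Bounded queue: put() blocks once capacity is reached, which is the
    // same back-pressure effect ChangeEventQueue#enqueue applies to producers.
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

    // Consumer side: remove up to maxBatchSize elements in one batch,
    // analogous to the maxBatchSize option on the Builder.
    static List<String> drain(int maxBatchSize) {
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch, maxBatchSize);
        return batch;
    }

    public static void main(String[] args) throws InterruptedException {
        queue.put("event-1");
        queue.put("event-2");
        // A third put() would now block until the consumer drains elements.
        List<String> batch = drain(10);
        System.out.println(batch.size()); // prints 2
    }
}
```

Draining in batches rather than one element at a time is what lets the consumer side amortize the cost of waking up, which is why the real Builder exposes maxBatchSize alongside maxQueueSize.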

If an exception occurs on the producer side, the producer should make that exception known by calling #producerFailure before stopping its operation. Upon the next call to #poll(), that exception will be raised, causing Kafka Connect to stop the connector and mark it as FAILED.

Author: Gunnar Morling

Type parameter: the type of events in this queue. Usually SourceRecord is used, but in cases where additional metadata must be passed from producers to the consumer, a custom type wrapping source records may be used.
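The failure-handover contract above can be sketched with an AtomicReference, which is also how the ErrorHandler snippet earlier stores its producerThrowable. This is a minimal self-contained sketch, not Debezium's implementation; the class name FailureHandover and its method shapes are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

public class FailureHandover {

    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

    // Holds the first failure reported by the producer thread; later
    // failures are ignored, so the root cause is what gets surfaced.
    private final AtomicReference<Throwable> producerThrowable = new AtomicReference<>();

    // Producer side: record the failure before stopping (analogous to #producerFailure).
    public void producerFailure(Throwable t) {
        producerThrowable.compareAndSet(null, t);
    }

    // Producer side: blocks when the queue is full (analogous to #enqueue).
    public void enqueue(String event) throws InterruptedException {
        queue.put(event);
    }

    // Consumer side: rethrow any recorded producer failure on the next poll,
    // which is what makes Kafka Connect mark the connector as FAILED.
    public List<String> poll() {
        Throwable t = producerThrowable.get();
        if (t != null) {
            throw new RuntimeException("Producer failed", t);
        }
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        return batch;
    }
}
```

Checking the throwable at the top of poll() rather than interrupting the consumer keeps the handover lock-free: the producer only publishes the error, and the polling thread raises it on its own schedule.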
