The Wayback Machine - https://web.archive.org/web/20201107152915/https://github.com/apache/flink/pull/12908

[FLINK-18449][table sql/api]Kafka topic discovery & partition discove… #12908

Merged
merged 1 commit into from Aug 20, 2020

Conversation

@fsk119
Contributor

@fsk119 fsk119 commented Jul 15, 2020

[FLINK-18449][table sql/api] Kafka topic discovery & partition discovery dynamically in table api

What is the purpose of the change

Enable Kafka connector topic discovery & partition discovery in the Table API.

Brief change log

  • Expose the options 'topic-pattern' and 'scan.topic-partition-discovery.interval'
  • Add validation that rejects setting 'topic-pattern' and 'topic' together on a source, and setting 'topic-pattern' on a sink
  • Read the values from the table options and use them to build the Kafka consumer
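The validation in the second bullet can be sketched as follows. This is a stand-alone sketch, not the Flink factory code: the helper name is illustrative, plain Optionals stand in for reading a ReadableConfig, and IllegalArgumentException stands in for Flink's ValidationException.

```java
import java.util.Optional;

class TopicOptionValidation {

    // Stand-in for the factory-side check: a source must set exactly one of
    // 'topic' and 'topic-pattern'.
    static void validateSourceTopicOptions(Optional<String> topic, Optional<String> topicPattern) {
        if (topic.isPresent() && topicPattern.isPresent()) {
            throw new IllegalArgumentException(
                    "Option 'topic' and 'topic-pattern' shouldn't be set together.");
        }
        if (!topic.isPresent() && !topicPattern.isPresent()) {
            throw new IllegalArgumentException(
                    "Either 'topic' or 'topic-pattern' is required for a source.");
        }
    }
}
```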

Verifying this change

This change added tests and can be verified as follows:

  • Added integration tests for new features
  • Added tests validating that setting 'topic' and 'topic-pattern' together fails, and that setting 'topic-pattern' on a sink fails

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
@flinkbot

@flinkbot flinkbot commented Jul 15, 2020

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 1a4e5ee (Wed Jul 15 13:11:58 UTC 2020)

no warnings

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • 1. The [description] looks good.
  • 2. There is [consensus] that the contribution should go into Flink.
  • 3. Needs [attention] from.
  • 4. The change fits into the overall [architecture].
  • 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier
@flinkbot

@flinkbot flinkbot commented Jul 15, 2020

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build
@fsk119
Contributor Author

@fsk119 fsk119 commented Jul 16, 2020

Member

@wuchong wuchong left a comment

Thanks for the contribution @fsk119 , I left some comments.

Pattern pattern,
Properties properties,
DeserializationSchema<RowData> deserializationSchema) {
Comment on lines 91 to 93


@wuchong

wuchong Jul 27, 2020
Member

Indent.


@Override
protected FlinkKafkaConsumerBase<RowData> createKafkaConsumer(
Pattern pattern,


@wuchong

wuchong Jul 27, 2020
Member

topicPattern

Pattern topicPattern,
Properties properties,
DeserializationSchema<RowData> deserializationSchema) {
Comment on lines 91 to 93


@wuchong

wuchong Jul 27, 2020
Member

Indent.

DataType outputDataType,
@Nullable List<String> topics,
@Nullable Pattern topicPattern,
Properties properties,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
StartupMode startupMode,
Map<KafkaTopicPartition, Long> specificStartupOffsets,
long startupTimestampMillis) {
Comment on lines 104 to 111


@wuchong

wuchong Jul 27, 2020
Member

Indent.

Pattern topicPattern,
Properties properties,
DeserializationSchema<RowData> deserializationSchema);
Comment on lines 200 to 202


@wuchong

wuchong Jul 27, 2020
Member

Indent.

docs/dev/table/connectors/kafka.md (outdated; resolved)
<td>optional for source(use 'topic' instead if not set)</td>
<td style="word-wrap: break-word;">(none)</td>
<td>String</td>
<td>Topic pattern from which the table is read. It will use input value to build regex expression to discover matched topics.</td>


@wuchong

wuchong Jul 27, 2020
Member

The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note, only one of "topic-pattern" and "topic" can be specified for sources.
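That subscription behaviour amounts to regex matching against the topic names visible in the cluster, both at startup and on each discovery pass when partition discovery is enabled. A minimal self-contained sketch (the method name is illustrative, not Flink's):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

class TopicPatternMatching {

    // Filters the cluster's topic names down to those fully matching the
    // 'topic-pattern' regex, the way the consumer decides what to subscribe to.
    static List<String> matchTopics(Pattern topicPattern, List<String> allTopics) {
        return allTopics.stream()
                .filter(topic -> topicPattern.matcher(topic).matches())
                .collect(Collectors.toList());
    }
}
```

For example, the pattern 'orders-.*' would match 'orders-eu' and 'orders-us' but not 'payments'.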

Three more review threads on docs/dev/table/connectors/kafka.md (outdated; resolved)
@fsk119 fsk119 requested a review from wuchong Jul 30, 2020
options.add(SCAN_STARTUP_TIMESTAMP_MILLIS);
options.add(SCAN_TOPIC_PARTITION_DISCOVERY);


@wuchong

wuchong Aug 4, 2020
Member

duplicate

properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS,
String.valueOf(tableOptions
.getOptional(SCAN_TOPIC_PARTITION_DISCOVERY)
.map(val -> val.toMillis())


@wuchong

wuchong Aug 4, 2020
Member

Suggested change
.map(val -> val.toMillis())
.map(Duration::toMillis)
));
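The code under review converts the optional 'scan.topic-partition-discovery.interval' value to the millisecond string the consumer property expects. A stand-alone sketch using the method reference, as suggested; defaulting to Long.MIN_VALUE as the "discovery disabled" sentinel is an assumption in this sketch:

```java
import java.time.Duration;
import java.util.Optional;

class DiscoveryIntervalOption {

    // Maps the optional discovery interval to the string value set on
    // 'flink.partition-discovery.interval-millis'. The method reference
    // replaces the lambda form flagged in the review.
    static String toIntervalMillis(Optional<Duration> discoveryInterval) {
        return String.valueOf(discoveryInterval
                .map(Duration::toMillis)
                .orElse(Long.MIN_VALUE)); // assumed stand-in for "discovery disabled"
    }
}
```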
} else {
throw new ValidationException(String.format(
errorMessageTemp, "topic-list", tableOptions.get(TOPIC)


@wuchong

wuchong Aug 4, 2020
Member

"topic-list" -> "topic"? We don't have "topic-list" option.

@@ -139,10 +152,12 @@ public void testTableSource() {
Thread.currentThread().getContextClassLoader());

// Test scan source equals
KAFKA_SOURCE_PROPERTIES.setProperty("flink.partition-discovery.interval-millis", "1000");


@wuchong

wuchong Aug 4, 2020
Member

Is it still needed? We already set it in the static block.


private static boolean isSingleTopic(ReadableConfig tableOptions) {
// Option 'topic-pattern' is regarded as multi-topics.
return tableOptions.getOptional(TOPIC).isPresent() && tableOptions.get(TOPIC).split(",").length == 1;


@wuchong

wuchong Aug 4, 2020
Member

The community recommends using a List ConfigOption for list values; the framework will handle the parsing. This also changes the separator to ';', but that is more aligned with the other list options. You can declare a List ConfigOption like this:

	public static final ConfigOption<List<String>> TOPIC = ConfigOptions
			.key("topic")
			.stringType()
			.asList()
			.noDefaultValue()
			.withDescription("...");

Then you can write return tableOptions.getOptional(TOPIC).map(t -> t.size() == 1).orElse(false); here.

Sorry for the late reminder.
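With such a List option the framework hands the factory an already-parsed list, so the single-topic check becomes a size test. A stand-alone sketch using a plain Optional in place of Flink's ReadableConfig:

```java
import java.util.List;
import java.util.Optional;

class SingleTopicCheck {

    // A 'topic-pattern' source never reaches this check; it is always
    // treated as multi-topic. With a parsed List<String> there is no
    // manual splitting on ',' (or ';') to maintain.
    static boolean isSingleTopic(Optional<List<String>> topics) {
        return topics.map(list -> list.size() == 1).orElse(false);
    }
}
```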

@wuchong
Member

@wuchong wuchong commented Aug 4, 2020

Btw, could you add an integration test for this?

…partition discovery for Kafka source in Table API

This closes #12908
@wuchong wuchong force-pushed the fsk119:FLINK-18449 branch from 4b06551 to 2769a8e Aug 19, 2020
Member

@wuchong wuchong left a comment

LGTM.

Will merge once the build passes.

@Nullable List<String> topics,
@Nullable Pattern topicPattern,
Comment on lines +62 to +63


@wuchong

wuchong Aug 19, 2020
Member

Currently, it is very verbose to pass these two parameters around together. An improvement would be to use KafkaTopicsDescriptor, but that can be a separate issue in the future.
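The idea behind that refactoring is to bundle the two mutually exclusive fields into one object so they travel together. A rough stand-alone sketch, loosely modeled on the shape of Flink's KafkaTopicsDescriptor (names and checks here are illustrative, not the actual class):

```java
import java.util.List;
import java.util.regex.Pattern;

class TopicsDescriptor {

    private final List<String> fixedTopics; // null when a pattern is used
    private final Pattern topicPattern;     // null when fixed topics are used

    // Exactly one of the two ways of naming topics must be chosen,
    // which encodes the 'topic' vs 'topic-pattern' validation once.
    TopicsDescriptor(List<String> fixedTopics, Pattern topicPattern) {
        if ((fixedTopics == null) == (topicPattern == null)) {
            throw new IllegalArgumentException(
                    "Exactly one of fixed topics or a topic pattern must be set.");
        }
        this.fixedTopics = fixedTopics;
        this.topicPattern = topicPattern;
    }

    boolean isFixedTopics() {
        return fixedTopics != null;
    }

    boolean isTopicPattern() {
        return topicPattern != null;
    }
}
```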

@wuchong wuchong merged commit b8ee51b into apache:master Aug 20, 2020