[FLINK-36931][cdc] FlinkCDC YAML supports batch mode #3812
base: master
Conversation
Code implementation
Topology graph: Source -> PreTransform -> PostTransform -> SchemaBatchOperator -> PartitionBy(Batch) -> BatchSink
During testing, a new bug was discovered and has been fixed; this PR relies on that fix: #3826
Thanks @aiwenmo for this contribution, left some comments.
An end-to-end test is also welcome.
I think an e2e test that runs in batch mode with the transform module is necessary to verify that the whole pipeline is runnable.
Hi. I'm in the process of coding and testing.
Premise
MySQL CDC supports snapshot mode
The MySQL connector in Flink CDC (MySqlSource) supports StartupMode.SNAPSHOT, reports Boundedness.BOUNDED in that mode, and can therefore run in RuntimeExecutionMode.BATCH.
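A minimal sketch of this premise, assuming a Flink CDC version that ships StartupOptions.snapshot() (hostname, credentials, and table names below are placeholders):

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.cdc.connectors.mysql.source.MySqlSource;
import org.apache.flink.cdc.connectors.mysql.table.StartupOptions;
import org.apache.flink.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SnapshotBatchJob {
    public static void main(String[] args) throws Exception {
        // With StartupOptions.snapshot() the source reads only the snapshot phase,
        // so it is bounded and may run under RuntimeExecutionMode.BATCH.
        MySqlSource<String> source =
                MySqlSource.<String>builder()
                        .hostname("localhost")          // placeholder
                        .port(3306)
                        .databaseList("app_db")         // placeholder
                        .tableList("app_db.orders")     // placeholder
                        .username("flink")              // placeholder
                        .password("secret")             // placeholder
                        .startupOptions(StartupOptions.snapshot())
                        .deserializer(new JsonDebeziumDeserializationSchema())
                        .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "MySQL Snapshot Source")
                .print();
        env.execute("Full snapshot synchronization in batch mode");
    }
}
```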
Streaming vs. Batch
Streaming mode suits jobs such as: jobs with strict real-time requirements; stateless jobs with many shuffle stages in non-real-time scenarios; jobs that need continuous, stable data processing; and jobs with small state, simple topologies, and low fault-tolerance costs.
Batch mode suits jobs such as: jobs with many stateful operators in non-real-time scenarios; jobs that demand high resource utilization; and jobs with large state, complex topologies, and high fault-tolerance costs.
Expectation
Full snapshot synchronization
The Flink CDC YAML job reads only the full snapshot data of the database and writes it to the target database in streaming or batch mode. It is mainly used for full data catch-up.
Currently, a Flink CDC YAML job with the SNAPSHOT startup strategy runs correctly in streaming mode, but not in batch mode.
Full-incremental offline
On its first run, the Flink CDC YAML job collects the full snapshot data plus the incremental log data from the final offset of the full-incremental snapshot algorithm up to the current EndingOffset; each subsequent run collects only from the previous EndingOffset to the current EndingOffset. For example, run 1 reads the snapshot plus the binlog up to O1, and run 2 reads only the binlog range (O1, O2].
The job runs in batch mode. Users can schedule it periodically (for example hourly or daily), tolerate a bounded data delay, and still reach eventual consistency. Since each scheduled incremental run collects only the logs between the previous EndingOffset and the current EndingOffset, repeated full re-collection of the data is avoided.
Test
Full snapshot synchronization in Batch mode
Solution
Use StartupMode.SNAPSHOT + Streaming for full snapshot synchronization
No source-code changes are needed. For MySQL CDC, once StartupMode.SNAPSHOT is specified, a full-database snapshot synchronization job can already run in streaming mode. Although not optimal, this capability is available today.
Extend FlinkPipelineComposer to batch mode to support full synchronization in batch mode
Topology graph: Source -> PreTransform -> PostTransform -> Router -> PartitionBy -> Sink
There are no change events in batch mode, so schema evolution does not need to be considered. In addition, automatic table creation can be completed before the job starts.
Transform field derivation can happen before the job starts instead of at runtime; other operations, such as router derivation, can likewise be moved to before job start.
Workload: implement a batch construction strategy in FlinkPipelineComposer. The router needs to become an independent operator, and the sink needs to be extended or adapted to work without a coordinator (ideally with true batch writing), as in the sketch below.
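To make the workload concrete, here is a rough, hypothetical sketch of what a batch construction path could look like; every translate* helper is a placeholder, not an existing API:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.cdc.common.event.Event;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/** Hypothetical shape of a batch construction strategy; all translate* helpers are placeholders. */
public class BatchComposerSketch {

    public void composeBatch(StreamExecutionEnvironment env, DataStream<Event> source) {
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);
        DataStream<Event> stream = translatePreTransform(source);
        stream = translatePostTransform(stream);
        // Schema evolution is out of scope in batch mode: table creation and the
        // transform/router derivation can all be resolved before the job starts.
        stream = translateBatchSchema(stream);
        stream = translateBatchPartition(stream); // hash by table ID + primary key
        translateBatchSink(stream); // coordinator-free sink
    }

    private DataStream<Event> translatePreTransform(DataStream<Event> in) { return in; }

    private DataStream<Event> translatePostTransform(DataStream<Event> in) { return in; }

    private DataStream<Event> translateBatchSchema(DataStream<Event> in) { return in; }

    private DataStream<Event> translateBatchPartition(DataStream<Event> in) { return in; }

    private void translateBatchSink(DataStream<Event> in) {
        // Placeholder: the real translation would attach a batch-capable sink here.
    }
}
```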
Extend StartupMode to let users specify an offset range, supporting incremental offline synchronization
Allow users to specify the binlog offset range to collect; the user's own platform then records the EndingOffset of each execution and handles the periodic scheduling.
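For illustration: the MySQL connector already supports starting from a recorded offset via StartupOptions.specificOffset; the ending bound proposed here does not exist yet, so it appears only as a comment (binlog file names and positions are examples):

```java
import org.apache.flink.cdc.connectors.mysql.table.StartupOptions;

public class OffsetRangeSketch {
    public static void main(String[] args) {
        // Existing API: start reading the binlog from the offset the platform
        // recorded as EndingOffset of the previous run (example values).
        StartupOptions fromLastRun =
                StartupOptions.specificOffset("mysql-bin.000003", 4096L);

        // Proposed (hypothetical) extension: also bound the read with an EndingOffset,
        // so a periodically scheduled batch run collects exactly (lastEnd, currentEnd].
        // StopOptions toCurrentEnd = StopOptions.specificOffset("mysql-bin.000004", 1024L);
        System.out.println(fromLastRun);
    }
}
```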
Discussion
1. Is batch-mode support worth implementing, given that the benefits of batch may be small or its performance may not beat streaming? Specifically, which batch optimizations can actually be exploited?
2. Should the full-incremental offline approach be implemented (so users can periodically schedule incremental log synchronization)?
Code implementation
Topology graph: Source -> PreBatchTransform -> PostTransform -> SchemaBatchOperator -> PartitionBy(Batch) -> BatchSink
Note: the data flow contains only CreateTableEvent and DataChangeEvent (inserts).
Implementation ideas
1. The source first sends all CreateTableEvents, then sends the snapshot data.
2. PreTransform no longer needs to cache state for failure recovery, and PostTransform is otherwise unchanged.
3. When SchemaBatchOperator receives a CreateTableEvent, it only stores the event in its cache and emits nothing.
4. When SchemaBatchOperator receives the first DataChangeEvent of a table, it derives the widest downstream table schema based on the router rules, executes the table creation statement in the external data source, and then sends the wide-table schema to BatchPrePartition (see the sketch after this list).
5. BatchPrePartition broadcasts the CreateTableEvent to PostPartition, and partitions and distributes each DataChangeEvent to PostPartition based on table ID and primary-key information.
6. PostPartition forwards the CreateTableEvent and DataChangeEvents to BatchSink, which performs batch writing.
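A condensed, illustrative sketch of steps 3 and 4 (not the PR's actual code; the widest-schema derivation and external table creation are reduced to placeholders):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.flink.cdc.common.event.CreateTableEvent;
import org.apache.flink.cdc.common.event.DataChangeEvent;
import org.apache.flink.cdc.common.event.Event;
import org.apache.flink.cdc.common.event.TableId;
import org.apache.flink.cdc.common.schema.Schema;
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

/** Illustrative sketch only, not the PR's implementation. */
public class SchemaBatchOperatorSketch extends AbstractStreamOperator<Event>
        implements OneInputStreamOperator<Event, Event> {

    private final Map<TableId, Schema> cachedSchemas = new HashMap<>();
    private final Set<TableId> createdTables = new HashSet<>();

    @Override
    public void processElement(StreamRecord<Event> record) throws Exception {
        Event event = record.getValue();
        if (event instanceof CreateTableEvent) {
            // Step 3: cache only; emit nothing downstream yet.
            CreateTableEvent create = (CreateTableEvent) event;
            cachedSchemas.put(create.tableId(), create.getSchema());
        } else if (event instanceof DataChangeEvent) {
            DataChangeEvent change = (DataChangeEvent) event;
            TableId tableId = change.tableId();
            if (createdTables.add(tableId)) {
                // Step 4: on the first DataChangeEvent for a table, derive the widest
                // downstream schema, create the physical table, then emit the
                // CreateTableEvent before any data.
                Schema widest = deriveWidestSchema(tableId);
                createTableInExternalSystem(tableId, widest);
                output.collect(new StreamRecord<>(new CreateTableEvent(tableId, widest)));
            }
            output.collect(record);
        }
    }

    private Schema deriveWidestSchema(TableId tableId) {
        // Placeholder: the real operator derives this from the router rules.
        return cachedSchemas.get(tableId);
    }

    private void createTableInExternalSystem(TableId tableId, Schema schema) {
        // Placeholder: the real operator would invoke the sink's metadata applier.
    }
}
```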
Implementation effect
Computing node 1: Source -> PreBatchTransform -> PostTransform -> SchemaBatchOperator -> BatchPrePartition
Computing node 2: PostPartition -> BatchSink
Batch mode: Computing node 2 starts computing only after computing node 1 is completely finished.
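This staging is standard Flink batch behavior: under RuntimeExecutionMode.BATCH, keyed exchanges such as the one between BatchPrePartition and PostPartition default to blocking shuffles, so a downstream stage is scheduled only after the upstream stage has finished. A minimal sketch making that explicit:

```java
import org.apache.flink.api.common.BatchShuffleMode;
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.ExecutionOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BatchStagingSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set(ExecutionOptions.RUNTIME_MODE, RuntimeExecutionMode.BATCH);
        // ALL_EXCHANGES_BLOCKING is already the default in batch mode; setting it
        // explicitly documents why computing node 2 waits for computing node 1.
        conf.set(ExecutionOptions.BATCH_SHUFFLE_MODE, BatchShuffleMode.ALL_EXCHANGES_BLOCKING);
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build Source -> ... -> BatchPrePartition | PostPartition -> BatchSink here
    }
}
```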