I'm having a problem trying to run an ad hoc snapshot with the Debezium Postgres source connector. I have already created the signaling table and added it to my Debezium config.
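For reference, this is roughly how the signal table is wired into the connector config (the property name comes from the Debezium docs; the table name is just my setup):

"signal.data.collection": "public.debezium_signal"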
When I insert a new row to trigger the ad hoc snapshot, I get this in the log:
2025-03-12 10:48:42,698 INFO [postgres-source-connector4|task-0] Requested 'INCREMENTAL' snapshot of data collections '[public.myTable]' with additional conditions '[]' and surrogate key 'mytable_id' (io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot) [debezium-postgresconnector-mytable-change-event-source-coordinator]
2025-03-12 10:48:42,720 INFO [postgres-source-connector4|task-0] No maximum key returned by the query, incremental snapshotting of table 'public.mytable' finished as it is empty (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource) [debezium-postgresconnector-mytable-change-event-source-coordinator]
But my table is not empty at all, and regular CDC works fine for it.
My table has a composite primary key, so I also tried providing a surrogate key in the payload when inserting the row into the signal table. I tried the same thing on a new table, but it still didn't work.
Here is my insert query:
INSERT INTO public.debezium_signal VALUES ('test2', 'execute-snapshot', '{"data-collections": ["public.testtable"]}');
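The variant with the surrogate key looked roughly like this (a sketch: the id value is arbitrary, the column names follow the standard signal table layout, and mytable_id is the surrogate key column reported in the log above):

-- request an incremental ad hoc snapshot, overriding the composite PK with a surrogate key
INSERT INTO public.debezium_signal (id, type, data)
VALUES ('test3', 'execute-snapshot',
        '{"data-collections": ["public.myTable"], "type": "incremental", "surrogate-key": "mytable_id"}');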
EDIT: it seems to be working for a new table with only a few rows. In fact, both the initial snapshot and the ad hoc snapshot fail for my older table with 1k+ rows; I suspect this is because there are too many rows or the rows are too big.