I’m using the Debezium JDBC Sink Connector to ingest data from a Kafka topic into PostgreSQL. The database is experiencing high CPU utilization, driven primarily by the frequent metadata queries the connector issues rather than by the actual INSERT statements.

How can I minimize or avoid these metadata queries to reduce the CPU load?

Sample metadata queries (literal values appear as `_` placeholders in the captured statement fingerprints):

SELECT _ AS table_cat, n.nspname AS table_schem, c.relname AS table_name, CASE (n.nspname ~ _) OR (n.nspname = _) WHEN _ THEN CASE WHEN (n.nspname = _) OR (n.nspname = _) THEN CASE c.relkind WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ ELSE _ END WHEN n.nspname = _ THEN CASE c.relkind WHEN _ THEN _ WHEN _ THEN _ ELSE _ END ELSE CASE c.relkind WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ ELSE _ END END WHEN _ THEN CASE c.relkind WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ WHEN _ THEN _ ELSE _ END ELSE _ END AS table_type,
d.description AS remarks, _ AS type_cat, _ AS type_schem, _ AS type_name, _ AS self_referencing_col_name,
_ AS ref_generation
 FROM pg_catalog.pg_namespace AS n, pg_catalog.pg_class AS c
   LEFT JOIN pg_catalog.pg_description AS d
    ON (((c.oid = d.objoid)
    AND (d.objsubid = _))
    AND (d.classoid = _::REGCLASS))
   WHERE (c.relnamespace = n.oid)
    AND (c.relname LIKE _)
 ORDER BY table_type, table_schem, table_name

SELECT *
 FROM (SELECT n.nspname, c.relname, a.attname, a.atttypid, a.attnotnull OR ((t.typtype = _)
    AND t.typnotnull) AS attnotnull, a.atttypmod, a.attlen, t.typtypmod, row_number() OVER (PARTITION BY a.attrelid
 ORDER BY a.attnum) AS attnum, NULLIF(a.attidentity, _) AS attidentity, NULLIF(a.attgenerated, _) AS attgenerated,
pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc, dsc.description, t.typbasetype, t.typtype
 FROM pg_catalog.pg_namespace AS n
   JOIN pg_catalog.pg_class AS c
    ON (c.relnamespace = n.oid)
   JOIN pg_catalog.pg_attribute AS a
    ON (a.attrelid = c.oid)
   JOIN pg_catalog.pg_type AS t
     ON (a.atttypid = t.oid)
   LEFT JOIN pg_catalog.pg_attrdef AS def
     ON ((a.attrelid = def.adrelid)
     AND (a.attnum = def.adnum))
   LEFT JOIN pg_catalog.pg_description AS dsc
     ON ((c.oid = dsc.objoid)
     AND (a.attnum = dsc.objsubid))
   LEFT JOIN pg_catalog.pg_class AS dc
     ON ((dc.oid = dsc.classoid)
     AND (dc.relname = _))
   LEFT JOIN pg_catalog.pg_namespace AS dn
    ON ((dc.relnamespace = dn.oid)
    AND (dn.nspname = _))
   WHERE (((c.relkind IN (_, __more__))
    AND (a.attnum > _))
    AND (NOT a.attisdropped))
    AND (c.relname LIKE _)) AS c
   WHERE _
 ORDER BY nspname, c.relname, attnum
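For context: these two statements appear to match the catalog queries the PostgreSQL JDBC driver issues behind `DatabaseMetaData.getTables()` and `DatabaseMetaData.getColumns()`, which the sink uses when it resolves a target table. Assuming the connector caches table metadata per task (an assumption, not something confirmed here), the lookup volume would scale with the number of tasks, since each task resolves tables over its own connection. The toy sketch below (hypothetical names, not Debezium's internal API) illustrates that caching idea: memoizing the per-table lookup means the catalog round-trip happens at most once per table per task.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Toy illustration only: memoize per-table metadata so the expensive catalog
// lookup (standing in for a DatabaseMetaData.getColumns() round-trip) runs at
// most once per table. All names here are hypothetical.
public class MetadataCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // the "real" lookup
    public int loads = 0;                          // counts actual lookups

    public MetadataCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String describe(String table) {
        // computeIfAbsent invokes the loader only on a cache miss
        return cache.computeIfAbsent(table, t -> {
            loads++;
            return loader.apply(t);
        });
    }

    public static void main(String[] args) {
        MetadataCache mc = new MetadataCache(t -> "columns-of-" + t);
        mc.describe("sensordata");
        mc.describe("sensordata"); // second call is served from the cache
        System.out.println(mc.loads); // prints 1
    }
}
```

If the connector's lookups recur once per task rather than once per record, reducing the task count (see the config below) attacks the same multiplier from the configuration side.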

Connector Configuration (YAML):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: kafka-connector-sensordata
  namespace: spc-kafka
  labels:
    strimzi.io/cluster: kafka-crdb-connect
spec:
  autoRestart:
    enabled: true
  class: io.debezium.connector.jdbc.JdbcSinkConnector
  tasksMax: 10
  config:
    connection.password: XXXXXXX
    connection.url: jdbc:postgresql://XXXXXXX:26257/db_name
    connection.username: user213
    delete.enabled: 'false'
    errors.deadletterqueue.context.headers.enable: true
    errors.deadletterqueue.topic.name: dlq_sensordata_topic
    errors.tolerance: all
    insert.mode: insert
    key.converter: org.apache.kafka.connect.json.JsonConverter
    primary.key.mode: none
    schema.evolution: none
    table.name.format: ${topic}
    topics: sensordata
    value.converter: org.apache.kafka.connect.json.JsonConverter
    flush.failure.max.retries: 10
    flush.failure.retries.wait.ms: 60000
    flush.max.retries: 10
    flush.retry.delay.ms: 60000
    errors.log.enable: true
    errors.max.retries: 10
    retriable.restart.connector.wait.ms: 60000

Any suggestions or configuration tips would be highly appreciated!

Tags: postgresql. Original question: "How to Reduce Metadata Queries in Debezium JDBC Sink Connector" (Stack Overflow)