
I am trying to load data into Druid from an external SQL (Postgres) storage. Below is what my query looks like:

INSERT INTO test
SELECT
   Id
  ,ClusterId
FROM TABLE(
    EXTERN(
      '{"type":"sql","inputSource":{"type":"postgresql","database":{"type":"postgresql","connectorConfig":{"connectURI":"jdbc:postgresql://host:5432/db_name","user":"user","password":"pwd"}}},"sqls":["SELECT * FROM Table"]}',
      '{ "type":"json" }',
      '[{"name":"Id","type":"string"},{"name":"ClusterId","type":"long"}]'
    )
  )
PARTITIONED BY HOUR
CLUSTERED BY ClusterId

I got the following error:

Invalid value for the field [inputSource]. Reason: [Cannot construct instance of org.apache.druid.metadata.input.SqlInputSource, problem: SQL Metadata Connector not configured! at [Source: (String)"{"type":"sql",.....

I have used the quickstart setup; below are the core settings from my environment file, just for reference.

My assumption is that I am missing a JDBC driver, which I need to put inside the extensions folder and specify in druid_extensions_loadList. Possibly I need to put it inside the /opt/druid/lib folder of the broker container (see image below); should I update the classpath then?

Which exact dependency should I load, and which corresponding changes need to be made? Thanks.

P.S.

It also looks like the sqls property should be a root-level property, not part of inputSource as shown in the docs.


1 Answer


You have "postgresql-metadata-storage" included in the druid_extensions_loadList, which is correct. make sure that the corresponding JAR file (PostgreSQL JDBC driver) is present in the /opt/druid/extensions/postgresql/ directory. If the postgresql directory under extensions doesn't exist, create it and place the JAR file there. Metadata Storage Configuration:

Your metadata storage configuration looks correct:

druid_metadata_storage_type=postgresql
druid_metadata_storage_connector_connectURI=jdbc:postgresql://postgres:5432/druid
druid_metadata_storage_connector_user=druid
druid_metadata_storage_connector_password=FoolishPassword

Ensure that:

  • The database druid exists in PostgreSQL.
  • The user druid has the necessary permissions (e.g., CREATE, INSERT, UPDATE, SELECT) on the database (see the SQL sketch below).
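
If you still need to create them, a minimal PostgreSQL sketch looks like the following; the database name, user, and password are taken from the configuration above, so adjust them to your environment:

-- run as a PostgreSQL superuser; names/password are assumptions copied from the config above
CREATE USER druid WITH PASSWORD 'FoolishPassword';
CREATE DATABASE druid OWNER druid;
GRANT ALL PRIVILEGES ON DATABASE druid TO druid;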

Place the postgresql-<version>.jar in the following directory:

/opt/druid/extensions/postgresql/
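
If you don't have the driver JAR locally yet, you can download it from Maven Central; the version below is only an example assumption, so pick whichever current PostgreSQL JDBC version suits you:

curl -fLO https://repo1.maven.org/maven2/org/postgresql/postgresql/42.7.3/postgresql-42.7.3.jar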

If you're running Druid in Docker, use this command to copy the driver into the container:

docker cp postgresql-<version>.jar <container_id>:/opt/druid/extensions/postgresql/

After making changes, restart all Druid services (Coordinator, Overlord, MiddleManager, Broker, etc.) to apply the updated configuration:

docker restart <container_id>
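
If you are running the quickstart docker-compose setup, note that each Druid service runs in its own container, so it can be simpler to restart the whole stack from the directory containing the compose file (a sketch, assuming Compose V2; use docker-compose restart with the older standalone binary):

docker compose restart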

Based on your earlier mention, the sqls property should likely be at the root level in the EXTERN configuration. You can update your SQL query as follows:

INSERT INTO test
SELECT
  Id,
  ClusterId
FROM TABLE(
    EXTERN(
      '{
        "type":"sql", 
        "sqls":["SELECT Id, ClusterId FROM Table"],
        "inputSource":{
          "type":"postgresql",
          "database":{
            "type":"postgresql",
            "connectorConfig":{
              "connectURI":"jdbc:postgresql://postgres:5432/db_name",
              "user":"user",
              "password":"pwd"
            }
          }
        }
      }',
      '{ "type":"json" }',
      '[{"name":"Id","type":"string"},{"name":"ClusterId","type":"long"}]'
    )
)
PARTITIONED BY HOUR
CLUSTERED BY ClusterId;
