
We have data written to S3 in Hudi format, partitioned by dt. Recently, we started receiving very large numbers for some columns stored as the long datatype. These numbers exceeded the maximum value of long, resulting in null values for those columns in the Parquet files and the Presto table. To address this, we decided to change the datatype to double. Since Hudi schema evolution supports promoting long/bigint to double, we were able to write the data with the new schema to S3 successfully.
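For context, a minimal PySpark sketch of how the long-to-double write might look is below. The staging path, table base path, record key (id), and precombine field (updated_at) are placeholders rather than values from our actual setup, and it assumes the hudi-spark bundle is on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("hudi-long-to-double")
    # Hudi requires the Kryo serializer.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Placeholder staging path for the newly arriving data.
incoming = spark.read.parquet("s3a://abc/staging/")

# Promote the overflowing column from long to double before writing;
# long/bigint -> double is a promotion Hudi schema evolution accepts.
incoming = incoming.withColumn("col1", col("col1").cast("double"))

hudi_options = {
    "hoodie.table.name": "tableA",
    "hoodie.datasource.write.recordkey.field": "id",           # placeholder key
    "hoodie.datasource.write.precombine.field": "updated_at",  # placeholder field
    "hoodie.datasource.write.partitionpath.field": "dt",
    "hoodie.datasource.write.operation": "upsert",
}

(
    incoming.write
    .format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3a://abc/tableA/")  # placeholder table base path
)
```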

However, when we create a Presto table with the new schema on top of this location and query it, we run into datatype mismatch issues because the older partitions still contain data written with the old datatype. For example:

select * from tableA where dt = date '2025-01-07'

This works, because the schema of the dt=2025-01-07 partition data in S3 matches the table schema.

select * from tableA

This fails with the error "The column col1 of table tableA is declared as type double, but the Parquet file (s3a://abc/xyz.parquet) declares the column as type INT64", because the data in the remaining partitions in S3 still has the old datatype.
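To check which partitions still carry the old physical type, a small diagnostic sketch like the one below can print the Parquet-level type of col1 for a given file. It assumes pyarrow and s3fs are available (which our setup may or may not use), and the object key is a placeholder:

```python
import pyarrow.parquet as pq
import s3fs

fs = s3fs.S3FileSystem()

# Placeholder object key: point this at one Parquet file per partition.
with fs.open("s3://abc/xyz.parquet", "rb") as f:
    schema = pq.ParquetFile(f).schema_arrow
    # Old partitions print int64; partitions written after the change print double.
    print(schema.field("col1").type)
```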

Is there any workaround for this?

I also tried long -> string, but still faced the same issue.
