We have data written to S3 in Hudi format, partitioned by dt. Recently, we started receiving very large numbers for some columns stored with the long datatype. These numbers exceeded the maximum value of the long datatype, resulting in null values for those columns in the Parquet files and in the Presto table. To address this, we decided to change the datatype to double. Since Hudi schema evolution supports promoting long/bigint to double, we were able to write the data with the new schema to S3 successfully.
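For reference, the schema change was done roughly like the minimal PySpark sketch below; the table name, paths, key/precombine fields, and Hudi options are placeholders, not our exact job:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumes Spark is launched with the Hudi bundle on the classpath and S3 access configured.
spark = SparkSession.builder.appName("hudi-long-to-double").getOrCreate()

# Read the incoming batch and cast the affected column to double
# ("col1", "id", "ts", and the S3 paths are illustrative placeholders).
df = (spark.read.parquet("s3a://abc/incoming/")
      .withColumn("col1", col("col1").cast("double")))

hudi_options = {
    "hoodie.table.name": "tableA",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.partitionpath.field": "dt",
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.datasource.write.operation": "upsert",
}

# Hudi accepts the widened schema (long -> double is a supported evolution),
# so new commits land in S3 with col1 stored as double.
df.write.format("hudi").options(**hudi_options).mode("append").save("s3a://abc/tableA/")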
However, after creating a Presto table with the new schema on top of this location, we are facing datatype mismatch issues when querying it, because the older partitions still contain the old datatype. For example:
select * from tableA where dt = date '2025-01-07'
This works, because the data of the dt=2025-01-07 partition in S3 matches the table schema.
select * from tableA
This fails with the error: "The column col1 of table tableA is declared as type double, but the Parquet file (s3a://abc/xyz.parquet) declares the column as type INT64", because the remaining partitions' data in S3 still have the old datatype.
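To confirm which physical type each partition actually carries, the partition files can be inspected directly (a small illustrative check; the older partition date and paths are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("check-partition-schema").getOrCreate()

# Reading a single partition's files shows the physical type Presto sees:
# old partitions still report col1 as bigint (INT64), new ones as double.
spark.read.parquet("s3a://abc/tableA/dt=2024-12-31/").printSchema()
spark.read.parquet("s3a://abc/tableA/dt=2025-01-07/").printSchema()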
Is there any workaround for this?
We also tried long -> string, but faced the same issue.