I'm facing the attached problem when reading an ORC file:
Is it possible to change this buffer size of 65536 to the needed 1817279? Which configuration values do I have to adapt in order to set this value?
I did not find the correct configuration value in the documentation.
asked Feb 3 at 16:21 by Ruben Hartenstein; edited Feb 3 at 18:51 by f_puras

1 Answer
The exceeded buffer size is related to an issue with HDFS Erasure Coding (EC) file encoding. See this issue at Apache ORC:
- Buffer size too small · Issue #1939 · apache/orc
It could be traced back to the Hadoop HDFS bug:
- [HDFS-17535] I have confirmed the EC corrupt file, can this corrupt file be restored? - ASF JIRA
So, check whether your Hadoop HDFS version is affected by this bug.
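A quick way to check both things is with the Hadoop CLI: print the cluster's Hadoop version, and ask whether the affected file is stored under an erasure-coding policy (EC files are the ones hit by HDFS-17535). A minimal sketch, assuming a Hadoop 3.x client on the PATH; the HDFS path shown is a hypothetical placeholder, substitute your own file:

```shell
# Print the Hadoop version the client/cluster is running,
# so it can be compared against the versions affected by HDFS-17535.
hdfs version

# Show the erasure-coding policy of the ORC file (hypothetical path).
# If the file is EC-encoded, it may be affected by the bug described above;
# non-EC (replicated) files print that no EC policy is set.
hdfs ec -getPolicy -path /warehouse/example/part-00000.orc
```

If the file turns out to be erasure-coded and the Hadoop version is affected, the "buffer size too small" error is a symptom of EC block corruption rather than a misconfigured ORC buffer setting.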