I'm using Spark 3.5.4 running in a Kubernetes cluster. The job reads JSON files and writes Parquet.
ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /ip:port is closed
java.util.concurrent.RejectedExecutionException: Task Future(<not completed>) rejected from java.util.concurrent.ThreadPoolExecutor@7d24f025[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4939]
at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833)
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365)
at scala.concurrent.impl.ExecutionContextImpl.execute(ExecutionContextImpl.scala:21)
at scala.concurrent.impl.Promise$Transformation.submitWithValue(Promise.scala:429)
at scala.concurrent.impl.Promise$DefaultPromise.submitWithValue(Promise.scala:338)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete0(Promise.scala:285)
at scala.concurrent.impl.Promise$Transformation.handleFailure(Promise.scala:444)
I tried increasing "spark.network.timeout" to "50000", which didn't help.
Tried several configs like this:
.config("spark.shuffle.io.retryWait", "1200s")
.config("spark.network.timeout", "50000")
.config("spark.executor.heartbeatInterval", "1000")
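For reference, a minimal sketch (PySpark-style Python; the values are illustrative assumptions, not tuned recommendations) of how these settings are usually grouped. One constraint worth checking: spark.executor.heartbeatInterval must stay well below spark.network.timeout, otherwise executors can be declared dead before they ever heartbeat.

```python
# Candidate settings for shuffle/network stability.
# Values below are placeholders, not recommendations for this workload.
conf = {
    "spark.network.timeout": "600s",
    "spark.shuffle.io.retryWait": "60s",
    "spark.shuffle.io.maxRetries": "10",
    "spark.executor.heartbeatInterval": "60s",
}

# Sanity check: heartbeat interval must be much smaller than the network
# timeout, or heartbeats can never arrive before the driver gives up.
hb = int(conf["spark.executor.heartbeatInterval"].rstrip("s"))
net = int(conf["spark.network.timeout"].rstrip("s"))
assert hb < net, "heartbeatInterval must be below spark.network.timeout"

# Applying them when building the session (requires pyspark):
# from pyspark.sql import SparkSession
# builder = SparkSession.builder.appName("json-to-parquet")
# for k, v in conf.items():
#     builder = builder.config(k, v)
# spark = builder.getOrCreate()
```

Note that bare numbers like "50000" and "1000" are interpreted with default units (seconds for spark.network.timeout, milliseconds for spark.executor.heartbeatInterval), so writing explicit suffixes such as "600s" avoids mixing up the two.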
Source: Stack Overflow question "Still have 1 requests outstanding when connection when running spark with huge dataset".