I am trying to understand the following graph Databricks is showing me, and failing:

What is that constant lightly shaded area close to 138 GB? It is not explained in the "Usage type" legend. The job runs entirely on the driver node, not utilizing any of the Spark worker nodes; it's just a Python script. I know that the ~138 GB memory usage is real, because the job was failing on a 128 GB driver node and seems to be happy on a 256 GB driver.
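For anyone wanting to sanity-check what the driver actually holds, independently of the cluster metrics UI, here is a minimal sketch using `psutil` (not part of the original question; `psutil` is preinstalled on Databricks runtimes, but treat its availability in your environment as an assumption):

```python
import psutil

# Total vs. used memory on the node this code runs on -- the driver,
# when run from a plain Python script with no Spark actions.
vm = psutil.virtual_memory()
print(f"node total: {vm.total / 1e9:.1f} GB, used: {vm.used / 1e9:.1f} GB")

# Resident set size of just this Python process.
rss = psutil.Process().memory_info().rss
print(f"driver Python process RSS: {rss / 1e9:.1f} GB")
```

Logging these two numbers periodically would show whether the ~138 GB is held by the Python process itself or by something else on the driver node.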
It is a race between SO and the Databricks community now! https://community.databricks.com/t5/administration-architecture/help-undersanding-ram-utilization-graph/m-p/112864#M3139
asked Mar 14 at 15:15 by MK. · edited Mar 17 at 23:53

1 Answer
One thing that might explain the discrepancy in the total memory numbers: if you leave the Compute drop-down at its default, it averages the metrics across all nodes in the cluster. Make sure you select just the driver node when you want to see metrics for that node alone.
Cluster Metrics - View Metrics at the Node Level
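To make the averaging effect concrete, a hypothetical illustration (the worker numbers are invented, not from this thread): a driver holding ~138 GB averaged with two near-idle workers produces a much lower line than the driver-only view.

```python
# Hypothetical numbers: a ~138 GB driver averaged with two near-idle workers.
driver_used_gb = 138.0
worker_used_gb = [4.0, 4.0]  # assumed idle-worker usage

nodes = [driver_used_gb] + worker_used_gb
print(f"cluster-wide average: {sum(nodes) / len(nodes):.1f} GB")  # ~48.7 GB
print(f"driver-only view:     {driver_used_gb:.1f} GB")          # 138.0 GB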