Databricks notebook kernel dying when running Pandas aggregations

My question is more conceptual and relating to how Databricks allocates compute resource.

I'm experiencing an issue where, when I attempt to run a series of pandas aggregations on a small dataset (300k rows) in Databricks, the cell runs for an extended period then the kernel dies and the notebook detaches.

This typically happens when another Databricks user is running relatively intensive spark I/O operations and other compute intensive operations. When there is only the user running the pandas aggregations active on the cluster, the operations run just fine. I've checked the metrics tab in Databricks and there are no memory issues/spikes, we're barely running at 30% of RAM.

Has anyone else seen any similar issues and is there a way around this? We have rewritten the code to leverage pyspark but this comes with a lot of overhead and takes much longer to run than the pandas version, and is completely overkill for the size of dataset.
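For illustration, a minimal sketch of the kind of workload described above (the column names and distributions are hypothetical, not from the original question). The key point is that every line here executes in a single Python process on the Databricks driver node; the workers are never involved:

```python
import pandas as pd
import numpy as np

# Hypothetical stand-in for the ~300k-row dataset described in the question
rng = np.random.default_rng(0)
n = 300_000
df = pd.DataFrame({
    "region": rng.choice(["EMEA", "APAC", "AMER"], size=n),
    "product": rng.choice(["a", "b", "c", "d"], size=n),
    "revenue": rng.normal(100.0, 15.0, size=n),
})

# A series of aggregations of the sort described; this runs entirely
# on the driver node, competing with Spark's driver-side work for CPU
summary = (
    df.groupby(["region", "product"], as_index=False)
      .agg(total_revenue=("revenue", "sum"),
           mean_revenue=("revenue", "mean"),
           n_rows=("revenue", "size"))
)
print(summary.shape)  # 3 regions x 4 products -> (12, 5)
```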

Asked Nov 21, 2024 at 11:19 by Tom Smith

1 Answer


The issue is likely driver-node overloading. Pandas code runs entirely in a single process on the driver, and Spark also does its scheduling and collect-style work there, so another user's intensive Spark jobs can starve your kernel of driver CPU even when cluster memory looks healthy. To confirm the exact cause, check the driver logs for specific bottlenecks such as CPU starvation or task queuing; this will indicate whether the problem is CPU, I/O, or something else.
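One stdlib-only way to check for CPU starvation from inside the notebook (a diagnostic sketch, not something from the original answer) is to compare wall-clock time against the process's own CPU time around the slow cell. If wall time greatly exceeds CPU time, the kernel spent most of the interval waiting for a CPU rather than computing:

```python
import time

# Wrap the slow pandas cell with these timers
wall0, cpu0 = time.perf_counter(), time.process_time()

# ... stand-in for the pandas aggregations; replace with the real workload ...
total = sum(i * i for i in range(1_000_000))

wall = time.perf_counter() - wall0   # elapsed real time
cpu = time.process_time() - cpu0     # time this process actually ran on a CPU

# A ratio well above 1 suggests the process was starved of driver CPU
print(f"wall={wall:.3f}s cpu={cpu:.3f}s ratio={wall / max(cpu, 1e-9):.1f}")
```

On an idle driver the ratio should stay close to 1; when another user's Spark job is saturating the driver, the same cell will show a much larger ratio.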

Please share the driver logs, and let me know if you need any more information.

Tags: python, Databricks notebook kernel dying when running Pandas aggregations, Stack Overflow