🔧 How to Fix [JVM_ATTRIBUTE_NOT_SUPPORTED] Error in Databricks When Using _jsparkSession
If you’re working with Apache Spark on Databricks and encounter this frustrating error:
[JVM_ATTRIBUTE_NOT_SUPPORTED] Directly accessing the underlying Spark driver JVM using the attribute '_jsparkSession' is not supported on shared clusters.
You’re not alone! This error usually appears when your code (or a helper function like overwrite_partition()) tries to tap into internal JVM attributes that aren’t supported in certain cluster configurations.
In this blog, we’ll walk through what causes this error, how to fix it, and how to configure your Databricks cluster the right way.
🔍 Understanding the Error
The error is typically triggered by a call such as:
overwrite_partition(final_df, 'f1_processed', 'pit_stops', 'race_id')
And the traceback shows:
raise PySparkAttributeError(
error_class="JVM_ATTRIBUTE_NOT_SUPPORTED",
message_parameters={"attr_name": name}
)
This happens because the function tries to access:
spark._jsparkSession
This internal attribute is not allowed on shared clusters for security and sandboxing reasons.
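For context, partition-overwrite helpers like this often go through the JVM catalog to check whether the target table already exists. The sketch below is a hypothetical version of such a helper (your overwrite_partition may look different), with the offending call marked:

def overwrite_partition(input_df, db_name, table_name, partition_column):
    # Replace only the partitions present in input_df, not the whole table
    spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
    # This JVM catalog call is what raises JVM_ATTRIBUTE_NOT_SUPPORTED on shared clusters
    if spark._jsparkSession.catalog().tableExists(f"{db_name}.{table_name}"):
        # insertInto matches columns by position; the partition column should be last
        input_df.write.mode("overwrite").insertInto(f"{db_name}.{table_name}")
    else:
        input_df.write.mode("overwrite").partitionBy(partition_column) \
            .format("delta").saveAsTable(f"{db_name}.{table_name}")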
✅ Solution: Use a Dedicated (Single User) Cluster with Unrestricted Policy
To bypass this limitation, you need to switch your Databricks cluster to an Access Mode that supports internal JVM access.
Here’s how to fix it:
1. Go to your cluster settings
Open your cluster in the Databricks UI.
2. Set the access mode to:
Access mode: Dedicated (formerly Single User)
3. Set the policy to:
Policy: Unrestricted
This combination ensures that:
- You’re the only user (so safe to access internal attributes)
- You are not restricted from touching JVM-level components
4. Restart the cluster
Once you’ve made the changes, click Confirm and Restart to apply them.
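After the cluster comes back up, a quick sanity check in a notebook cell will tell you whether JVM access is now available (this is just a diagnostic snippet, not something to keep in production code):

try:
    # On a Dedicated cluster this prints the underlying Java SparkSession object
    print(spark._jsparkSession)
except Exception as e:
    # On a shared cluster you would still see JVM_ATTRIBUTE_NOT_SUPPORTED here
    print(f"JVM access still blocked: {e}")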
✅ Screenshot Example
Here’s what your final cluster settings should look like:

- Policy: Unrestricted
- Access Mode: Dedicated (formerly Single User)
- Runtime: 15.4 LTS (Scala 2.12, Spark 3.5.0)
💡 Bonus Tip: Avoid _jsparkSession When Possible
While switching to a dedicated cluster works, a better long-term approach is to avoid internal attributes altogether.
Here’s a safer way to write your logic using only the public Spark API:
final_df.write \
.mode("overwrite") \
.partitionBy("race_id") \
.format("delta") \
.saveAsTable("f1_processed.pit_stops")
This avoids relying on private internals and ensures better compatibility across all cluster types.
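One thing to keep in mind: mode("overwrite") with saveAsTable replaces the entire table. If you only want to replace the partitions contained in final_df (which is what partition-overwrite helpers usually aim for), Delta Lake’s replaceWhere option does this with public APIs alone. The snippet below is a sketch that assumes the table already exists as a Delta table partitioned by race_id and that race_id is numeric:

# Collect the partition values being loaded (assumes a small number of race_ids)
race_ids = [str(r.race_id) for r in final_df.select("race_id").distinct().collect()]

final_df.write \
    .mode("overwrite") \
    .option("replaceWhere", f"race_id IN ({', '.join(race_ids)})") \
    .format("delta") \
    .saveAsTable("f1_processed.pit_stops")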
🏁 Final Thoughts
- 🔒 If you’re on a shared or secure cluster, accessing _jsparkSession will always fail.
- 🛠️ Switching to a dedicated cluster with an unrestricted policy solves the issue.
- 💻 Best practice: use public DataFrame APIs like df.write.partitionBy(...).saveAsTable(...) wherever possible.