Clashing volumes when SecretClass is used multiple times #653

@sbernauer

Description

Affected Stackable version

?

Affected Apache Spark-on-Kubernetes version

?

Current and expected behavior

We noticed an issue with Spark jobs that use two separate S3 connections (one for logs and one for application data). Both connections reference the same SecretClass for TLS.
This results in the Spark job spec containing two identical volume definitions with the same name, which causes the server-side apply of the Job to fail. We expected that referencing the same SecretClass from multiple connections wouldn't break the job spec. Would it be possible for the Spark Operator to check for duplicate volume definitions and deduplicate them instead of failing?

stackable_operator::logging::controller: Failed to reconcile object controller.name="sparkapplication.spark.stackable.tech" error=reconciler for object SparkApplication.v1alpha1.spark.stackable.tech/myjob.mynamespace failed error.sources=[failed to apply Job, unable to patch resource "myjob", ApiError: failed to create typed patch object (mynamespace/myjob; batch/v1, Kind=Job): .spec.template.spec.volumes: duplicate entries for key [name="XXX-tls-ca-bundle"]: (ErrorResponse { status: "Failure", message: "failed to create typed patch object (mynamespace/myjob; batch/v1, Kind=Job): .spec.template.spec.volumes: duplicate entries for key [name=\"XXX-tls-ca-bundle\"]", reason: "", code: 500 }), failed to create typed patch object (mynamespace/myjob; batch/v1, Kind=Job): .spec.template...
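One conceivable fix (a sketch only, not the operator's actual code; the `Volume` struct below is a hypothetical stand-in for the real Kubernetes volume type): deduplicate the collected volumes by name before building the pod template, dropping exact duplicates and only failing when two volumes share a name but differ in content.

```rust
use std::collections::HashMap;

// Hypothetical minimal stand-in for the Kubernetes Volume type;
// a real operator would use the full k8s Volume struct instead.
#[derive(Clone, Debug, PartialEq, Eq)]
struct Volume {
    name: String,
    secret_class: String,
}

/// Deduplicate volumes by name: identical duplicates are dropped,
/// while two volumes sharing a name but differing in content are an error.
fn dedup_volumes(volumes: Vec<Volume>) -> Result<Vec<Volume>, String> {
    let mut seen: HashMap<String, Volume> = HashMap::new();
    let mut deduped = Vec::new();
    for volume in volumes {
        match seen.get(&volume.name) {
            // First time this name appears: keep the volume.
            None => {
                seen.insert(volume.name.clone(), volume.clone());
                deduped.push(volume);
            }
            // Exact duplicate (e.g. the same SecretClass referenced by
            // two S3 connections): silently drop it.
            Some(existing) if *existing == volume => {}
            // Same name but a different definition: refuse to build the Job.
            Some(_) => {
                return Err(format!(
                    "conflicting definitions for volume {:?}",
                    volume.name
                ))
            }
        }
    }
    Ok(deduped)
}

fn main() {
    let tls = |name: &str| Volume {
        name: name.to_string(),
        secret_class: "tls".to_string(),
    };
    // Two identical CA-bundle volumes collapse into one.
    let volumes = dedup_volumes(vec![tls("ca-bundle"), tls("ca-bundle")]).unwrap();
    println!("{} volume(s) after dedup", volumes.len());
}
```

With this shape, the duplicate `*-tls-ca-bundle` volumes from the two S3 connections would collapse into a single entry, while a genuine name collision between different volumes would still surface as a reconcile error.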

Possible solution

No response

Additional context

https://stackable-workspace.slack.com/archives/C08GM6S8Z8D/p1770737102217349

Environment

No response

Would you like to work on fixing this bug?

None
