IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



Status Submitted
Workspace Analytics Engine
Created by Guest
Created on Nov 5, 2025

Allow a Spark job to access, read, and write to an external database without the credentials being exposed anywhere (instance, logs, job metadata, etc.)

We want to create a job on a Spark instance of the Analytics Engine powered by Apache Spark that runs a Spark application reading from and writing to an Exasol database. To enable database access, we need to pass our user credentials into the job.
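For illustration, a minimal sketch of the kind of job described above, assuming a JDBC connection to Exasol (the hostname, table, user, and password below are placeholders, not real values). Wherever the options dict comes from, the credentials have to appear in plain text at some point in the job's configuration:

```python
# Sketch only: illustrative JDBC options for an Exasol read/write Spark job.
# Hostname, table, and credentials are placeholders.
jdbc_options = {
    "url": "jdbc:exa:exasol.example.com:8563",  # placeholder Exasol JDBC URL
    "driver": "com.exasol.jdbc.EXADriver",      # Exasol JDBC driver class
    "dbtable": "ANALYTICS.SALES",               # placeholder table
    "user": "analyst",                          # credential in plain text
    "password": "s3cret",                       # credential in plain text
}

# Inside the Spark application this dict would be consumed roughly as:
#   df = spark.read.format("jdbc").options(**jdbc_options).load()
#   df.write.format("jdbc").options(**jdbc_options).mode("append").save()
#
# Whatever mechanism delivers jdbc_options["password"] to the job
# (argument, environment variable, conf entry) records it somewhere.
```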

Our challenge is that so far, we haven’t found any way to provide these credentials without exposing them somewhere — whether in the instance itself, in the logs, on the history server, or within the job metadata.

The KMS documentation mainly covers encryption of Parquet files and key lifecycle management, which doesn’t address how we need to handle database credentials in Analytics Engine jobs. The section about using secrets from vaults in connections works well for CP4D connections in projects and notebooks, but in our case, Spark jobs don’t have access to connection assets in Projects.

The general vaults overview talks about integration with external vaults, but we don’t have access to external vaults. It also doesn’t give us a concrete way to safely inject secrets into AE jobs without exposing them in logs or metadata. Additionally, the CP4D vault itself isn’t directly accessible from inside a Spark job, because that would require passing a token into the job — which again exposes sensitive information in the job metadata.

What we really need is clear guidance on how to supply credentials (or tokens) to AE jobs in a way that does not store them in the job metadata. Passing them via arguments or environment variables still exposes the credentials in job metadata, so that is not a viable solution for us.
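To make the exposure concrete, here is a sketch of a job submission payload in the shape used by the Analytics Engine Spark applications REST API (field names reflect our understanding of that API; all values are placeholders). Both the arguments list and the env block are stored as part of the job definition and come back verbatim when the application's details are fetched, so the password is retrievable from job metadata either way:

```python
import json

# Sketch of a Spark application submission payload (placeholder values).
# Whether the password travels as an argument or an environment variable,
# it is persisted with the job definition and returned on a later GET
# of the application's details.
payload = {
    "application_details": {
        "application": "/myapp/exasol_job.py",  # placeholder application path
        "arguments": ["--db-user", "analyst", "--db-password", "s3cret"],
        "env": {"DB_PASSWORD": "s3cret"},
    }
}

serialized = json.dumps(payload)
print("s3cret" in serialized)  # prints True: the secret sits in stored metadata
```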

With the current system behaviour, our data analysts are not permitted to use a Spark instance to analyse data held in databases. That is a significant limitation for us.

Needed By Quarter