IBM Data Platform Ideas Portal for Customers


This portal is to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Ideas

Showing 69 of 18234

Spark job to be able to access, read and write on external database without the credentials being exposed anywhere (instance, logs, job metadata, etc…)

We want to create a job on a Spark instance of the Analytics Engine powered by Apache Spark that runs a Spark application reading from and writing to an Exasol database. To enable database access, we need to pass our user credentials into the job....
about 1 month ago in Analytics Engine 0 Submitted
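The request above asks that database credentials never appear in the job definition, logs, or metadata. One common pattern is to inject credentials at runtime (for example from a Kubernetes secret mounted as environment variables) and build the Spark JDBC options from them inside the application. The sketch below assumes hypothetical variable names (`EXASOL_JDBC_URL`, `EXASOL_USER`, `EXASOL_PASSWORD`) and the Exasol JDBC driver class; it is an illustration of the idea, not an IBM-provided mechanism.

```python
import os

def jdbc_options_from_env(url_var="EXASOL_JDBC_URL",
                          user_var="EXASOL_USER",
                          password_var="EXASOL_PASSWORD"):
    """Build Spark JDBC options from environment variables so credentials
    never appear in the job payload or logs.

    The variable names above are illustrative assumptions, not an
    Analytics Engine convention.
    """
    missing = [v for v in (url_var, user_var, password_var)
               if v not in os.environ]
    if missing:
        raise KeyError(f"missing credential variables: {missing}")
    return {
        "url": os.environ[url_var],
        "user": os.environ[user_var],
        "password": os.environ[password_var],
        # Exasol's JDBC driver class; adjust for your driver version.
        "driver": "com.exasol.jdbc.EXADriver",
    }

# Inside the Spark application the dict would be used roughly as:
#   spark.read.format("jdbc").options(**jdbc_options_from_env()).load()
```

Because the options are assembled only inside the running executor/driver process, nothing sensitive needs to be written into the job submission itself.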

Ability to paste '-' characters as 0.

0 values formatted as a '-' in Excel cannot be pasted into Planning Analytics. This causes a spreading error. Users often format 0 values as a '-'. It would be helpful if Planning Analytics could interpret '-' as a 0 on paste.
28 days ago in Analytics Engine 0 Submitted
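The requested behaviour is a small parsing rule: a lone '-' (Excel's accounting-format rendering of zero) should be read as 0 rather than rejected. A minimal sketch of that rule, not actual Planning Analytics logic:

```python
def parse_pasted_cell(text: str) -> float:
    """Interpret a pasted spreadsheet cell value.

    A lone '-' (as Excel's accounting format renders zero) is treated
    as 0, which is the behaviour this idea requests. Illustrative only.
    """
    stripped = text.strip()
    if stripped == "-":
        return 0.0
    # Tolerate thousands separators such as "1,234.5".
    return float(stripped.replace(",", ""))
```

Any value that is not a lone dash still goes through normal numeric parsing, so genuinely malformed input continues to fail.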

Add a Trace Security in order to understand how a right is evaluated by the engine

In PAL/TM1, as with business rules, security can be a mix of data and/or rules on the system security objects. For business rules we can use "trace calculations", which is a really great feature. For security, we can only use... our brain, so a "...
over 2 years ago in Analytics Engine 3 Not under consideration

MDE Job is failed with error "Deployment not found with given id"

We found an issue related to the Spark/MDE blocker. The MDE job first triggers a spark-runtime deployment; the spark-runtime deployment then triggers the corresponding spark-master/worker pods. If the 1st spark-master/worker pod does not come up wi...
over 1 year ago in Analytics Engine 0 Under review

Ability to resize the columns across the job page, as well as any other columns in order to see text that is too long

Column resizing so that we can see the full line of text rather than having it cut off when the text is too long. The ability to resize would also help when copying and pasting part of the string or the entire string.
10 months ago in Analytics Engine 1 Submitted

Reservation of nodes for spark job/ Labeling, Taint and Toleration of workers and spark pods 

As a CP4D service provider, we would like to use dedicated worker nodes only for Spark job execution via the PySpark API. One reason for this is that we sometimes see the Spark jobs use up all resources. In this scenario we would like a...
about 3 years ago in Analytics Engine 0 Not under consideration

Request is for ELIMS to be imported into LSF Simulator

Currently LSF Simulator only permits LIMS resources, not ELIMS. All LSF clusters use both LIMS and ELIMS, so why would the Simulator support only LIMS resources and not ELIMS?
about 1 year ago in Spectrum LSF / Simulation & Prediction 1 Not under consideration

Include method to build ls_sim license files to required format

For LSF clusters that employ license scheduler... During the direct importing of a cluster configuration from the LSF Cognitive GUI, or when generating a tar.gz file for importing, build the license files into the ls_sim directory automatically, o...
8 months ago in Spectrum LSF / Simulation & Prediction 1 Future consideration

support auth mode 3 for scheduling forecasts in PAW

Scheduling a forecast is not supported in Planning Analytics Local configured with Windows integrated authentication (mode 3). Compare with: https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=forecasts-schedule-baseline-forecast since ther...
over 1 year ago in Analytics Engine 0 Submitted

Allow displaying spark CPU and memory values on Monitoring page

Currently, Spark runtimes do not register the pods on the Spark side with the runtimes framework, so the Monitoring page shows only 1 vCPU and 1 GB RAM regardless of the chosen Spark environment size; what is registered is only the proxy pod on the Jupyter Client sid...
over 3 years ago in Analytics Engine 1 Future consideration