IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Ideas (showing 676)

Support pulling data from a Databricks source via a branded origin stage

Johnson & Johnson, a strategic account, is asking for a branded origin stage for Databricks. Currently the only way to ingest data from Databricks is via the JDBC consumer origins, but there are limitations that are causing them issues. Full c...
8 months ago in StreamSets 0 Under review

UI to handle datasets with the same name in different folders

CP4D does not handle datasets with the same name in different folders cleanly. In the UI under 'Data sets', only the newer file is presented. /mnts/gpfs.iis/data/yrtest/cms/BMC# ls -la YRtest.ds -rw-rw-r--. 1 1000650000 root 4518 Feb 21 20:19 YRtest.ds ...
8 months ago in DataStage 0 Under review
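The collision described above is easy to reproduce outside CP4D. The sketch below (illustrative only; the paths and filenames mirror the example in the request, and `find_name_collisions` is a hypothetical helper, not a CP4D API) walks a directory tree and reports basenames that occur in more than one folder — exactly the situation the current UI hides:

```python
import os
from collections import defaultdict

def find_name_collisions(root):
    """Group files under `root` by basename and return the names that
    appear in more than one folder, mapped to their full paths."""
    seen = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            seen[name].append(os.path.join(dirpath, name))
    return {name: paths for name, paths in seen.items() if len(paths) > 1}
```

A UI fix along these lines would show every path for a colliding name instead of silently keeping only the newest file.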

Sort engines by alphabetical order in the datastage console

We created an engine spdsengqlf05 before the engine spdsengqlf04, and they appear in creation order (or xmeta database order) in the console, inside the drop-down engine list at the top right of the screen.
over 1 year ago in DataStage 0 Under review

Add the ability to whitelist by IP by Organization to Control Hub

This ask is for Control Hub, upon login to a particular organization, to check a configuration for that organization that the request comes from a permitted IP address. Ideally, each web request to Control Hub (or maybe even SDC) would run through a ...
8 months ago in StreamSets 1 Functionality already exists
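The per-organization check being requested could look something like the sketch below. This is a minimal illustration of the idea, not Control Hub's implementation: the `ORG_ALLOWLISTS` configuration shape and the `request_allowed` helper are hypothetical.

```python
import ipaddress

# Hypothetical per-organization allowlists keyed by org ID.
# An org with no entry accepts requests from anywhere.
ORG_ALLOWLISTS = {
    "acme": ["203.0.113.0/24", "198.51.100.7/32"],
}

def request_allowed(org_id, client_ip):
    """Return True if `client_ip` falls inside any CIDR range
    configured for the organization."""
    ranges = ORG_ALLOWLISTS.get(org_id)
    if not ranges:
        return True
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in ranges)
```

Running such a check on every web request, as the idea suggests, would reject logins from outside the organization's registered ranges before any credentials are evaluated.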

job comparison between different environments (like in 11.7)

Currently in 11.7 there is an option to compare jobs between different environments. It helps to identify what actually changed compared to the production copy and helps with UAT. The same option is missing in CP4D.
over 1 year ago in DataStage 0 Planned for future release

Require datastage job interim run status in cpdctl dsjob jobinfo command output as same is available in classic datastage dsjob jobinfo option

Require the DataStage job interim run status in the cpdctl dsjob jobinfo command output, as is available in the classic DataStage dsjob jobinfo option. This option is used to get the DataStage job interim status, which is useful when we are trying to captur...
almost 3 years ago in DataStage 1 Functionality already exists
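A wrapper for the requested behavior might extract the status field from the JSON that `cpdctl dsjob jobinfo` emits. The field names below (`status`, `state`) are assumptions for illustration — verify them against the actual cpdctl output on your system:

```python
import json

def interim_status(jobinfo_json):
    """Pull a run status out of jobinfo-style JSON output.
    Falls back to 'unknown' when no status field is present."""
    info = json.loads(jobinfo_json)
    return info.get("status") or info.get("state") or "unknown"
```

With the enhancement in place, polling this value while a job runs would recover the interim-status behavior of the classic dsjob jobinfo option.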

Mongo "processor" stage (for delete / insert etc) so we can guarantee order of operations

A mongo 'processing' stage that can do inserts / deletes that can be used in the middle of processing rather than only upon destination, effectively to guarantee the order in which this occurs. I'd like to handle an update scenario by first deleti...
9 months ago in StreamSets 0 Under review

Integration of DataStage parameters and parameter sets in Pipelines

Parameters and parameter sets in Pipelines are absolutely necessary for designing dynamic data integrations with DataStage.
almost 3 years ago in DataStage 1 Functionality already exists

Oracle Pluggable database support during installation

Can't install InfoSphere Information Server without direct port configuration or Oracle legacy adjustments (support of SID format) on a container database. The internal XMETA database can't be configured on a pluggable database. Pluggable databas...
almost 10 years ago in DataStage 0 Not under consideration

HA functionality for Datastage Cartridge

Background: In general, the best practice to ensure high availability for an OpenShift cluster is to spread the masters and workers across three AZs in a cloud. But for DataStage, it only makes sense if the platform allows multiple conductors spread...
over 1 year ago in DataStage 0 Under review