IBM Data and AI Ideas Portal for Customers


This portal is to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



DataStage

Showing 499 of 15523

Allow APT_EXECUTION_MODE = "One process" for production environments

In our tests we found that for small PX jobs execution is sped up by almost 30% if we set APT_EXECUTION_MODE = One process. Since we have 10000 of these small job invocations, in aggregate this saves a lot of time and resources. But according to IBM ...
10 months ago in DataStage 1 Under review
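For context, a minimal sketch (not IBM's documented procedure) of how such a small job might be launched in one-process mode from Python via the dsjob CLI, assuming $APT_EXECUTION_MODE has been exposed as a job parameter; project and job names are placeholders:

    # Hypothetical sketch: running a small PX job with APT_EXECUTION_MODE set to
    # one-process mode through the dsjob command line. Assumes the environment
    # variable has been added to the job as a parameter ($APT_EXECUTION_MODE);
    # project and job names are placeholders.
    import subprocess

    def run_small_px_job(project: str, job: str) -> int:
        cmd = [
            "dsjob", "-run",
            "-mode", "NORMAL",
            "-param", "$APT_EXECUTION_MODE=ONE_PROCESS",  # assumption: exposed as a job parameter
            "-wait",
            project, job,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)
        return result.returncode

    if __name__ == "__main__":
        run_small_px_job("my_project", "small_px_job")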

Please add Hierarchical Stage Assembly Editor in DataStage Designer without Adobe Flash

Currently the Hierarchical Stage Assembly Editor in DataStage Designer requires Adobe Flash. The new Data Flow Designer by no means matches the capabilities and richness of DataStage Designer. Please provide the capability of editing the Assembly ...
about 4 years ago in DataStage 5 Not under consideration

DataStage (IIS) Kafka connector to connect to an RBAC-enabled Kafka cluster

BOA has a huge Kafka cluster and, per GIS (Global Information Security), all new platforms need to have RBAC enabled. That being the case, we want to connect DataStage (IIS) via the Kafka connector to the Kafka cluster (with R...
10 months ago in DataStage 0 Under review
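To illustrate what an RBAC-enabled cluster typically implies for a client, here is a hedged Python sketch of the SASL-over-TLS connection properties such a cluster usually requires, using the open-source confluent-kafka package; the broker address, credentials, topic, and SASL mechanism are placeholder assumptions, not the DataStage connector's actual configuration:

    # Hypothetical sketch of the security properties an RBAC-enabled Kafka
    # cluster typically requires (SASL authentication over TLS). Broker,
    # credentials, topic, and SASL mechanism are placeholders and depend on
    # how the cluster is secured.
    from confluent_kafka import Consumer

    conf = {
        "bootstrap.servers": "broker.example.com:9093",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "SCRAM-SHA-512",
        "sasl.username": "datastage_svc",
        "sasl.password": "********",
        "group.id": "datastage-consumer",
        "auto.offset.reset": "earliest",
    }

    consumer = Consumer(conf)
    consumer.subscribe(["orders"])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.value())
    consumer.close()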

Support for DataStage in Git-integrated Projects

As of 4.8, DataStage is not supported in Default Git Integrated projects. A large Federal Customer requests the ability to leverage DataStage in Git-integrated projects. This is critical for the customer to adopt a consistent CI/CD framework acros...
7 months ago in DataStage 1 Submitted

.NET Framework 4.8 Support

.NET Framework 4.8 is often pre-installed on modern PC servers, but Information Server 11.7.1 does not support it. .NET Framework 4.8 should be supported as soon as possible.
about 1 year ago in DataStage 0 Not under consideration

Allow "Asset Broswer" node in datastage on cp4d to have access to connections that is inside the catalog

It is useful to search for connections using the Asset Browser node within the DataStage ETL flow designer. However, the Asset Browser can currently ONLY find connections that are added as assets within the project the flow belongs to. Th...
about 1 year ago in DataStage 0 Not under consideration

Duplicate flow to put new flow in same folder as original flow

Currently, when duplicating a flow in CP4D, the new flow is put in the root folder rather than in the same folder as the original flow. It would be better if this worked like the legacy behavior and put the new flow in the same folder.
7 months ago in DataStage 0 Submitted

We need a migration path for the Teradata -> Snowflake connector via RJUT (Rapid Job Update Tool)

We are looking for a solution to change the Teradata connectors used in our DataStage jobs to the Snowflake connector, as part of planning the lift and shift of KVH1 towards the cloud. Can you please investigate from your side if this ca...
8 months ago in DataStage 0 Under review

Native Connector for accessing data with Delta on S3 in DataStage Classic

We are building an Open Lakehouse with the data stored in Delta format on S3 storage. Watsonx.data uses the same Open Lakehouse approach, but uses Iceberg instead of Delta. DataStage as an ETL tool should support a wide range of different technol...
about 1 year ago in DataStage 1
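As an illustration of the access pattern such a connector would provide, a minimal sketch reading a Delta table directly from S3 with the open-source deltalake (delta-rs) Python package; the bucket path, region, and credentials are placeholder assumptions:

    # Hypothetical sketch: reading a Delta table from S3 with the open-source
    # deltalake (delta-rs) package -- the kind of access a native DataStage
    # connector would provide. Path, region, and credentials are placeholders.
    from deltalake import DeltaTable

    storage_options = {
        "AWS_ACCESS_KEY_ID": "<access-key>",          # placeholder
        "AWS_SECRET_ACCESS_KEY": "<secret-key>",      # placeholder
        "AWS_REGION": "eu-central-1",
    }

    table = DeltaTable("s3://lakehouse-bucket/sales/orders", storage_options=storage_options)
    df = table.to_pandas()   # materialize the current snapshot
    print(df.head())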

Currently the developer needs pod-level access to write Python code in DataStage on Cloud Pak for Data. It would help if Python code could be added to DataStage jobs through a UI with developer-level access.

Developers are trying to call Python from DataStage. The Wrapped stage has the ability to call Python in parallel execution mode. However, as far as we can see, the script resides on the PX engine pod and hence is not practical for developers to use. Other opt...
8 months ago in DataStage 0 Submitted
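For reference, a minimal sketch of the kind of row-by-row script a Wrapped stage can call (reading delimited rows on stdin and writing them to stdout); the delimiter and column handling are assumptions for illustration, and today such a script would have to reside on the PX engine pod:

    #!/usr/bin/env python3
    # Hypothetical sketch of a script a DataStage Wrapped stage could call:
    # it reads delimited rows on stdin, applies a trivial transformation, and
    # writes rows to stdout. Pipe-delimited input is an assumption.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("|")   # assumed pipe-delimited input
        fields[-1] = fields[-1].upper()         # example transformation
        sys.stdout.write("|".join(fields) + "\n")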