IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



DataStage


Require SAP Business Warehouse functionality

This was/is a feature in 11.7 but is not yet in DataStage Cartridge/NextGen. Needed for customer work.
4 months ago in DataStage 0 Future consideration

Better control when removing parameters

The TS013848268 fix allows removing multiple parameters. It is easy to make a mistake while selecting multiple variables across multiple pages. Request to have at least a confirmation list of the parameters that are going to be removed before comm...
7 months ago in DataStage 0 Under review

DataStage (IIS) kafka connector to connect to RBAC enabled Kafka cluster

BOA has a huge Kafka cluster and, per GIS (Global Information Security), all new platforms need to have RBAC enabled. This being the case, we would want DataStage (IIS) to use the Kafka connector to connect to the Kafka cluster (with R...
11 months ago in DataStage 0 Under review

Support for DataStage in Git-integrated Projects

As of 4.8, DataStage is not supported in default Git-integrated projects. A large Federal customer requests the ability to leverage DataStage in Git-integrated projects. This is critical for the customer to adopt a consistent CI/CD framework acros...
7 months ago in DataStage 1 Submitted

.NET Framework 4.8 Support

.NET Framework 4.8 is often pre-installed on modern PC servers, but Information Server 11.7.1 does not support it. .NET Framework 4.8 should be supported as soon as possible.
about 1 year ago in DataStage 0 Not under consideration

Allow the "Asset Browser" node in DataStage on CP4D to access connections that are inside the catalog

It is useful to search for connections using the Asset Browser node within the DataStage ETL flow designer. However, the Asset Browser currently can ONLY find the connections that are added as assets within the project that the flow belongs to. Th...
over 1 year ago in DataStage 0 Not under consideration

Native Connector for accessing data with Delta on S3 in DataStage Classic

We are building an Open Lakehouse with the data stored in Delta format on S3 storage. Watsonx.data uses the same Open Lakehouse approach but uses Iceberg instead of Delta. DataStage, as an ETL tool, should support a wide range of different technol...
over 1 year ago in DataStage 1

Duplicate flow to put new flow in same folder as original flow

Currently, when duplicating a flow in CP4D, the new flow is put in the root folder rather than in the same folder as the original flow. It would be better if this worked like legacy DataStage and put the new flow in the same folder.
8 months ago in DataStage 0 Submitted

We need a migration path for the Teradata -> Snowflake connector via RJUT (Rapid Job Update Tool)

We are looking for a solution to change the Teradata connectors used in our DataStage jobs to the Snowflake connector as part of planning the lift and shift of KVH1 to the cloud. Can you please investigate from your side if this ca...
8 months ago in DataStage 0 Under review

Currently the developer needs pod-level access to write Python code on Cloud Pak for Data DataStage. It would help if Python code could be added to DataStage jobs through a UI with developer-level access.

Developers are trying to call Python from DataStage. The Wrapped stage has the ability to call Python in parallel execution mode. However, the script resides on the PX engine pod and hence is not practical for developers to use. Other opt...
9 months ago in DataStage 0 Submitted