IBM Data and AI Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, to create and manage groups of ideas, or to create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



DataStage

Showing 10 of 14745

Option to write informational or warning messages to flow log

Several customers currently use this option in DataStage 11.7 to write calculated values like UserStatus to the job log. In the past, BASIC functions like DSLogWarn and DSLogInfo were available (https://www.ibm.com/docs/en/iis/11.7?topic=programmin...
2 days ago in Cloud Pak for Data / DataStage 0 Submitted

Operator role at the project level

In traditional DataStage (Information Server), it's possible to assign users and/or groups the Operator (or SuperOperator) role, where the user cannot modify the project but can run jobs. A similar role doesn't seem to be possible in DataStage Next...
3 months ago in Cloud Pak for Data / DataStage 0 Submitted

Read SQL from file in Teradata Connector Stage

The Teradata Connector in CP4D does not have an option to read SQL from a file. Currently, tenants use other stages, like Execution command and user variable in the sequencer, to read the SQL from the file and pass it dynamically; we need an op...
29 days ago in Cloud Pak for Data / DataStage 0 Submitted

Job Details view window limitations

There are actually two problems with the Job Details view: 1. There is no easy navigation from a given design view of a flow or pipeline to the corresponding Job Details window. 2. There is no way back from the Job Details window into t...
about 1 month ago in Cloud Pak for Data / DataStage 0 Submitted

Support for DataStage in Git-integrated Projects

As of 4.8, DataStage is not supported in Default Git Integrated projects. A large Federal Customer requests the ability to leverage DataStage in Git-integrated projects. This is critical for the customer to adopt a consistent CI/CD framework acros...
about 1 month ago in Cloud Pak for Data / DataStage 0 Submitted

Need CI/CD and Version Control Tool (Git) Integration (MettleCI) option in CP4D.

We were given a demonstration of the MettleCI solution for Git integration, automated testing, and CI/CD for DataStage 11.7 and were told it would also be available for IBM Cloud Pak for Data in future releases of CP4D. We have the following questions on this topic: 1) when...
about 1 month ago in Cloud Pak for Data / DataStage 0 Submitted

Duplicate flow to put new flow in same folder as original flow

Currently, when duplicating a flow in CP4D, the new flow is put in the root folder rather than in the same folder as the original flow. It would be better if this worked like the legacy behavior and placed the new flow in the same folder.
about 2 months ago in Cloud Pak for Data / DataStage 0 Submitted

Allow using “GCS Staging” when reading from Google BigQuery using read-method "General"

When using the Google BigQuery connector with the read method "Select Statement", it is possible to configure a bucket (Google Cloud Storage, GCS) for better performance. With the read method "General", such a configuration is not available. It wou...
2 months ago in Cloud Pak for Data / DataStage 0 Under review

Run Metrics Feature displays incorrect Results on CP4D

We should be able to see row count information when writing to a target Data Set stage in DataStage.
2 months ago in Cloud Pak for Data / DataStage 0 Submitted

Currently the developer needs pod-level access to write Python code on Cloud Pak for Data DataStage. It would help if Python code could be added to DataStage jobs through a UI with developer-level access.

Developers are trying to call Python from DataStage. The Wrapped stage has the ability to call Python in parallel execution mode. However, the script resides on the px engine pod and is therefore not practical for developers to use. Other opt...
2 months ago in Cloud Pak for Data / DataStage 0 Submitted
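The Wrapped stage mentioned in the idea above executes a command-line program that consumes rows on stdin and emits rows on stdout. A minimal sketch of what such a script might look like in Python is shown below; the pipe delimiter, the single-column uppercase transform, and the column layout are all hypothetical assumptions for illustration, not a description of any specific customer's job.

```python
#!/usr/bin/env python3
"""Illustrative stdin/stdout filter in the shape a Wrapped stage expects.

Assumptions (hypothetical): rows are pipe-delimited text, one record per
line, and the job wants the first field uppercased with the rest passed
through unchanged.
"""
import sys


def transform(line: str) -> str:
    # Split the incoming row on the assumed delimiter, uppercase the
    # first field, and rejoin the row.
    fields = line.rstrip("\n").split("|")
    fields[0] = fields[0].upper()
    return "|".join(fields)


def main() -> None:
    # Read every row from stdin and write the transformed row to stdout,
    # which is how a Wrapped stage passes data through the command.
    for line in sys.stdin:
        sys.stdout.write(transform(line) + "\n")


if __name__ == "__main__":
    main()
```

Because the executable has to be present on the px engine pod's filesystem, this pattern is exactly what the idea identifies as impractical without pod-level access.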