IBM Data and AI Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



DataStage

Showing 492 of 15426

Integration of DataStage parameters and parameter sets in Pipelines

Parameters and parameter sets in Pipelines are absolutely necessary for designing dynamic data integrations with DataStage.
over 1 year ago in DataStage 1 Functionality already exists

[SAP Pack] ABAP Extract Stage's non-default ftp port setup support request

Hello, I want to use FTP, the data transfer method of the ABAP Extract Stage. There is no ability to specify the FTP port. Use of the default FTP port 21/tcp shows up as a vulnerability in security audits, so a different port must be used. The IBM case...
over 1 year ago in DataStage 0 Not under consideration

DataStage - SurrogateKey file stage - Database sequence for Snowflake database

In a CP4D DataStage dataflow, the Surrogate Key File Stage supports a database sequence for DB2 and Oracle databases only. As we migrated from Oracle to a Snowflake database, we need to read the next value using a Snowflake database sequence, but we do NOT see an o...
5 months ago in DataStage 0 Planned for future release
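For context on the request above: Snowflake reads a sequence with `<sequence>.NEXTVAL`, which is what the stage would need to issue. A minimal Python sketch of building that statement — the schema and sequence names are illustrative assumptions, and actually running it would still require a Snowflake connection:

```python
def nextval_query(sequence: str, schema: str = "PUBLIC") -> str:
    """Build the Snowflake SQL that returns the next value of a sequence.

    Snowflake sequences are read with <seq>.NEXTVAL; the schema and
    sequence names here are placeholders, not part of the original idea.
    """
    return f"SELECT {schema}.{sequence}.NEXTVAL"

# e.g. nextval_query("SK_ORDERS") -> "SELECT PUBLIC.SK_ORDERS.NEXTVAL"
```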

Before/After SQL mentioned in oracle connector is not displaying in the job log

In the Oracle connector, if we have specified a SQL query in Before/After SQL, that query is not displayed in the job log; the log only shows that the query has been executed. It would be helpful to see the actual query in the job log. ...
11 months ago in DataStage 0 Submitted

Request to implement Auto Map functionality on Output in Hierarchical Stage

I'm using the Hierarchical Data stage to pull data via a REST API, feed that through a JSON_Parser stage, and finally map it to the output columns on the Output tab. I know in the DataStage thick client there was an option to use the Auto M...
over 3 years ago in DataStage 4

Databricks connector

Adding a Databricks connector to DataStage could bring several benefits. Here are a few reasons why it could be a good idea: Enhanced Data Integration: A Databricks connector would allow seamless integration between DataStage and Databricks, enabl...
5 months ago in DataStage 0 Functionality already exists

Excel file format should be supported as source data in the ETL project.

Excel file format should be supported as source data in the ETL project.
about 5 years ago in DataStage 2 Functionality already exists

Read SQL from file in Teradata Connector Stage

The Teradata Connector in CP4D does not have an option to read SQL from a file. Currently tenants use other stages, such as Execute Command and user-variable stages in the sequencer, to dynamically pass the SQL by reading it from the file; we need an op...
6 months ago in DataStage 0 Under review
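The workaround described above amounts to loading the SQL text at run time and substituting job parameters before it reaches the connector. A minimal Python sketch of that idea, assuming `$`-style placeholders whose names are purely illustrative:

```python
from pathlib import Path
from string import Template

def load_sql(path: str, params: dict[str, str]) -> str:
    """Read a SQL statement from a file and substitute $-style placeholders.

    Mimics what the Execute Command / user-variable workaround does today;
    the file layout and parameter names are assumptions, not connector API.
    """
    return Template(Path(path).read_text()).substitute(params)
```

A file containing `SELECT * FROM $schema.orders` would, with `{"schema": "sales"}`, yield `SELECT * FROM sales.orders`.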

create a command allowing to update parameters inside a datastage flow or pipeline in a project

We are requesting an option on the cpdctl dsjob command (suggestion) that would allow changing the value of a parameter in a DataStage flow or pipeline (not to be confused with a Parameter, Parameter set, or environment variable). After migration f...
6 months ago in DataStage 0 Under review

Please add Red Hat 9 as a supported OS for IBM DataStage 11.7 or the next available version of DataStage

We are currently on DataStage 11.7.1 on Red Hat Linux 7.9, which will be out of support soon. We need to migrate our DataStage installation to Red Hat version 9, so we need DataStage 11.7 to support Red Hat 9.
over 1 year ago in DataStage 2 Planned for future release