IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of the ideas you have submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



DataStage

Showing 501 of 15535 ideas

Native Connector for accessing data with Delta on S3 in DataStage Classic

We are building an open lakehouse with data stored in Delta format on S3 storage. watsonx.data follows the same open-lakehouse approach but uses Iceberg instead of Delta. DataStage, as an ETL tool, should support a wide range of different technol...
over 1 year ago in DataStage 1
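
As a rough illustration of what such a native connector would do internally, here is a minimal sketch using the open-source deltalake (delta-rs) Python bindings; the bucket path and credentials are placeholders, and DataStage Classic offers no such stage today.

    # Minimal sketch: reading a Delta table from S3 with the open-source
    # deltalake (delta-rs) Python bindings. Bucket, table path, and
    # credentials are placeholders for illustration only.
    from deltalake import DeltaTable

    table = DeltaTable(
        "s3://my-lakehouse-bucket/sales/orders",  # hypothetical table path
        storage_options={
            "AWS_ACCESS_KEY_ID": "YOUR_ACCESS_KEY",
            "AWS_SECRET_ACCESS_KEY": "YOUR_SECRET_KEY",
            "AWS_REGION": "eu-central-1",
        },
    )

    # Materialize the current table snapshot as a pandas DataFrame.
    df = table.to_pandas()
    print(df.head())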

Currently the developer needs pod-level access to write Python code on Cloud Pak for Data DataStage. It would help if Python code could be added to DataStage jobs through a UI with developer-level access.

Developers are trying to call Python from DataStage. The Wrapped stage can call Python in parallel execution mode. However, the script resides on the PX engine pod and is therefore not practical for developers to use. Other opt...
8 months ago in DataStage 0 Submitted
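
For context, a Wrapped stage runs an ordinary command that filters records from stdin to stdout, so the kind of Python a developer would want to attach looks roughly like this sketch; the delimiter and the transform are hypothetical.

    #!/usr/bin/env python3
    # Sketch of a filter script a Wrapped stage could invoke: it reads
    # delimited records on stdin and writes transformed records to stdout.
    # The pipe delimiter and the uppercase transform are hypothetical.
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("|")
        if len(fields) >= 2:
            fields[1] = fields[1].upper()  # example transform on column 2
        print("|".join(fields))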

MDM Connector for CP4D-DataStage

An MDM connector would use API calls integrated directly with the MDM server, resulting in better performance when processing large data volumes. The alternative for an MDM connector is a web service call, which involves XML parsing or using ...
about 1 year ago in DataStage 0 Submitted
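
To make the performance argument concrete, here is a hedged sketch contrasting the two call styles: a direct JSON API request versus a web-service response that must be XML-parsed. The endpoint paths and field names are invented for illustration.

    # Hedged sketch contrasting the two integration styles the idea
    # describes. Endpoint URLs and field names are invented.
    import xml.etree.ElementTree as ET

    import requests

    BASE = "https://mdm.example.com"  # hypothetical MDM server

    # Style 1: direct API call returning JSON -- no XML parsing step.
    person = requests.get(f"{BASE}/api/person/42", timeout=30).json()

    # Style 2: web-service call returning XML, which must be parsed
    # record by record -- the overhead the idea wants to avoid at volume.
    resp = requests.get(f"{BASE}/ws/person?id=42", timeout=30)
    root = ET.fromstring(resp.text)
    name = root.findtext(".//name")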

Improve the performance of Teradata Connector when doing BULK read

A Teradata bulk export of ~7 million rows from DataStage using the Teradata Connector takes over 30 minutes, compared with a TPT export (also executed on the DS engine), which took only around 3-4 minutes. The timing differences are significant, ...
5 months ago in DataStage 1 Submitted
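
For comparison outside the connector, a bulk read through the teradatasql driver might look like the sketch below; the FastExport escape function and its syntax are my assumption about that driver and should be verified against its documentation, and host, credentials, and table name are placeholders.

    # Hedged sketch of a bulk SELECT via the teradatasql driver. The
    # {fn teradata_try_fastexport} escape is an assumption about the
    # driver's FastExport syntax -- verify it against the driver docs.
    # Host, credentials, and table name are placeholders.
    import time

    import teradatasql

    con = teradatasql.connect(host="td.example.com",
                              user="dbuser", password="secret")
    cur = con.cursor()
    start = time.monotonic()
    cur.execute("{fn teradata_try_fastexport}SELECT * FROM sales.orders")
    rows = cur.fetchall()
    print(f"{len(rows)} rows in {time.monotonic() - start:.1f}s")
    cur.close()
    con.close()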

Add business terms on DataStage flow

On IGC (11.x) to WKC (4.x) migration projects, clients ask to add business terms to DataStage NextGen flows, but this feature is not currently available in the DataStage NextGen flow on the new interface.
over 2 years ago in DataStage 0 Future consideration

Functional difference between DataStage V11.7 and V11.5

The client has just migrated their DataStage jobs to V11.7, but they have an issue with variable names in the migrated rules. Although the lab managed to reproduce the issue, they responded two days ago stating that they do not have a workaround fo...
5 months ago in DataStage 0 Submitted

Require DataStage job interim run status in cpdctl dsjob jobinfo command output, as is available in the classic DataStage dsjob jobinfo option

Require DataStage job interim run status in the cpdctl dsjob jobinfo command output, as is available in the classic DataStage dsjob jobinfo option. This option is used to get the DataStage job interim status, which is useful when we are trying to captur...
almost 2 years ago in DataStage 1 Functionality already exists
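
For readers unfamiliar with the command, a polling wrapper around it might look like this sketch; the exact cpdctl flags vary by version, so treat --project, --name, and --output json as assumptions to check against cpdctl dsjob jobinfo --help.

    # Hedged sketch: shell out to cpdctl to read a job's run status.
    # Flag names (--project, --name, --output json) are assumptions
    # that vary by cpdctl version -- check `cpdctl dsjob jobinfo --help`.
    import json
    import subprocess

    def job_status(project: str, job: str) -> dict:
        out = subprocess.run(
            ["cpdctl", "dsjob", "jobinfo",
             "--project", project, "--name", job, "--output", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    print(job_status("my-project", "my-job"))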

Integration of DataStage parameters and parameter sets in Pipelines

Parameters and parameter sets in Pipelines are absolutely necessary for designing dynamic data integrations with DataStage.
almost 2 years ago in DataStage 1 Functionality already exists

[SAP Pack] ABAP Extract Stage non-default FTP port setup support request

Hello, I want to use FTP, the data transfer method of the ABAP Extract Stage. There is no ability to specify the FTP port. Use of the default FTP port 21/tcp shows up as a vulnerability in security audits, so a different port must be used. The IBM case...
over 1 year ago in DataStage 0 Not under consideration
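
As background on why the port matters, Python's standard ftplib shows how a client targets a non-default port; this is exactly the setting the stage lacks. Host, port, and credentials below are placeholders.

    # Background sketch with Python's standard ftplib: connecting to an
    # FTP server on a non-default port, the knob the ABAP Extract Stage
    # does not expose. Host, port, and credentials are placeholders.
    from ftplib import FTP

    ftp = FTP()
    ftp.connect("sap-host.example.com", 2121)  # non-default port, not 21/tcp
    ftp.login("dsuser", "secret")
    print(ftp.nlst())  # list files in the landing directory
    ftp.quit()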

Request to implement Auto Map functionality on Output in Hierarchical Stage

I'm using the Hierarchical Data stage to pull data via a REST API and then feed that through a JSON_Parser stage and finally map it to the output columns on the Output tab. I know in the DataStage thick client there was an option to use the Auto M...
over 3 years ago in DataStage 4
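
To illustrate the mapping chore an Auto Map option would remove, here is a hedged sketch of pulling JSON over REST and mapping parsed fields onto flat output columns by matching names; the endpoint URL and field names are invented.

    # Hedged sketch of the manual work Auto Map would automate: fetch
    # JSON over REST, parse it, and map fields onto output columns by
    # name. The endpoint URL and field names are invented.
    import requests

    OUTPUT_COLUMNS = ["id", "name", "status"]  # hypothetical output schema

    records = requests.get("https://api.example.com/items", timeout=30).json()

    for rec in records:
        # Name-for-name mapping -- what an Auto Map option would generate.
        row = {col: rec.get(col) for col in OUTPUT_COLUMNS}
        print(row)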