IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Ideas

Showing 567 of 18244

Integrate GitHub source control with Information Server DataStage and BigIntegrate

Need GitHub integrated with DataStage/BigIntegrate for version control and deployment
about 8 years ago in DataStage · 1 · Functionality already exists

Open DataStage Flows directly from Watson Pipeline Orchestration Flow

Currently, DataStage developers cannot open a parallel job directly from a sequence job. In this case, providing a way to go directly to the DataStage flow from the orchestration pipeline that is invoking the job would reduce the number of hops/cl...
about 4 years ago in DataStage · 0 · Future consideration

Duplicate flow to put new flow in same folder as original flow

Currently, when duplicating a flow in CP4D, the new flow is placed in the root folder rather than in the same folder as the original flow. It would be better if this worked like the legacy behavior and put the new flow in the same folder.
over 1 year ago in DataStage · 0 · Submitted

Compilation of pipelines from GUI

Requesting an option to compile pipelines from the UI. We can successfully compile DataStage flows from the UI, but we are looking for the same capability for pipelines. As a workaround, we compile pipelines from the command line using cpdctl commands (see the sketch after this entry).
10 months ago in DataStage · 0
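For context, the flow-level counterpart of this command-line workaround can be scripted with the cpdctl dsjob plug-in. Below is a minimal sketch in Python that wraps such a cpdctl call; the project and flow names are placeholders, cpdctl is assumed to be installed and configured against the CP4D instance, and the exact pipeline-compilation invocation referenced in this idea is not shown, since it may differ by cpdctl version.

  # Minimal sketch: compile a single DataStage flow by shelling out to the
  # cpdctl "dsjob" plug-in. Assumes cpdctl is installed and already configured.
  # "MyProject" and "my_flow" are placeholder names, not values from this idea.
  import subprocess

  def compile_flow(project: str, flow: str) -> None:
      """Invoke cpdctl to compile one DataStage flow and surface its output."""
      result = subprocess.run(
          ["cpdctl", "dsjob", "compile", "--project", project, "--name", flow],
          capture_output=True,
          text=True,
      )
      print(result.stdout)
      if result.returncode != 0:
          raise RuntimeError(f"cpdctl compile failed: {result.stderr}")

  if __name__ == "__main__":
      compile_flow("MyProject", "my_flow")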

We need a migration path for the Teradata -> Snowflake connector via RJUT (Rapid Job Update Tool)

We are looking for a solution to change the Teradata connectors used in our DataStage jobs to Snowflake connectors as part of planning the lift and shift of KVH1 to the cloud. Can you please investigate from your side if this ca...
over 1 year ago in DataStage · 0 · Under review

Read / Write images using Amazon S3

To load images into, and read images from, an S3-compatible object store using a DataStage 11.7 on-premises environment
10 months ago in DataStage · 1 · Submitted

Iceberg table format and connector for DataStage InfoSphere Information Server: Generic - 11.7.1.4

Hello Team, we have DataStage 11.7.1.4 implemented on premises on a bare metal server. Our database is DB2 DPF on IIAS 2C2S. The customer is planning to migrate the DB2 RDBMS data to the Iceberg table format via DataStage to a private cloud. If we can ha...
10 months ago in DataStage · 1 · Not under consideration

Request to easily integrate custom scripts into the managed, SaaS DataStage Runtime

Our company is using Cloud Pak for Data as a managed service, which means we don't have direct access to the OpenShift platform or any of the pods or servers that run the Cloud Pak services, like the DataStage server. Occasionally with on-prem Dat...
about 5 years ago in DataStage · 2 · Not under consideration

Currently, developers need pod-level access to write Python code on Cloud Pak for Data DataStage. It would help if Python code could be added to DataStage jobs through a UI with developer-level access.

Developers are trying to call Python from DataStage. The Wrapped stage has the ability to call Python in parallel execution mode. However, the script resides on the PX engine pod, which makes it impractical for developers to use. Other opt...
over 1 year ago in DataStage · 0 · Submitted

Enhance DataStage Flow Designer Integration with Git

We have started using Git for maintaining our DataStage codebase. We are able to publish/load jobs to/from the Git repo successfully. However, if we want to publish multiple jobs to the Git repository at once, the current version of DataStage Flow Desi...
about 5 years ago in DataStage · 0 · Not under consideration