IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing existing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



Ideas (showing 676)

Issue with Kafka connector within DataStage.

Current State: When a DataStage job fails while reading from a Kafka connector, the records read from the queue before the runtime failure are marked as "read" and removed from the queue, despite the record not being passed out of the connector an...
almost 3 years ago in DataStage · 0 · Is a defect
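
For context, the behavior described matches Kafka's consumer auto-commit semantics: offsets can be committed as soon as records are fetched, so a crash loses rows that were read but never written downstream. A minimal sketch of the usual pattern outside DataStage, using the confluent-kafka Python client; the broker address, topic, group id, and the process_downstream step are all placeholders:

```python
# Hedged sketch (not DataStage connector internals): disable auto-commit and
# commit offsets only after downstream processing succeeds, so a failed run
# leaves unprocessed records on the topic.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "etl-job",                   # placeholder consumer group
    "enable.auto.commit": False,             # key setting: no eager commits
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["source-topic"])         # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        process_downstream(msg.value())      # hypothetical downstream step
        consumer.commit(message=msg, asynchronous=False)  # commit only on success
finally:
    consumer.close()
```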

Connector to ActiveMQ (AMQ)

We would like to have a connector for JMS and/or ActiveMQ for DataStage. AMQ can support multiple APIs and protocols. Is it possible to build a custom connector/logic to connect to AMQ from DataStage? https://access.redhat.com/documentation/en-us/red...
almost 3 years ago in DataStage · 0 · Not under consideration
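
Pending a native connector, one hedged workaround is ActiveMQ's STOMP transport, which a scripting step can reach with the stomp.py client. A minimal sketch; the host, port, credentials, and queue name below are placeholders:

```python
import stomp

# Send one message to an ActiveMQ queue over STOMP.
# ActiveMQ's STOMP transport listens on port 61613 by default.
conn = stomp.Connection([("amq-host.example.com", 61613)])  # placeholder host
conn.connect("user", "password", wait=True)                 # placeholder credentials
conn.send(destination="/queue/datastage.in", body="payload from ETL job")
conn.disconnect()
```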

Utility for multiple job compile at server level

Hi Team, I'm looking for a server-level utility for compiling multiple jobs. I want to know whether DataStage has any utility for multiple job compile at the server level. I know that we have dscc.exe at the client level to perform compilation! I want th...
almost 9 years ago in DataStage · 0 · Functionality already exists

Record level Information Required in DSODB Database

We are pulling job-level information from DSODB and looking for record-level information, but we do not see the required information. Please let us know how to pull the following: 1. Average_Record_Length 2. Input_Volume_In_Bytes 3. Output_Volume_I...
about 9 years ago in DataStage · 0 · Not under consideration

Support for Avro Logical Data Types in File Connector Stage

Using DataStage 11.5.0.1 "File Connector Stage", we ask to implement the mapping from an input field defined as Decimal (17,2) to a target Avro field defined as "type":["bytes","null"],"logicalType":"decimal","precision":"17","scale":"2" into the ...
about 9 years ago in DataStage · 0 · Not under consideration
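
For reference, the requested mapping is well-defined in the Avro specification: a decimal logical type annotating bytes, with precision and scale given as integers (the snippet above quotes them as strings, which the spec does not allow). A minimal sketch of the target encoding using the fastavro library; the record and field names and the output file are placeholders:

```python
# Hedged sketch of the target mapping per the Avro spec (not File Connector
# internals): a DECIMAL(17,2) value serialized as nullable bytes with the
# decimal logical type.
from decimal import Decimal
from fastavro import writer, parse_schema

schema = parse_schema({
    "type": "record",
    "name": "Row",                 # placeholder record name
    "fields": [{
        "name": "amount",          # placeholder field name
        "type": ["null", {
            "type": "bytes",
            "logicalType": "decimal",
            "precision": 17,       # spec requires JSON integers here
            "scale": 2,
        }],
    }],
})

with open("rows.avro", "wb") as out:
    writer(out, schema, [{"amount": Decimal("12345.67")}])
```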

Support for Reject link on Kafka connector.

We have a requirement to publish messages on a Kafka topic from a database and only publish them once. The problem is that when using the Kafka connector as a target, it doesn't support a reject link. So how do I get notified if a message fails? Kafka Connector se...
about 3 years ago in DataStage · 0 · Not under consideration
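
Outside of a reject link, the Kafka clients themselves surface per-message failure through a delivery callback, and idempotent producing covers the publish-once requirement at the protocol level. A hedged sketch with the confluent-kafka Python client; the broker and topic are placeholders:

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "enable.idempotence": True,             # broker-side dedup on retries
    "acks": "all",
})

def delivery_report(err, msg):
    # Invoked from poll()/flush() once per message; err is set on failure,
    # which is the "notice if a message fails" the idea asks for.
    if err is not None:
        print(f"delivery failed, route to reject handling: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

producer.produce("target-topic", value=b"payload", callback=delivery_report)
producer.flush()
```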

Relocate the BQ connector connection information from bottom of log to top of log

Currently, when using the BQ connector in DataStage On-Prem, the connection information used to connect to the Google BQ environment, including the transaction ID, is placed in the log at the end of the job process. It would be more helpful to have...
about 3 years ago in DataStage · 0 · Not under consideration

InfoSphere DataStage connecting to Scality (AWS S3-compatible storage)

Hello Team, as per our client requirement, we need to establish connectivity to Scality (AWS S3-compatible storage) from DataStage 11.7.1 / 11.7.1.3 to place all our archival files into cloud storage. For that POC, we used the S3 connector stage &...
about 3 years ago in DataStage · 0 · Not under consideration
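
The connectivity pattern the POC depends on is the standard S3-compatible endpoint override. A minimal sketch with boto3; the endpoint URL, credentials, bucket, and file names are placeholders:

```python
import boto3

# Hedged sketch (outside the S3 connector stage): S3-compatible stores such as
# Scality are reached by overriding endpoint_url on a regular S3 client.
s3 = boto3.client(
    "s3",
    endpoint_url="https://scality.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("archive_2024.tar.gz", "archive-bucket", "archive_2024.tar.gz")
```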

Mapping of job directories from Data Flow Designer to Bitbucket folders

As per our requirement, DS job code should be checked into the "Datastage" folder available in the Bitbucket feature branch. However, what we see is that a new folder called "DS Projects" with the path "Training/Jobs" gets created instead.
about 6 years ago in DataStage · 1 · Not under consideration

Data Flow Designer should allow only changed or new jobs to be published to the Git repository

Currently, Data Flow Designer allows the same version of a job to be published to Git multiple times, which results in redundant versions in the Git repository. DFD should only allow a Git publish if a job has changed or is new.
about 6 years ago in DataStage · 0 · Not under consideration
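
The requested behavior amounts to a content check before publishing. A hedged sketch of that check, not DFD's actual publish code, comparing the exported job file against what is already committed on the branch; the paths and file names are placeholders:

```python
import subprocess

def changed_since_last_commit(path: str) -> bool:
    """Return True when the working-tree file differs from HEAD (or is new)."""
    try:
        committed = subprocess.run(
            ["git", "show", f"HEAD:{path}"],
            capture_output=True, check=True,
        ).stdout
    except subprocess.CalledProcessError:
        return True  # not in HEAD yet -> a brand-new job, always publish
    with open(path, "rb") as f:
        return f.read() != committed

job_file = "Datastage/Training/Jobs/my_job.isx"  # placeholder export path
if changed_since_last_commit(job_file):
    subprocess.run(["git", "add", job_file], check=True)
    subprocess.run(["git", "commit", "-m", "Publish changed DataStage job"], check=True)
```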