IBM Data and AI Ideas Portal for Customers


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea

Help IBM prioritize your ideas and requests

The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team will then decide whether they can begin working on it. If work can start during the next development cycle, they will put the idea on the priority list. Each team at IBM works on its own schedule: some ideas can be implemented right away, while others are placed on a later schedule.

Receive notification on the decision

Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas that cannot be implemented in a reasonable time.

Additional Information

To view our roadmaps: http://ibm.biz/Data-and-AI-Roadmaps

Reminder: This is not the place to submit defects or support needs; please use the normal support channels for those cases.

IBM Employees:

The correct URL for entering your ideas is: https://hybridcloudunit-internal.ideas.aha.io


ADD A NEW IDEA

FILTER BY CATEGORY

DataStage

Showing 336

IS 11.5 DRS Connector (with ODBC flavor) to support multiple table names

The DRS Connector (with the ODBC flavor) does not support multiple table names in the auto-generated SQL query. We want this functionality implemented.
about 6 years ago in DataStage 1 Not under consideration

Dynamic File Splitter Stage

I would like a file stage that would dynamically create flat files based on a field or a combination of fields within the input data. By dynamic I mean that if the field being used to determine the split today has 3 unique values then 3 separate f...
about 6 years ago in DataStage 0 Future consideration

Addition of the BASIC function DSGetJobQueue

Addition of the BASIC function DSGetJobQueue. There is no way to retrieve the WLM queue of a job at runtime using the DataStage BASIC interface. We can set it with DSSetQueue, but retrieval is impossible.
about 6 years ago in DataStage 0 Not under consideration
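The gap described in this request can be sketched in DataStage BASIC. Note that DSGetJobQueue is the requested, hypothetical function and does not exist; only DSSetQueue (mentioned in the request) and standard routines such as DSLogInfo are real, and the exact argument form of DSSetQueue shown here is an assumption.

```
* Today: a job can assign itself a WLM queue from a BASIC routine
* (call form assumed for illustration).
Call DSSetQueue("MediumPriorityJobs")

* Requested (hypothetical): read back the queue assigned to the
* current job at runtime. No such function exists today.
QueueName = DSGetJobQueue(DSJ.ME)
Call DSLogInfo("Running in WLM queue: " : QueueName, "QueueCheck")
```

A retrieval counterpart like this would let sequences and routines branch on the queue a job was actually dispatched to, which is what the request is asking for.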

Default WLM queue at the sequence level

Add the functionality to set a design-time default WLM queue name at the sequence level.
about 6 years ago in DataStage 0 Not under consideration

Data Direct ODBC/JDBC driver for redshift

DataStage 11.3.1.1 is trying to connect to an Amazon Redshift database. We need an ODBC or JDBC driver installed in the production environment. Our preference is an IBM-certified or supported ODBC driver for Linux. Please provide a driver either cer...
about 6 years ago in DataStage 0 Functionality already exists

iSeries support DB2 UDB API stage DataStage 11.5

We would like the DB2 UDB API stage to be certified for use against the iSeries database on DataStage 11.5, to allow for a gradual migration of the existing jobs from the DB2 UDB API stage to the DB2 Connector.
about 6 years ago in DataStage 0 Future consideration

Is there a way to automate mapping of credentials in IIS Console?

We use an LDAP user registry and map our Unix password every time it is updated. Is there a way to automate the mapping of credentials in the IIS Console? Most of our access issues come from being unable to map the new Unix password in the IIS Console.
about 6 years ago in DataStage 0 Not under consideration

Better and intuitive error messages

Better error messages in parallel jobs. Error messages often give the developer no useful values, e.g. TR_lag_record,0: Fatal Error: ~APT_PMPlayer: terminating Player (node1, parallel APT_TransformOperatorImplV0S69_CopyOfJOB_G57_LINDORFF_LAAN_DAG_TR...
about 6 years ago in DataStage 0 Future consideration

Allow ODBC connector write to HIVE table /or File connector read HIVE table

1) The ODBC connector for Hive allows reading Hive tables but does not work for writing them. It works, but only very, very slowly, which is not acceptable for Big Data operations (a few records in minutes). 1a. Additionally, if columns are defined in UPPER case...
over 6 years ago in DataStage 0 Functionality already exists

Create a patch, or test and certify, DataStage 8.7 to support Teradata 15.0 and Teradata 15.10

Our company uses DataStage v8.7, and we are planning to upgrade our Teradata to version 15.0 or 15.10. We want IBM to test and certify that DataStage v8.7 supports Teradata versions 15.0 and 15.10.
over 6 years ago in DataStage 0 Not under consideration