IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



All ideas

Showing 14759

DataStage: Trino connector

We need a connector for Trino in InfoSphere DataStage, as Trino is not officially supported by IBM. PMR No: TS016068863
about 2 months ago in DataStage 0 Under review

SYSPROC.DB2LK_GENERATE_DDL to become a supported and documented feature and to match db2look

We are using Db2 LUW V11.5M8FP0_SB35599 on Linux. It is a DPF database. We are using the following two Db2 tools to generate the DDL of the same database: 1) db2look 2) SYSPROC.DB2LK_GENERATE_DDL. The command used for db2look is the following: db2lo...
6 months ago in Db2 / Db2 on Cloud 1 Submitted
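
As context for this request, the two DDL-extraction paths are typically driven as in the sketch below. This is only an illustration: the db2look flags (-d, -e, -o) are documented, but the SYSPROC.DB2LK_GENERATE_DDL parameter list (a db2look-style options string plus an OUT operation token) and the SYSTOOLS.DB2LOOK_INFO result table are assumptions about the currently undocumented procedure and should be verified against your Db2 level.

    # Sketch: generate DDL for the same database two ways so the outputs can be diffed.
    # Assumes ibm_db is installed, db2look is on PATH, and that DB2LK_GENERATE_DDL
    # takes an options string plus an OUT token whose results land in
    # SYSTOOLS.DB2LOOK_INFO (undocumented/unsupported, so verify on your Db2 level).
    import subprocess
    import ibm_db

    # 1) Supported path: the db2look CLI (-e extracts DDL, -o writes it to a file).
    subprocess.run(["db2look", "-d", "MYDB", "-e", "-o", "ddl_db2look.sql"], check=True)

    # 2) Undocumented path: the stored procedure behind the same extraction logic.
    conn = ibm_db.connect(
        "DATABASE=MYDB;HOSTNAME=dbhost;PORT=50000;UID=db2inst1;PWD=***;", "", ""
    )
    options, op_token = "-e", 0  # OUT parameter is returned by callproc
    _, _, op_token = ibm_db.callproc(conn, "SYSPROC.DB2LK_GENERATE_DDL", (options, op_token))

    # Column names (OP_TOKEN, OP_SEQUENCE, SQL_STMT) are assumptions as well.
    stmt = ibm_db.exec_immediate(
        conn,
        f"SELECT SQL_STMT FROM SYSTOOLS.DB2LOOK_INFO "
        f"WHERE OP_TOKEN = {op_token} ORDER BY OP_SEQUENCE",
    )
    with open("ddl_generate_ddl.sql", "w") as out:
        row = ibm_db.fetch_tuple(stmt)
        while row:
            out.write(str(row[0]) + "\n")
            row = ibm_db.fetch_tuple(stmt)

Comparing the two output files is what motivates the request that the procedure be supported, documented, and kept in step with db2look.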

#emea #csm #mea Integrate IBM Knowledge Catalog Governance into DataStage Flows

Issue: Currently, in IBM Cloud Pak for Data, data protection rules defined within Watson Knowledge Catalog are not directly applied to DataStage flows, even when governed assets from the catalog are used. This creates confusion and a sense that no...
4 months ago in DataStage 3 Future consideration

Jobs API - have a synchronous call to launch DataStage jobs

One of our customers (CNP in France) is evaluating CP4D DataStage. They have DataStage 11.5 and are currently migrating to 11.7. They have a framework (BASIC code that orchestrates the DataStage jobs) that is the centerpiece of their DataStage proje...
9 months ago in DataStage 0 Submitted
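
Until a blocking call exists, an orchestrator can approximate synchronous behaviour by starting a run and polling its state. The sketch below is an assumption-heavy illustration: the CP4D Jobs API paths (/v2/jobs/{job_id}/runs), the job_run.state field, and the host, token, project, and job IDs are all placeholders to check against your CP4D release.

    # Sketch of a "synchronous" DataStage job launch: start a run, then poll until
    # it reaches a terminal state. Endpoint paths, query parameters, and response
    # field names are assumptions based on the CP4D Jobs API and may differ by release.
    import time
    import requests

    CPD_URL = "https://cpd.example.com"   # hypothetical cluster URL
    TOKEN = "..."                         # bearer token obtained beforehand
    PROJECT_ID = "..."                    # project containing the DataStage job
    JOB_ID = "..."                        # the job to launch

    headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

    def run_job_and_wait(poll_seconds: int = 10, timeout_seconds: int = 3600) -> str:
        """Start one job run and block until it completes, failing on timeout."""
        resp = requests.post(
            f"{CPD_URL}/v2/jobs/{JOB_ID}/runs",
            params={"project_id": PROJECT_ID},
            json={"job_run": {}},
            headers=headers,
            verify=False,  # self-signed CP4D certificates are common in test clusters
        )
        resp.raise_for_status()
        run_id = resp.json()["metadata"]["asset_id"]  # assumed response layout

        deadline = time.time() + timeout_seconds
        while time.time() < deadline:
            state = requests.get(
                f"{CPD_URL}/v2/jobs/{JOB_ID}/runs/{run_id}",
                params={"project_id": PROJECT_ID},
                headers=headers,
                verify=False,
            ).json()["entity"]["job_run"]["state"]  # assumed response layout
            if state in ("Completed", "Failed", "Canceled"):
                return state
            time.sleep(poll_seconds)
        raise TimeoutError("job run did not finish within the allotted time")

A native synchronous call in the Jobs API would remove the need for this client-side polling loop inside the customer's BASIC orchestration framework.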

High Contrast Option in PAW for Color Blind Users - Accessibility Enhancement

I have a user who is struggling to differentiate rows in a cube view and in reporting in PAW. The current white/grey coloring of every other row does not work for a user with color blindness when trying to differentiate rows in a report over m...
4 months ago in Planning Analytics 1 Future consideration

TLS 1.3 Support

Hi QMF Team! TLS 1.3 support is key to further increasing security. According to the IBM CISO, it will be a requirement as of January 2024. Reference: https://pages.github.ibm.com/ciso-psg/main/supplemental/acceptable_encryption.html#TLSVersion...
over 1 year ago in QMF for z/OS 1 Planned for future release
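
As a quick way to check whether a given endpoint already negotiates TLS 1.3, the Python standard library can pin the minimum protocol version; the hostname and port below are placeholders, not a real QMF endpoint.

    # Sketch: confirm that a server negotiates TLS 1.3 by refusing anything older.
    import socket
    import ssl

    HOST, PORT = "qmf.example.com", 443  # hypothetical endpoint to test

    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and below

    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            # version() reports the protocol actually negotiated, e.g. "TLSv1.3".
            print(tls.version())

The handshake fails with an SSL error if the server only offers TLS 1.2 or older, which is the gap this idea asks QMF for z/OS to close.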

Automate the cleanup of files in Cloud Pak for Data

In a customer case, after only 9 months the file-api-claim had over 800k files and a performance issue was suspected. We disabled SELinux relabeling and also ran the cron job cleanup script, but this was not enough. We had to run mo...
14 days ago in Cloud Pak for Data / Cloud Pak for Data Platform 0 Submitted

The file-api-claim can become a bottleneck due to all files converging to one PVC

In multiple customer cases, there have been issues due to a high file count. As a workaround, disabling SELinux relabeling provides some relief. Using file-api-claim could lead to a performance bottleneck in the future for very large envir...
14 days ago in Cloud Pak for Data / Cloud Pak for Data Platform 1 Submitted

Configuration setting to limit the size of temp files (uda*) created on the Cognos server under the temp directory

Why is it useful? In busy production environments, when users have the ability to create custom ad hoc reports based on Cognos packages, there is a possibility that a report with a missing filter condition could create huge temp files and fill up the disk ...
14 days ago in Cognos Analytics / Administration 0 Submitted

DataStage - Surrogate Key file stage - Database sequence for Snowflake database

In a CP4D DataStage flow, the Surrogate Key file stage supports database sequences for Db2 & Oracle only. As we migrated from Oracle to a Snowflake database, we need to read the next value using a Snowflake database sequence; but we do NOT see an o...
14 days ago in DataStage 0 Under review
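
For reference, the behaviour being asked for maps to Snowflake's sequence NEXTVAL. Below is a minimal sketch using the snowflake-connector-python package; the account, credentials, warehouse, and sequence name (MY_SCHEMA.SK_SEQ) are placeholders, not values from the original request.

    # Sketch: read the next surrogate-key value from a Snowflake database sequence,
    # which is what the Surrogate Key stage would need to emit for Snowflake targets.
    # Connection parameters and the sequence name are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="EDW",
        schema="MY_SCHEMA",
    )
    try:
        cur = conn.cursor()
        # The sequence itself is created once, e.g.:
        # CREATE SEQUENCE IF NOT EXISTS SK_SEQ START = 1 INCREMENT = 1;
        cur.execute("SELECT MY_SCHEMA.SK_SEQ.NEXTVAL")
        next_key = cur.fetchone()[0]
        print(f"next surrogate key: {next_key}")
    finally:
        conn.close()

Exposing an equivalent "database sequence" option for Snowflake in the Surrogate Key stage, alongside the existing Db2 and Oracle options, is the enhancement being requested.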