IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



All ideas

Showing 15,047 ideas

Automated Reference Data Synchronization with Datalake HIVE Tables

Problem Statement: Currently, synchronizing Reference Data with the Datalake, where it is accessible from Hive tables, is a manual process involving data export from IKC and manual transfer to the Azure cloud platform. This approach is inefficient and error-pro...
less than a minute ago in Cloud Pak for Data / Cloud Pak for Data System 0 Submitted

Automated Reference Data Synchronization with Azure Databricks Delta Tables

Problem Statement: Currently, synchronizing Reference Data with Azure Databricks Delta Tables is a manual process involving data export from IKC and manual transfer to the Azure cloud platform. This approach is inefficient and error-prone. Propose...
7 minutes ago in Cloud Pak for Data / Cloud Pak for Data System 0 Submitted
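
The two synchronization requests above describe the same manual flow: export Reference Data from IKC as CSV, then hand it over to Azure. As a rough sketch of what the requested automation would replace, the shell commands below script that flow; the curl export step and EXPORT_URL are hypothetical placeholders rather than a documented IKC API, while az storage blob upload is a standard Azure CLI command.

  # Hypothetical scripted version of the manual export/transfer flow above.
  # EXPORT_URL is a placeholder, not a documented IKC endpoint.
  curl -fsSL -H "Authorization: Bearer $IKC_TOKEN" "$EXPORT_URL" -o reference_data.csv
  # Upload the exported CSV to Azure storage for Hive or Databricks to pick up.
  az storage blob upload \
    --account-name mydatalake \
    --container-name reference-data \
    --name reference_data.csv \
    --file reference_data.csv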

Enhanced Data Export File Format Support for Reference Data

Problem Statement: Currently, Reference data export functionality is limited to CSV format, restricting its versatility and usability for various data destinations. Proposed Solution: Expand Reference data export capabilities to support a wider ra...
39 minutes ago in Cloud Pak for Data / Cloud Pak for Data System 0 Submitted
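
Until broader export formats are supported, one possible workaround is to convert the CSV after export. The sketch below assumes the DuckDB CLI is installed; the file names are illustrative.

  # Convert an exported CSV to Parquet with the DuckDB CLI (a workaround, not an RDM feature).
  duckdb -c "COPY (SELECT * FROM read_csv_auto('reference_data.csv')) TO 'reference_data.parquet' (FORMAT PARQUET);"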

Enhanced Data Import File Format Support for Reference Data

Problem Statement: Currently, Reference data import functionality is limited to CSV format, restricting its versatility and usability for various data sources. Proposed Solution: Expand RDM's data import capabilities to support a wider range of co...
about 1 hour ago in Cloud Pak for Data / Cloud Pak for Data System 0 Submitted

Automated RD Type Modification with Data Migration

Problem Statement: Currently, modifying RD Types in IBM-RDMs requires creating new RD Sets and RD Types, which can be inefficient and error-prone. Additionally, handling reference data during RD Type modifications involves manual steps, leading to...
about 1 hour ago in Cloud Pak for Data / Cloud Pak for Data System 0 Submitted

Automated MAPs/SETs Update with Reference Data Handling

Problem Statement: Currently, updating MAPs/SETs in IBM requires a manual process of exporting reference data, updating MAPs/SETs, publishing new MAPs/SETs, and loading data back. This process is time-consuming and error-prone. Proposed Solution: ...
about 1 hour ago in Cloud Pak for Data / Cloud Pak for Data System 0 Submitted

Add ability to delete orphaned jobs on Platform

Currently, the only way to delete orphaned jobs is to remove the directories using these commands:

  rm -rf /data/pipelines/<pipeline ID>
  rm -rf /data/runInfo/<pipeline ID>

Please add the ability to delete orphaned jobs for either the fo...
2 days ago in StreamSets 0 Submitted
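
Until a supported delete exists, the manual cleanup quoted in this request can at least be wrapped in a guarded script. This is a sketch of that manual workaround under the directory layout quoted above, not a StreamSets feature.

  #!/bin/sh
  # Manual cleanup of one orphaned job, per the directories quoted above.
  # Usage: ./cleanup_orphan.sh <pipeline ID>
  set -eu
  pid="$1"
  # Refuse empty or path-like IDs so rm -rf cannot escape the job directories.
  case "$pid" in ""|*/*|.|..) echo "invalid pipeline ID" >&2; exit 1 ;; esac
  rm -rf "/data/pipelines/$pid" "/data/runInfo/$pid"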

PostgreSQL CDC Client should support array fields

Array fields in the PostgreSQL CDC Client are unsupported. Depending on the configuration, they are returned as strings or not at all due to unsupported data types. When we add [] after any data type definition, e.g. INT -> INT[], it means it's now ...
2 days ago in StreamSets 0 Submitted
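
For readers unfamiliar with the [] notation mentioned above, the psql commands below create and populate an array column (INT -> INT[]); the table and column names are made up for illustration.

  # Illustrative only: a PostgreSQL table with an array column a CDC client would need to read.
  psql -c "CREATE TABLE orders (id INT PRIMARY KEY, item_ids INT[]);"
  psql -c "INSERT INTO orders VALUES (1, ARRAY[10, 20, 30]);"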

Enhancement request - IBM Cloud Pak for Data – Redshift Connector

Hi IBM team, kindly find an enhancement request for the IBM Cloud Pak for Data – Redshift Connector. Problem: Using the IBM Cloud Pak for Data – Redshift Connector (with WRITE MODE=LOAD), empty source tables are not getting loaded to the target AWS Redshift ...
2 days ago in Cloud Pak for Data / DataStage 0 Submitted

Secure BQ CDC service-account-key

Is there a secure way to retrieve the service-account-key.json file from a key vault? This is a requirement for our production environment setup. Enter BigQuery private key json file [/data/install/IIDR/GCP_BIGQUERY/gcp_connecti...
3 days ago in Replication: Change Data Capture 0 Submitted
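
One hedged interim approach, outside the product itself: fetch the key from the vault at startup and write it to the path the installer prompts for, rather than keeping it on disk permanently. The sketch below assumes Azure Key Vault and the Azure CLI; the vault name, secret name, and target path are placeholders.

  # Hypothetical interim workaround: materialize the key just before starting CDC.
  az keyvault secret show \
    --vault-name my-vault \
    --name gcp-bq-service-account-key \
    --query value -o tsv > /path/to/service-account-key.json
  chmod 600 /path/to/service-account-key.json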