IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



Connectivity

Showing 165 ideas

MFA authentication to Snowflake using Okta

In our Snowflake environment, password login is disabled and authentication is through Okta only for all users. When we try to import a table from Snowflake, it asks for a username and credentials. Since users don't have credentials, the...
12 days ago in Connectivity 0 Submitted
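For reference, the standalone Snowflake Python driver already supports delegating login to an identity provider such as Okta instead of prompting for a password. A minimal sketch, assuming the snowflake-connector-python package; the helper function and the account/user values are illustrative, not part of the driver:

```python
# Sketch: building Snowflake connection parameters that delegate
# authentication to Okta instead of a password. Only the parameter
# names ("account", "user", "authenticator") come from the driver;
# the helper itself is illustrative.

def okta_connect_params(account, user, okta_url=None):
    """Return connect() kwargs for SSO login instead of a password.

    With authenticator="externalbrowser" the driver opens a browser and
    lets the identity provider (e.g. Okta) handle MFA; passing an Okta
    URL instead requests native Okta authentication.
    """
    return {
        "account": account,
        "user": user,
        "authenticator": okta_url or "externalbrowser",
    }

# Usage (requires: pip install snowflake-connector-python):
#   import snowflake.connector
#   conn = snowflake.connector.connect(
#       **okta_connect_params("myorg-myaccount", "jdoe@example.com"))
```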

Provide more informative error messages for database connection issues in the UI.

Currently, users encounter vague error messages in the UI when facing database connection issues, which necessitates accessing CP4D logs to identify the root cause. By enhancing the clarity and detail of these error messages, users will have a mor...
about 1 month ago in Connectivity 0 Submitted
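One way to deliver this is a thin translation layer between raw driver errors and the UI. A hedged sketch; the keywords and hint strings below are illustrative, not the actual CP4D implementation:

```python
# Sketch: mapping raw, driver-level connection errors to actionable
# user-facing messages. Keywords and hints are illustrative examples.

RAW_ERROR_HINTS = {
    "timeout": "The database host did not respond. Check the hostname, port, and firewall rules.",
    "authentication": "The credentials were rejected. Verify the username and password.",
    "ssl": "The TLS handshake failed. Check the certificate configuration.",
}

def friendly_message(raw_error):
    """Translate a raw driver error string into a user-facing hint."""
    lowered = raw_error.lower()
    for keyword, hint in RAW_ERROR_HINTS.items():
        if keyword in lowered:
            return hint
    # Unknown errors still point the user somewhere useful.
    return f"Connection failed: {raw_error} (see platform logs for details)"
```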

Enhance the "Schema filter" field functionality for IBM Cloud Database for MongoDB.

It would be beneficial if this field could display a list of available schemas, as currently, a connection cannot be created without populating the "Schema filter" field. Often, users may not know the exact schema name in advance, so it would be i...
22 days ago in Connectivity 0 Submitted

QRadar: Maintain Last Position in Cisco FMC Log Collection via eStreamer Protocol

Description: When collecting logs from Cisco Firepower Management Center (FMC) using the eStreamer protocol, an issue arises if the FMC system undergoes a reboot. During the FMC reboot, traffic logs generated on Cisco Firewalls are sent after the ...
22 days ago in Connectivity 0 Submitted
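eStreamer clients can generally request events starting from a stored position, so one way to avoid losing the logs buffered during an FMC reboot is for the collector to persist a checkpoint and resume from it. A minimal sketch; the file name and functions are illustrative, not QRadar's implementation:

```python
# Sketch: persisting the last-processed event position so collection can
# resume after an FMC reboot instead of skipping buffered events.
# The checkpoint file name and this tiny API are illustrative.

import json
import os

CHECKPOINT = "estreamer.ckpt"

def save_position(seq, path=CHECKPOINT):
    """Record the sequence/timestamp of the last event handled."""
    with open(path, "w") as f:
        json.dump({"last_seq": seq}, f)

def load_position(path=CHECKPOINT):
    """Return the saved position, or 0 to start from the beginning."""
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return json.load(f).get("last_seq", 0)
```

On reconnect, the collector would pass `load_position()` in its event stream request so the FMC replays everything generated since the checkpoint.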

Native connector for Yellowbrick bulk transactions

A native bulk loader connector for Yellowbrick would help our customers to improve the performance of the ETL processes that need to move large amounts of data to and from the Yellowbrick Data Warehouse. This connector should work like the nzload ...
about 1 year ago in Connectivity 2 Future consideration

DataDirect JDBC driver for MongoDB: support balanced string

This idea stems from the fact that the customer wanted to connect to MongoDB through our JDBC driver; however, their MongoDB team provided a balanced string with a replicaSet parameter that allows data to be replicated on the affected...
over 4 years ago in Connectivity 1 Not under consideration
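For context, the "balanced string" described is MongoDB's multi-host seed-list URI with a `replicaSet` option, which lets a driver fail over between the listed members. A minimal sketch of its shape; host names, database, and replica-set name are placeholders:

```python
# Sketch: composing a multi-host MongoDB URI with a replicaSet
# parameter - the "balanced string" form of connection string.
# All values shown are placeholders.

def replica_set_uri(hosts, db, replica_set):
    """Build a mongodb:// URI listing every replica-set member."""
    return f"mongodb://{','.join(hosts)}/{db}?replicaSet={replica_set}"

# Example:
#   replica_set_uri(["node1:27017", "node2:27017"], "appdb", "rs0")
```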

Allow writing from the platform through a Hive connector

As we develop models on the Cloud Pak platform, we are implementing the following flow: reading data tables on our on-premise data lake through a Hive connector, training an ML model in a notebook environment, and writing the output of the models back to ...
about 2 months ago in Connectivity 0 Under review

A Trino platform connector is needed for Cloud Pak for Data

Client is in need of a platform connector for Cloud Pak for Data for Trino as a data source. Please note Trino reference documentation: https://trino.io/docs/current/security.html
2 months ago in Connectivity 0 Planned for future release

Connection Pooling for DataStage

We would like DataStage to be able to use a connection pool. We have thousands of connects and disconnects in Db2, which stresses LDAP and consumes a lot of Db2 resources. It also costs a lot of unnecessary time.
over 3 years ago in Connectivity 3 Not under consideration
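The benefit the idea describes comes from paying the connect (and LDAP authentication) cost once per pooled connection rather than once per job. A minimal sketch of the mechanism; the factory below stands in for a real Db2/JDBC connect call:

```python
# Sketch: a minimal connection pool. Connections are created up front
# and reused, so the database (and LDAP, which authenticates each new
# connect) sees only `size` logins no matter how many jobs run.

import queue

class ConnectionPool:
    def __init__(self, factory, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # pay the connect cost once

    def acquire(self):
        return self._pool.get()  # blocks if all connections are in use

    def release(self, conn):
        self._pool.put(conn)
```

A production pool would additionally validate connections before reuse and handle acquire timeouts; this sketch only shows why pooling cuts the connect/disconnect churn.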

Add a platform-level connector for Microsoft Azure Fabric lakehouse

We would like to establish a connection between WKC and the Microsoft Azure Fabric lakehouse. The purpose is to pull metadata from this Microsoft platform and catalog it using IBM's WKC platform. This is the top need of our business. We would like to have...
9 months ago in Connectivity 3 Planned for future release