IBM Data Platform Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




Ideas

Showing 14 of 18234

Use the engines of the lakehouse in Git integrated projects

We have an instance of watsonx.data in CP4D, installed on OpenShift in Azure Cloud. Within watsonx.data we have buckets representing Azure containers within an Azure Blob storage account. In projects that are integrated with Git, when we need to configure ...

A GitHub integration ensures enterprise-grade governance, automation, and agility for data engineering workflows. By leveraging GitHub and LevelUp tooling, clients gain robust version control, rollback capabilities, and automated testing/deployment for pipelines and DAGs. It reduces operational risk, accelerates delivery, and aligns with modern DevOps practices.

Large enterprise clients like AT&T, who manage complex data pipelines across hybrid environments, will benefit from streamlined CI/CD processes, improved security, and consistent governance across services. How should it work? GitHub Integra...

Ability to disable job requeuing in google resource connector when jobs exit with TERM_RC_RECLAIM

We have jobs that should not be requeued after partially running, because requeuing corrupts the job output data. We would like a queue-level parameter to disable requeuing of jobs that exit with TERM_RC_RECLAIM. To be consistent with other features,...
7 months ago in Spectrum LSF / Cloud Bursting 1 Future consideration
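
A minimal sketch of how such a switch might look in lsb.queues; the RECLAIM_REQUEUE keyword below is purely hypothetical and only illustrates the requested queue-level parameter, it is not an existing LSF keyword:

  Begin Queue
  QUEUE_NAME       = gcp_burst
  # Hypothetical parameter: do not requeue jobs that exit with TERM_RC_RECLAIM
  RECLAIM_REQUEUE  = N
  End Queue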

Support new connection type for watsonx.data and support for Spark engines in it

Documentation: https://ibm-my.sharepoint.com/:w:/p/shrinivas_kulkarni_in/EcUbCoYhyVRJmWrwscDVUx8B0jxHp5cqCLtr4E7xuCGDMw?e=tbj8w8 Future State Support for new connection type to point to a specific Spark Engine of watsonx.data, either on SaaS or so...
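
A rough sketch of what such a connection definition might expose, assuming hypothetical property names (connection_type, engine_id, and the auth fields below are illustrative only, not an existing watsonx.data connection schema):

  {
    "connection_type": "watsonxdata-spark",
    "instance_url": "https://<watsonx.data-host>",
    "engine_id": "<spark-engine-id>",
    "auth": { "method": "apikey", "apikey": "<from-secret-store>" }
  }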

Feasibility to kill the jobs if they are submitted directly in the remote cluster for multi cluster scenario

We can see remote cluster jobs with "bjobs -m <clustername> ..." from another cluster that is a member of the MultiCluster (MC) configuration, whether the jobs are forwarded to the remote cluster or submitted directly in the remote cluster, but we cannot kill the jobs if they...
3 months ago in Spectrum LSF / Cloud Bursting 1 Future consideration
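
For context, the commands below contrast what works today with the requested behavior; the bkill form is hypothetical syntax shown only to illustrate the idea:

  # Works today: list jobs on the remote cluster, whether forwarded or submitted directly there
  bjobs -m <clustername> -u all

  # Requested (hypothetical): kill a job that was submitted directly on the remote cluster
  bkill -m <clustername> <jobid>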

Support MIG declaration in resource connector templates for gpu VMs

We provision new GPU hosts (with MIG enabled via a post-install script) in the cloud using the resource connector, but when a job contains "mig" in its GPU resource requirement, it does not provision the host. When we checked how to declare MIG in the re...
10 months ago in Spectrum LSF / Cloud Bursting 0 Planned for future release
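
A sketch of how a MIG profile might be declared in a resource connector template (for example googleprov_templates.json); the "mig" attribute is hypothetical and shown only to illustrate the requested capability:

  {
    "templateId": "gpu-mig-template",
    "maxNumber": 10,
    "attributes": {
      "type":  ["String",  "X86_64"],
      "ngpus": ["Numeric", "2"],
      "mig":   ["String",  "1g.5gb"]
    }
  }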

Request for Product Enhancement – Low Latency Data Processing Capabilities in watsonx.data

The customer is currently using watsonx.data to support a high-partition, high-ingestion analytics platform. While the system is now stable after several patches and fixes, they are looking ahead to future enhancements that can meet their true low...

Resource connector provider configuration file (such as cyclecloudprov_config.json) should have encrypted passwords/credentials

A security leak will be avoided; clear-text passwords are risky to store in configuration files.
about 3 years ago in Spectrum LSF / Cloud Bursting 0 Future consideration
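
A sketch of the requested change; the field name and the {ENC} prefix are illustrative only, not an existing cyclecloudprov_config.json option:

  # Today: clear-text credential, which this idea wants to avoid
  "cyclecloud_password": "PlainTextSecret"

  # Requested (hypothetical): an encrypted value or a reference to an external secret store
  "cyclecloud_password": "{ENC}AAABb2JmdXNjYXRlZA=="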

Forward, without local processing, bsub options to remote clusters

In multi-cluster scenarios, there should be no expectation that all the clusters are configured the same and/or support the same resources, SLAs, architectures, etc. Currently, when launching jobs with -sla, -m, and other options, the local schedu...
over 1 year ago in Spectrum LSF / Cloud Bursting 1 Future consideration
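
For context, a submission of the kind described above, where the SLA and host group exist only on the remote cluster (the names are illustrative):

  # "chip_signoff" and "rome_hosts" are defined only on the remote cluster;
  # per the request above, these options are currently processed by the local scheduler
  bsub -sla chip_signoff -m rome_hosts -q send2remote ./run_job.sh

The request is for the local cluster to forward such options unmodified and let the remote scheduler interpret them.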

LSF Resource Connector should allow start/stop of *existing* AWS VMs.

We have procedural limitations with creating new instances in AWS on-demand by LSF Resource Connector (RC) for security reasons. Our preference would be to create instances ahead of time and let LSF RC start and stop the instances based on demand....
over 2 years ago in Spectrum LSF / Cloud Bursting 3 Functionality already exists
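
For reference, the start/stop pattern the request describes, expressed with the AWS CLI rather than any LSF Resource Connector configuration (the instance ID is a placeholder):

  # Start a pre-created instance when demand rises
  aws ec2 start-instances --instance-ids i-0123456789abcdef0

  # Stop it again when it is no longer needed
  aws ec2 stop-instances --instance-ids i-0123456789abcdef0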