IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



Ideas

Showing 576 of 18242

Feasibility of killing jobs submitted directly in the remote cluster in a multi-cluster scenario

We can see remote cluster jobs with "bjobs -m <clustername> ..." from another cluster that is a member of the MultiCluster setup, regardless of whether the jobs were forwarded to the remote cluster or submitted directly in it, but we cannot kill the jobs if they...
3 months ago in Spectrum LSF / Cloud Bursting 1 Future consideration
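
Not part of the submission above, but a minimal shell sketch of the scenario it describes, assuming a MultiCluster setup in which clusterB is the remote member; the cluster name and job ID are placeholders, and whether bkill can reach a job submitted directly in the remote cluster is exactly the gap being raised.

    # Run from the local cluster; clusterB is the remote MultiCluster member.
    bjobs -m clusterB      # lists clusterB jobs, whether they were forwarded
                           # there or submitted directly in clusterB
    bkill 12345            # works for a job forwarded from the local cluster;
                           # the idea asks for the same control over jobs
                           # submitted directly in clusterB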

LSF must not ignore empty groups if they are managed by egroup

When running `badmin reconfig`, LSB ignores lines with empty groups: "Aug 25 16:22:20 2025 11385 4 10.1 do_Groups: File /opt/lsf/conf/lsbatch/XXXX/configdir/lsb.users at line 50: No valid member in group <XXXX>; ignoring line". This is OK for g...
4 months ago in Spectrum LSF / Other 1 Needs more information
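
For context (not part of the submission): a hedged fragment of an lsb.users UserGroup section in which membership is delegated to the egroup executable via (!); the group and user names are placeholders. When egroup returns no members for such a group, `badmin reconfig` logs the "No valid member in group" message and drops the line, which is the behaviour this idea asks to change.

    # Illustrative lsb.users fragment (placeholder names). The (!) member
    # list delegates membership resolution to the egroup executable, so the
    # group may legitimately be empty at reconfiguration time.
    Begin UserGroup
    GROUP_NAME      GROUP_MEMBER
    design_grp      (alice bob)
    contractors     (!)
    End UserGroup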

Increase the available number of global job IDs for LSF

The current job ID is limited to 32 bits. With the ever-growing number of jobs and the scaling of multi-cluster use, the job ID should be extended beyond 32 bits to avoid frequent rollover and/or redundant IDs. We recommend increasing it to 64 bits or more.
10 months ago in Spectrum LSF / Scheduling 0 Planned for future release
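
A back-of-the-envelope illustration (not from the submission) of why rollover becomes a concern, assuming an effective signed 32-bit ID space and purely illustrative submission rates:

    # Days until job ID rollover at a given submission rate, assuming an
    # effective signed 32-bit ID space (~2.1 billion IDs).
    max_id=2147483647                  # 2^31 - 1
    for rate in 1000000 10000000; do   # jobs submitted per day (illustrative)
        echo "$rate jobs/day -> $(( max_id / rate )) days until rollover"
    done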

Ability to have online messages detailing why go-lives are queued and pending processing

Product and tech support teams would have more visibility into why a go-live is in queue. Specifically, support staff would be able to differentiate deadlock from BAU processing situations. This would assist staff with determining if a...
over 2 years ago in IBM Safer Payments / Business 0 Future consideration

Allow comments on defined risk list entries to be loaded when entries are uploaded from a file

Currently, when a defined risk list is loaded from a file, only the main item of the list can be auto-loaded; comments on this item can be entered only manually, which basically means the client will either have to do this for hundreds and thousands...
11 months ago in IBM Safer Payments / Business 0 Under review

Data Manager enhancement: Mark job as DONE after file reaches final destination

We want the main job to be marked as DONE only after the data file has been successfully transferred to the user-specified destination. Currently, with the integration of LSF and Data Manager, the main job is marked as DONE as soon as the 'bstage ...
7 months ago in Spectrum LSF / Data Management 1 Not under consideration
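
A hedged sketch (not from the submission) of the kind of job script involved, assuming LSF Data Manager's bstage out command; the -src/-dst options, file names, and paths are placeholders to be checked against the Data Manager documentation.

    #!/bin/sh
    # Placeholder job body that produces a result file.
    ./produce_results.sh output.dat
    # Ask Data Manager to stage the result out (options assumed, not verified).
    bstage out -src output.dat -dst /archive/output.dat
    # Today the main job can be marked DONE once the stage-out request is
    # accepted; the idea asks for DONE only after the transfer has completed.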

GSLA Resource Loaning - Add the ability to loan to specific users or projects

When a GSLA resource pool is defined for a specific type of workload, it is not practical or desirable to loan to just any other type of job profile. The workaround today is to include a resource limit definition to prevent this from occurring. ...
7 months ago in Spectrum LSF / Scheduling 2 Future consideration
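
For readers unfamiliar with the mechanism (not part of the submission): a hedged lsb.resources sketch of a guaranteed resource pool whose loan policies today are scoped by queue and duration; the names, shares, and values are placeholders. The idea asks for an equivalent way to scope loans to specific users or projects, rather than relying on the resource-limit workaround mentioned above.

    # Illustrative lsb.resources fragment (placeholder names and values).
    Begin GuaranteedResourcePool
    NAME            = eda_pool
    TYPE            = slots
    HOSTS           = hostgroup_eda
    DISTRIBUTION    = ([sla_eda, 100%])
    # Loans are currently scoped by queue and duration; the request is for
    # user- or project-level scoping as well.
    LOAN_POLICIES   = QUEUES[short] DURATION[30]
    End GuaranteedResourcePool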

Add Date Range to Completed Simulation Result Page

When using the simulation date range of 'server time absolute', you cannot see which date range was selected after the simulation completes. You can only see the record numbers that were simulated, which isn't that useful at times...
4 months ago in IBM Safer Payments / Business 0 Submitted

Job packing on GPUs when MIG is enabled

When MIG (Multi-Instance GPU) is enabled on a multi-GPU server, LSF should pack jobs that request a smaller number of MIG instances onto one physical GPU to leave room for jobs that request a larger number of MIG instances. Currently, LSF distributes the jobs across different physical...
4 months ago in Spectrum LSF / Scheduling 1 Planned for future release

Make vm_type details available in the lsb.acct file

The vm_type field is available only in the event file. Can this be made available in the acct file as well? We extract a lot of job information from the acct file, so it would be good to have vm_type details in the same file.
over 1 year ago in Spectrum LSF / Other 0 Future consideration