This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Search existing ideas
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
hostcache auto-delete function on mbatchd restart
Dynamic hosts remain in the cluster unless you intentionally remove them. The related documentation is here: https://www.ibm.com/docs/en/spectrum-lsf/10.1.0?topic=cluster-remove-dynamic-hosts Suppose we delete the hostcache file, not modify ...
In a dynamic cluster cloud environment, blaunch cannot handle resizing of the jobs. We are requesting support for the addition and deletion of hosts from an LSF cluster, with blaunch able to handle the network IO and exit with the return code from the job'...
Allow limiting CPU usage to the number of slots requested
Currently, when a user submits a job, there is no way in LSF to limit CPU consumption.
Recently, we discovered that it is possible to limit CPU usage on the OS side using cgroups, by putting a value in the 'cpu.cfs_quota_us' file of the '/sys/fs/...
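As an illustration of the cgroup approach described above (not an LSF feature), a minimal sketch of how a quota could be derived from the requested slot count might look like this; the cgroup directory and helper names are hypothetical, and writing the files requires root and an existing cgroup:

```python
# Sketch: cap a job's CPU usage at its requested slot count using the
# cgroup v1 CPU controller (cpu.cfs_quota_us / cpu.cfs_period_us).
# The cgroup path is hypothetical; a real integration would target the
# cgroup that LSF creates for the job.
import os

CFS_PERIOD_US = 100_000  # default CFS scheduler period (100 ms)

def cfs_quota_for_slots(slots: int, period_us: int = CFS_PERIOD_US) -> int:
    """Quota granting `slots` full CPUs' worth of time per period."""
    return slots * period_us

def apply_cpu_limit(cgroup_dir: str, slots: int) -> None:
    # Requires root; cgroup_dir must already exist under /sys/fs/cgroup/cpu.
    with open(os.path.join(cgroup_dir, "cpu.cfs_period_us"), "w") as f:
        f.write(str(CFS_PERIOD_US))
    with open(os.path.join(cgroup_dir, "cpu.cfs_quota_us"), "w") as f:
        f.write(str(cfs_quota_for_slots(slots)))

# A job that requested 4 slots would be capped at 4 CPUs of time:
print(cfs_quota_for_slots(4))  # 400000 microseconds per 100 ms period
```

The kernel throttles the cgroup once it has consumed its quota within each period, so a quota of slots x period caps the job at the requested number of CPUs.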
Have LSF handle ZOMBI job searching using either a BTREE or hash algorithm
A large number of ZOMBI jobs can impact operations on very large clusters, resulting in high backlogs and additional lost job statuses. Improving the ZOMBI search algorithm from a linear search to a BTREE or hash lookup would reduce this impact.
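To illustrate the proposed change (this is a sketch, not LSF's actual internal data layout), a hash-based index turns each lookup from an O(n) scan into an average O(1) probe:

```python
# Sketch contrasting a linear ZOMBI-job scan with a hash-based index.
# The job records here are illustrative, not LSF's internal structures.

def find_linear(zombi_jobs: list, job_id: int):
    # O(n): in the worst case every lookup walks the whole list.
    for job in zombi_jobs:
        if job["job_id"] == job_id:
            return job
    return None

def build_index(zombi_jobs: list) -> dict:
    # One O(n) pass builds the index; each lookup is then O(1) on average.
    return {job["job_id"]: job for job in zombi_jobs}

jobs = [{"job_id": i, "exec_host": f"host{i % 8}"} for i in range(10_000)]
index = build_index(jobs)

# Both approaches find the same record; the hash index just finds it faster.
assert find_linear(jobs, 9_999) is index[9_999]
```

With many thousands of ZOMBI jobs, repeated linear scans multiply into the dispatch backlog described above, which is why a one-time index build pays off.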
Enhance the Support Tool to be able to remove ZOMBI jobs and create a log of exec hosts and pgids that need to be cleaned
We had an issue where LSF could not keep up with dispatch due to a high number of ZOMBI jobs. We would like the ability to shut down LSF and clean up the ZOMBI jobs via a database repair, to accelerate the time-to-restore of the LSF clusters.
Do not place IBM confidential, company confidential, or personal information into any field.