IBM Data Platform Ideas Portal for Customers


This portal is to open public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



Status Future consideration
Workspace Spectrum LSF
Components Administration
Created by Guest
Created on Aug 21, 2025

Queue-level cgroup enablement

Hi - we have a large cluster that services many different types of workloads. To better "bucket" these workloads, we have queues set up with rules that define how jobs must run (or be submitted). One problem we have is that some workloads really need their compute reservations enforced (via cgroups) so that they don't impede other jobs running on the same machine. We want to enable cgroups, but this can currently only be done at the cluster level, which impacts the entire set of queues, including jobs that cannot be so strictly bounded. I am requesting a feature to enable cgroups at the queue level (lsb.queues) rather than only at the cluster level (lsf.conf).
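
For reference, cgroup enforcement is currently switched on cluster-wide in lsf.conf. A rough sketch of what we are after is below; the per-queue parameter name is purely hypothetical and does not exist in LSF today:

  # lsf.conf - current behaviour: enforcement applies to the whole cluster
  LSB_RESOURCE_ENFORCE="cpu memory"

  # lsb.queues - requested behaviour (RESOURCE_ENFORCE is a made-up,
  # illustrative keyword, not an existing LSF parameter)
  Begin Queue
  QUEUE_NAME       = strict
  RESOURCE_ENFORCE = cpu memory
  End Queue

  Begin Queue
  QUEUE_NAME       = flexible
  # no cgroup enforcement for jobs submitted to this queue
  End Queue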

 

Thanks!

Needed By: Month
  • Guest
    Nov 25, 2025

    CPU shares is the way to go. Then, use NUMA affinity on every job (basically same[numa]), with an exception for really large memory jobs.
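
    One possible reading of that suggestion as a submission string; the exact affinity syntax here is an assumption, so check the affinity documentation for your LSF version:

    # request 4 cores, all placed within a single NUMA node (assumed syntax)
    bsub -R "affinity[core(4,same=numa)]" ./my_app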

  • Admin
    Bill McMillan
    Oct 3, 2025

    Robert,
    From your support ticket, the first three parameters do need to be set globally, as they control how we collect resource information from the host.

    LSF_LINUX_CGROUP_ACCT=Y 
    LSF_REPLACE_PIM_WITH_LINUX_CGROUP=Y
    LSB_RESOURCE_ENFORCE="cpu memory"

    If the job itself has no CPU limit or memory limit specified, then the third parameter has no effect.
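
    As an illustration, the job would need to be submitted with explicit limits for that enforcement to apply, for instance (values are examples only, and the unit taken by -M depends on LSF_UNIT_FOR_LIMITS):

    # per-job memory limit plus a CPU time limit, so LSB_RESOURCE_ENFORCE
    # has something to act on
    bsub -M 8192 -c 60 ./my_app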


    Prior to service pack 14, the job would only be moved into its own cgroup if affinity[core(1)] was added to the resource requirements.

    The parameter

    LSF_CGROUP_CORE_AUTO_CREATE=Y 

    was added to simplify that, since many users (and admins) often forgot to add it and then wondered why the job had not been strictly bound.

    So the simplest solution is to leave that parameter as N, and add affinity[core(1)] to the jobs/apps that you want strictly bound.
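
    For example, that can be done per job at submission time, or per application profile via RES_REQ (the profile below is only an illustration):

    # per job:
    bsub -R "affinity[core(1)]" ./my_app

    # or per application profile in lsb.applications:
    Begin Application
    NAME    = strictly_bound
    RES_REQ = affinity[core(1)]
    End Application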

    In service pack 15, cpu.shares and cpu.weight are also set automatically to give the job a minimum CPU time allocation proportional to the cores requested divided by the total cores on the host. This can be scaled using CGROUP_CPU_SHARES_FACTOR or disabled with LSB_CGROUP_CPU_SHARES_OLD_DEFAULT=Y. This does not require the job to be bound to specific cores.
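
    As a rough worked example (the host size is assumed purely for illustration): on a 64-core host, a job that requests 4 cores gets a minimum CPU share of about 4/64, roughly 6% of the host, enforced through cpu.shares/cpu.weight, while remaining free to use idle CPU beyond that minimum. CGROUP_CPU_SHARES_FACTOR scales that minimum.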

    There will be some additional functionality, possibly later this quarter, which will also enable a maximum percentage share of the host, again, without the need to bind to specific cores (so auto_create=n).


    We will consider moving LSF_CGROUP_CORE_AUTO_CREATE to an application/queue-level parameter in a future release.