IBM Data Platform Ideas Portal for Customers




Status Future consideration
Workspace Spectrum LSF
Components Scheduling
Created by Guest
Created on Oct 2, 2024

Map memory reservation to cgroup memory.low

Mapping a job's memory reservation to the cgroup memory.low setting could protect jobs from running short of memory while still allowing them to use more memory when it is available. Using a soft limit instead of a hard limit should spare users from always erring on the safe side by reserving (far) too much memory. A hard limit can still be useful to prevent excessive over-use.
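As a sketch of what this could look like on a cgroup v2 host (the paths, the `parse_mem` helper, and the function names are illustrative, not LSF's actual implementation):

```python
from pathlib import Path

_UNITS = {"KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}

def parse_mem(spec: str) -> int:
    """Convert a size string such as '12GB' to a byte count."""
    for unit, factor in _UNITS.items():
        if spec.upper().endswith(unit):
            return int(float(spec[:-len(unit)])) * factor
    return int(spec)  # plain byte count

def protect_job(cgroup_dir: str, reservation: str) -> None:
    """Write the job's memory reservation to memory.low, giving it a
    best-effort guarantee without capping usage above the reservation."""
    Path(cgroup_dir, "memory.low").write_text(str(parse_mem(reservation)))

# e.g. protect_job("/sys/fs/cgroup/lsf/job.123", "12GB")
```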

Needed By Quarter
  • Guest
    Jan 19, 2026

    Sorry, I replied to your first message, not the latest one. Our jobs have a very broad range of memory usage, and it is often hard to estimate the actual usage up front. Setting memory.max or memory.high would then motivate users to stay on the safe side and reserve enough that all jobs in their batch run smoothly. We frequently have cases where, out of 1000 jobs, 990 use 10 Gbyte and 10 have a short peak at 30 Gbyte. Users would then request 40 Gbyte for all of them.
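    The over-reservation in that example can be quantified (plain arithmetic on the numbers above):

```python
# 990 jobs peaking at 10 GByte and 10 jobs peaking at 30 GByte,
# where every user requests 40 GByte to stay on the safe side of
# a hard limit.
actual_peak_gb = 990 * 10 + 10 * 30   # what the jobs really need
safe_side_gb = 1000 * 40              # what a hard limit makes users reserve

print(actual_peak_gb)                  # 10200
print(safe_side_gb)                    # 40000
print(safe_side_gb / actual_peak_gb)   # roughly 3.9x over-reservation
```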

    With memory.min and memory.low, however, we can give the job a guarantee, so that we can use the estimate without adding a safety margin.

    Memory.max and memory.high would apply to all LSF jobs combined, e.g. /sys/fs/cgroup/lsf/memory.{max,high}, to protect the server processes (such as sshd, sssd, and even the LSF daemons themselves). If all jobs exceed their reservation, those exceeding it the most will be partly paged out. We have tools that are very sensitive to paging and others that are not.

    Can we have a call about this topic?

  • Guest
    Jan 19, 2026

    What conflict are you referring to? Isn't the condition simply that, if memory.low (or memory.min) is set to the memory reservation, the sum of all memory.low settings for LSF jobs must not exceed the server's memory?

    Especially on large systems with e.g. 192 cores and 1.5 Tbyte RAM, memory protection like this is valuable. For example, a single large-memory job for which the right amount of memory is reserved must be protected from a batch of 100 jobs each going "only" 10 Gbyte over their 5 Gbyte reservation.

    The advantage of memory.low over memory.max is that the kernel OOM killer will not intervene unless it is actually needed.

    How could a memory reservation be changed so that it exceeds the available RAM? Isn't that a strange edge case?

  • Admin
    Bill McMillan
    Nov 20, 2025

    Michel,

    The typical examples for memory.min and memory.low involve ensuring that critical processes (such as an in-memory database) are kept in memory and not starved or swapped out.

    memory.max : hard limit; if the cgroup exceeds it and nothing can be reclaimed, the OOM killer terminates processes in it.

    memory.high : soft limit; above it the kernel throttles the cgroup and reclaims its memory aggressively, but does not kill.

    memory.low : best-effort protection; memory below this amount is reclaimed only if no unprotected memory is available elsewhere, and can still be paged out.

    memory.min : hard protection; memory below this amount is never reclaimed, which forces reclaim (and possibly the OOM killer) onto other cgroups.


    Today, we effectively set memory.max to the hard limit for the job, and memory.high to the reservation. e.g.

    • memory.max : bsub -M 16GB

    • memory.high : bsub -R "rusage[mem=12GB]"
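    A minimal sketch of that mapping (the function and names are illustrative, not LSF source):

```python
def cgroup_settings(hard_limit_gb: float, rusage_gb: float) -> dict:
    """Map the bsub options to cgroup v2 memory files as described above:
    -M becomes the hard limit (memory.max), rusage[mem=...] the soft
    limit (memory.high)."""
    GB = 2**30
    return {
        "memory.max": int(hard_limit_gb * GB),   # bsub -M 16GB
        "memory.high": int(rusage_gb * GB),      # bsub -R "rusage[mem=12GB]"
    }
```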

    We don't see a generic use case for setting memory.low or memory.min, e.g.

    • memory.low = 8GB

    • memory.min = 4GB

    For a generic job, what would be the value of saying (for example) that 1/4 of its maximum memory is guaranteed never to be swapped out? If the system is under extreme memory pressure, stopping the OOM killer from killing a job is probably not the best idea.

    Do you have an example of how you would want to use this?


    -Bill

  • Guest
    Nov 17, 2025

    I think it should be possible to set boundary conditions or restrictions under which some changes are not allowed. Overcommitting on memory.low (or memory.min) must not be possible, but it should be on memory.high or memory.max. Aligning the memory.{min|low} setting with rusage[mem=...] could provide this automatically.

    Another thing to keep in mind is that the out-of-memory killer may act independently of the cgroups. Ideally, the job exceeding its memory.{min|low} by the most should be terminated in case of a memory overload.
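    That termination policy could be sketched as follows, assuming per-job usage and reservation figures are available (all names and the data shape are hypothetical):

```python
def pick_victim(jobs: dict) -> str:
    """Given {job_id: (usage_bytes, reservation_bytes)}, return the job
    exceeding its reservation by the largest amount -- the candidate to
    terminate under memory overload, per the policy suggested above."""
    return max(jobs, key=lambda j: jobs[j][0] - jobs[j][1])

# pick_victim({"a": (12, 10), "b": (31, 10), "c": (9, 10)}) -> "b"
# ("b" is 21 over its reservation, vs. 2 over and 1 under)
```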

  • Guest
    Oct 8, 2024

    I understand your example. But isn't it the case that with the soft limit some memory would be paged out instead of a job being killed? We could also mitigate this somewhat by keeping a margin on the host, so that e.g. on a 64 Gbyte host only 60 Gbyte can be reserved.

  • Admin
    Bill McMillan
    Oct 7, 2024

    We did previously consider this when we were adding cgroup2 support, something along the lines of:

    • memory.high = -M hard_memory_limit

    • memory.low = rusage[mem=xyz]

    From various developer discussion forums, this approach appeared to work well when there were just a few jobs on the machine; but on a high-core-count machine with many jobs (and thus many cgroups), there were concerns about how memory management would behave under multiple conflicts.

    E.g. if the machine had 64GB and 16 jobs each with memory.low=4GB, and they all actually consumed it, so that the system.slice was critically low on memory, the OOM killer could end up killing processes from all 16 rather than picking just one.

    Even with weighting across all the cgroups, you could conceivably hit the edge case where the user of job1 does a bmod to rusage[mem=16gb], which then starts forcing others to swap or be killed.
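    That bmod edge case could be closed by rechecking the host's guarantee budget on every modification; a sketch under the assumption that the scheduler tracks each job's memory.low in Gbyte (the function, the margin default, and the numbers are illustrative):

```python
def can_set_low(host_gb: int, current_lows_gb: list,
                old_gb: int, new_gb: int, margin_gb: int = 4) -> bool:
    """Allow a job's memory.low to change from old_gb to new_gb only if
    the summed guarantees still fit in host memory minus a margin kept
    free for system.slice (sshd, sssd, the LSF daemons)."""
    budget = host_gb - margin_gb
    return sum(current_lows_gb) - old_gb + new_gb <= budget

# 64 GB host with a 4 GB margin: 15 jobs at 4 GB fill the 60 GB budget,
# so raising one job to 16 GB (72 GB of guarantees) would be rejected,
# while keeping it at 4 GB remains allowed.
```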

    That said, there have been quite a few enhancements to the cgroup controller over the last few years, and this is something that would be worth revisiting as a future roadmap project.