IBM Data and AI Ideas Portal for Customers


Status Delivered
Workspace Spectrum LSF
Components Scheduling
Created by Guest
Created on Oct 31, 2022

Allow limiting CPU usage to the number of slots requested

Hello,

Currently, when a user submits a job, there is no way in LSF to limit its CPU consumption. We recently discovered that it is possible to limit CPU usage on the OS side using cgroups, by writing a value into the 'cpu.cfs_quota_us' file of the '/sys/fs/cgroup/cpu/lsf/gnb/job.' directory (a multiple of the 'cpu.cfs_period_us' value):

    cd /sys/fs/cgroup/cpu/lsf/gnb/job.
    # multiply the cpu.cfs_period_us value by the number of CPUs to allow
    # e.g. to limit the job to 20 cores:
    cat cpu.cfs_period_us
    100000
    echo `expr 20 \* \`cat cpu.cfs_period_us\` ` > cpu.cfs_quota_us
    cat cpu.cfs_quota_us
    2000000

In this example, the total CPU consumed by the job cannot exceed 2000% (as shown by top), corresponding to a 'bsub -n 20' request. Perhaps the effect of the 'LSB_RESOURCE_ENFORCE' parameter could be extended to do this.

A smart additional feature, if possible: a 'tolerance' factor similar to the CPU_FACTOR parameter, as a way to weight the user's request, either 'factor_value' x 'slot_request' or 'slot_request' + 'factor_value' (both forms are interesting). For our 20-slot example, 20 x 1.05 or 20 + 1 would limit CPU consumption to 21 CPUs (2100% in top).

With this feature, we would drastically reduce CPU overload on our servers and guarantee job run times for all end users.

Regards,
Olivier

PS: my previous request, 'cores limitations using affinity [SPCLSF-I-1439]', is less versatile: job tasks are bound to CPUs, and we cannot allocate more CPUs than are available on the server.
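The quota arithmetic above, together with the proposed tolerance factor, can be sketched in a few lines of shell. Writing the real cgroup files would require root and a live job directory, so only the arithmetic is shown; the period value is the common 100000-microsecond default, and variable names are illustrative, not LSF parameters:

```shell
# Sketch of the quota computation described above, assuming the common
# cpu.cfs_period_us default of 100000 (100 ms).
PERIOD_US=100000                     # typical cpu.cfs_period_us value
SLOTS=20                             # slots requested via 'bsub -n 20'

# Hard cap matching the slot request (the value written to cpu.cfs_quota_us)
QUOTA_US=$((SLOTS * PERIOD_US))
echo "$QUOTA_US"                     # 2000000 -> 2000% CPU cap in top

# Proposed tolerance factor, multiplicative form: slots * 1.05
MUL_SLOTS=$(awk -v s="$SLOTS" 'BEGIN { printf "%d", s * 1.05 }')
# Proposed tolerance factor, additive form: slots + 1
ADD_SLOTS=$((SLOTS + 1))
echo "$MUL_SLOTS $ADD_SLOTS"         # 21 21

# Either form caps the job at 21 CPUs:
echo $((ADD_SLOTS * PERIOD_US))      # 2100000 -> 2100% in top
```

Both tolerance forms land on the same 21-CPU cap for this 20-slot example; they diverge for other slot counts (e.g. 100 slots: 105 vs 101).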
Needed By Month
  • Admin
    Bill McMillan
    Nov 23, 2023

    Available as build601691 and will be included in Service Pack 15

  • Admin
    Bill McMillan
    Aug 7, 2023

    Support for cpu.shares will be added in 2H23 to allow more flexible allocation.
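For context, cpu.shares (cgroup v1) is a proportional weight rather than a hard ceiling like cpu.cfs_quota_us: it only constrains a job relative to sibling cgroups when the host is CPU-contended. A minimal sketch of per-job share arithmetic, where SHARE_PER_SLOT is an illustrative scaling factor, not an LSF parameter:

```shell
# cpu.shares is a relative weight (cgroup v1 default: 1024), unlike the
# hard cap of cpu.cfs_quota_us; it only matters under CPU contention.
SLOTS=20                             # slots requested via 'bsub -n 20'
SHARE_PER_SLOT=1024                  # illustrative scaling factor
SHARES=$((SLOTS * SHARE_PER_SLOT))
echo "$SHARES"                       # 20480 -> the value written to cpu.shares
```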