IBM Data and AI Ideas Portal for Customers


This portal is to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com


Status Not under consideration
Workspace Spectrum LSF
Components Scheduling
Created by Guest
Created on Feb 7, 2023

Enable an option for hardcoding the GPU memory request, as is already available for the CPU memory request

At the moment IBM offers a way to hardcode the CPU memory request, and we have implemented this in our cluster with esub scripting. We want the same capability for GPU memory (gmem). I have raised a ticket on our support portal and was told it is not available.
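For illustration, an esub along the following lines can force a CPU memory limit at submission time. This is a minimal sketch rather than a production script: LSB_SUB_RLIMIT_RSS is assumed here to be the esub parameter behind bsub -M, and the cap value is an arbitrary example, so check the exact LSB_SUB_* names for your LSF level.

    #!/bin/sh
    # esub sketch: force every job's memory limit (bsub -M) to a fixed value.
    # LSF exports LSB_SUB_PARM_FILE (the user's submission options) and
    # LSB_SUB_MODIFY_FILE (where an esub writes overrides).

    HARD_MEM_LIMIT=65536   # example cap; units follow LSF_UNIT_FOR_LIMITS in lsf.conf

    # Load the user's submission options as shell variables
    # (optional for a blanket override like this one).
    . "$LSB_SUB_PARM_FILE"

    # Override the memory limit regardless of what the user requested.
    echo "LSB_SUB_RLIMIT_RSS=$HARD_MEM_LIMIT" >> "$LSB_SUB_MODIFY_FILE"

    # Accept the (modified) submission; exiting with $LSB_SUB_ABORT_VALUE would reject it.
    exit 0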

Please plan to implement this feature in LSF. This is one of the main drawbacks of LSF at the moment, and our users are in high need of it.

Needed By Quarter
  • Admin
    Bill McMillan
    Feb 9, 2023

We have discussed this matter with NVIDIA, and there is currently no mechanism in their API to enforce GPU memory usage. Their recommendation is that if you need to use less memory than the card provides, then using a card that supports MIG and subdividing the GPU is the way to go (see the sketch after the comments).

    As there is no underlying way to implement this, we cannot progress this request.

    We can revisit it in the future, as and when the NVIDIA API provides a way to do it.


  • Guest
    Feb 8, 2023

Yes, we want a bsub -Mg (max gmem) or bsub -gpu "memlimit:xxxG"-style option that will prevent users' GPU processes from exceeding their request.
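Regarding the MIG recommendation above, the sketch below shows roughly how a GPU can be subdivided outside of LSF with nvidia-smi. The profile name 1g.10gb is only an example (supported profiles vary by card), and enabling MIG mode requires root and may need the GPU to be idle or reset; treat this as an illustration rather than a recipe.

    # Enable MIG mode on GPU 0 (root required; the GPU may need to be idle/reset).
    nvidia-smi -i 0 -mig 1

    # List the GPU instance profiles this card supports (names and sizes vary by model).
    nvidia-smi mig -lgip

    # Create two example GPU instances and their default compute instances (-C).
    # "1g.10gb" is an example profile name, not universal.
    nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb -C

    # Verify the resulting MIG devices.
    nvidia-smi -L

If the LSF level in use supports MIG scheduling, jobs can then be placed onto these instances, which is currently the closest approximation of a hard gmem cap.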