IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com


Status: Not under consideration
Workspace: Spectrum LSF
Created by: Guest
Created on: Mar 26, 2020

Huge *.out files are getting created in /tmp

Sometimes LSF job STDOUT files, such as $JOB_SPOOL_DIR/..out, grow to enormous sizes and fill up the filesystem. Neither Linux nor LSF provides a mechanism to limit the size of those specific files without also limiting all other files on the filesystem or limiting all files for a given job. Neither of these alternatives is acceptable, since jobs require other large data files for various reasons. We need a way to impose a maximum file size limit, such as 20GB, on all LSF job STDOUT files. This limit could be imposed in one of several ways: for the entire cluster using an mbatchd parameter, for a given queue using a queue parameter, for all jobs from a given user using a user shell variable, or job-by-job with a bsub option. We also don't want jobs to be killed when their STDOUT file reaches the limit. Instead, the output stream should simply continue writing over the file without causing it to grow further, or some other solution should be found that neither kills the job nor effectively stops it from functioning.
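For context, the per-job mechanisms that exist today are sketched below; they illustrate the unacceptable alternatives described above, since each one either discards the output or caps every file the job writes. The application name, output file name, and sizes are placeholders.

    # Discard stdout entirely if its content is not needed:
    bsub -o /dev/null ./my_app

    # Cap file size for the job as a whole (bsub -F takes a soft per-process
    # limit in KB, so 20 GB = 20971520 KB). This applies to every file the
    # job writes, not just its STDOUT file, which is the limitation above:
    bsub -F 20971520 -o job_%J.out ./my_app

    # The equivalent cap from inside a job script (units vary by shell;
    # bash interprets ulimit -f in 1024-byte increments):
    ulimit -f 20971520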

  • Guest | Aug 14, 2020

    Apologies for the delay in responding. This request was submitted under Explorer rather than LSF.

    We've reviewed this but don't have a solution that would keep all of the people happy all of the time.

    * If the content of stdout isn't actually needed, then -o /dev/null would be the simplest approach.
    * Moving the file position back to the start when the limit is reached would result in a messy file that may not be useful to the user as it would be a mix of old and new output.
    * Deleting stdout at the limit and starting again: the user may care about what was in that 20GB, so something useful may be deleted (see the sketch after this list).
    * Stop writing the file when the limit is reached - something useful may be lost.
    * Whatever the action taken, it is only desirable when there is a critical lack of disk space and something must be done to prevent this job, or other jobs, from dying.
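
    For illustration only, a periodic cleanup along the lines of the third bullet might look like the following. The file argument, the 20GB limit, and the use of GNU stat/truncate are assumptions for this sketch, and whatever was already in the file is lost when it is truncated:

        #!/bin/sh
        # Illustrative only: free the space used by a job's stdout file once it
        # passes a threshold, without killing the process that is writing it.
        OUTFILE="$1"
        LIMIT_BYTES=$((20 * 1024 * 1024 * 1024))   # example: 20 GB

        SIZE=$(stat -c %s "$OUTFILE" 2>/dev/null || echo 0)
        if [ "$SIZE" -gt "$LIMIT_BYTES" ]; then
            # Truncating keeps the same inode, so the writer carries on unaffected.
            # If the writer is not in append mode the file becomes sparse and its
            # apparent size still grows, but the disk blocks are released.
            truncate -s 0 "$OUTFILE"
        fi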

    The "correct" behaviour seems to be very site, user and application specific.

    An alternative approach would be to leverage the Application Watchdog functionality introduced in Service Pack 9. This allows you to define an application-specific action or script that is periodically run while that application is running. It could check disk space, delete files, suspend the job, notify the user, or take whatever other action is desirable.
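
    As a sketch only, a watchdog action along those lines might check free space under the spool directory and suspend the job rather than let the filesystem fill up. The directory, threshold, and use of mail are placeholders, LSB_JOBID is assumed to be set in the watchdog's environment, and the exact WATCHDOG configuration syntax for lsb.applications should be taken from the Service Pack 9 documentation:

        #!/bin/sh
        # Hypothetical watchdog action: monitor free space and suspend the job.
        SPOOL_DIR="${JOB_SPOOL_DIR:-/tmp}"    # assumed location of the job's .out file
        MIN_FREE_KB=$((50 * 1024 * 1024))     # example threshold: 50 GB free

        # Free space, in KB, on the filesystem holding the spool directory.
        FREE_KB=$(df -Pk "$SPOOL_DIR" | awk 'NR==2 {print $4}')

        if [ "$FREE_KB" -lt "$MIN_FREE_KB" ]; then
            # Suspend rather than kill, so the user or an admin can clean up
            # and resume the job later with bresume.
            bstop "$LSB_JOBID"
            echo "Job $LSB_JOBID suspended: ${FREE_KB} KB free under $SPOOL_DIR" |
                mail -s "LSF watchdog: low disk space" "$USER"
        fi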

    Have you considered this?