This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Search existing ideas
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own.
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Specific links you will want to bookmark for future use
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to learn more about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
IBM Employees should enter Ideas at https://ideas.ibm.com
Olivier - a patch for this is now available: ls-10.1-build602016
I would consider writing a wrapper to emulate lmstat from RTM data. It's not too difficult, but it would be awesome just to have LS be able to switch directions with a credential and fall back to lmstat/rlmstat if the service (RTM) was down.
It would be really nice if we could flip a switch and get the data directly from RTM. Our RTM servers have very accurate, up-to-date data and are very responsive. It would take milliseconds to get all the data that can at times take minutes when using lmstat and rlmstat.
Hi Olivier!
Thanks for the feedback. We'll consider this in our future planning.
Larry
Hi Bill,
As discussed with MingLiang ZU from your company this morning, our usage of LS is limited to a few features out of the hundreds served per license service. By doing an lmstat -a per license service, we create roughly 1000x the load of an lmstat -f <feature> per feature.
We already put a wrapper in place for one license service that sometimes takes more than 10 minutes for an lmstat -a (not acceptable for LS usage) versus 15 seconds when querying per requested feature (2 of 414 features in use, 3k vs. 11k tokens in use).
Maybe a switch parameter, per SD (service domain), to select the query mode could be a solution?
Otherwise, triggering an lmstat -f only for the features listed in the SD could be another solution.
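For reference, the two query styles differ only in the lmstat arguments. A sketch of the comparison, where the server address (27000@licsrv1) and feature names (FEAT_A, FEAT_B) are hypothetical placeholders:

```shell
# Full walk: reports every feature the daemon serves (all 414 in the case
# above) -- this is the call that can take minutes on a loaded server.
lmutil lmstat -c 27000@licsrv1 -a

# Targeted queries: one "-f" call per feature of interest -- seconds each,
# even when only 2 of the 414 features are actually in use.
lmutil lmstat -c 27000@licsrv1 -f FEAT_A
lmutil lmstat -c 27000@licsrv1 -f FEAT_B
```

These commands require a reachable FlexNet license server, so the fragment is illustrative rather than directly runnable.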
In terms of robustness, the wrapper solution is risky: we must track any change in the lmstat output format, any change in the features used in LS, any change in the license server list, etc.
We would appreciate a robust solution.
Regards,
Olivier
Hi Olivier, a very long time ago we used to query by feature, which put a higher load on lmgrd and could cause instability in the license quorum, so we moved to the query-all approach. While it can be slower, it seemed to have less impact on lmgrd itself.
What you have suggested could be achieved with a wrapper around blcollect/lmstat/lmutil:
blcollect --------> wrapper --------> license server
Configure the wrapper script in the lsf.licensescheduler file.
blcollect calls the wrapper instead of calling lmstat/lmutil directly.
The wrapper's input is port@server, the feature list, a timeout, and the location of lmstat/lmutil.
Its output is the combined feature information, similar to the output of lmstat -a.
The wrapper calls the license server per specified feature and combines all the output before returning.
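A minimal sketch of such a wrapper, assuming the coreutils `timeout` command is available and that `lmutil lmstat -c port@server -f feature` is the per-feature query. The script layout, the `LMUTIL` variable, and the argument order are illustrative assumptions, not an official LSF interface:

```shell
#!/bin/sh
# Hypothetical per-feature wrapper: given a server, a timeout, and a feature
# list, print the concatenated per-feature lmstat output, so the caller sees
# something similar to "lmstat -a" restricted to the requested features.

LMUTIL="${LMUTIL:-lmutil}"   # location of lmutil; override via environment

query_features() {
    server="$1"      # e.g. 27000@licsrv1
    limit="$2"       # per-query timeout in seconds
    shift 2
    for feat in "$@"; do
        # One lightweight query per feature instead of one expensive "-a"
        # walk; coreutils "timeout" guards against a hung license daemon.
        timeout "$limit" "$LMUTIL" lmstat -c "$server" -f "$feat"
    done
}

# Example (requires a reachable license server):
# query_features 27000@licsrv1 15 FEAT_A FEAT_B
```

blcollect would then be configured to invoke this script instead of lmutil directly. Note that the robustness concern raised above still applies: the wrapper author must track any changes to the lmstat output format.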
It's not something the development team has bandwidth for at present, but it could be implemented via Expert Labs.