Dear Team,
We have an IBM Analytics Engine (IAE) instance running in our PROD account. This IAE instance is key to our environment and runs all of our primary jobs.
Our ask: how can we collect the Spark and YARN metrics listed below using the Sysdig monitoring tool?
Spark metrics

Name | Description
spark.job.count | Number of jobs
spark.job.num_tasks | Number of tasks in the application
spark.job.num_active_tasks | Number of active tasks in the application
spark.job.num_skipped_tasks | Number of skipped tasks in the application
spark.job.num_failed_tasks | Number of failed tasks in the application
spark.job.num_completed_tasks | Number of completed tasks in the application
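For reference, the spark.job.* values above are exposed by the Spark application REST API (served by the driver UI, and by the Spark history server for finished applications), which is what a monitoring check would scrape. The Python sketch below is only an illustration, not a Sysdig integration: the base URL and port 4040 are assumptions, and summing the counters across jobs is a simplification.

    # Minimal sketch: read the spark.job.* values from the Spark REST API.
    # SPARK_API is an assumed driver-UI endpoint; adjust for your deployment.
    import requests

    SPARK_API = "http://localhost:4040/api/v1"  # assumption for illustration

    def collect_spark_job_metrics(app_id):
        """Aggregate per-job task counters into the metric names listed above."""
        jobs = requests.get(f"{SPARK_API}/applications/{app_id}/jobs", timeout=10).json()
        return {
            "spark.job.count": len(jobs),
            "spark.job.num_tasks": sum(j["numTasks"] for j in jobs),
            "spark.job.num_active_tasks": sum(j["numActiveTasks"] for j in jobs),
            "spark.job.num_skipped_tasks": sum(j["numSkippedTasks"] for j in jobs),
            "spark.job.num_failed_tasks": sum(j["numFailedTasks"] for j in jobs),
            "spark.job.num_completed_tasks": sum(j["numCompletedTasks"] for j in jobs),
        }

    if __name__ == "__main__":
        # List the known applications and print the job metrics for each one.
        for app in requests.get(f"{SPARK_API}/applications", timeout=10).json():
            print(app["id"], collect_spark_job_metrics(app["id"]))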
Yarn cluster metrics

Name | Description |
unhealthyNodes | Number of unhealthy nodes | Resource: Error
activeNodes | Number of currently active nodes | Resource: Availability
lostNodes | Number of lost nodes | Resource: Error
appsFailed | Number of failed applications | Work: Error
totalMB / allocatedMB | Total amount of memory / amount of memory allocated | Resource: Utilization
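The YARN values above map directly to fields in the ResourceManager REST API response at /ws/v1/cluster/metrics. The sketch below shows where they come from; it is illustrative only, and the ResourceManager host and port 8088 are assumptions.

    # Minimal sketch: read the listed YARN cluster metrics from the
    # ResourceManager REST API. RM_URL is an assumed address.
    import requests

    RM_URL = "http://resourcemanager.example.com:8088"  # assumption for illustration

    def collect_yarn_cluster_metrics():
        """Pick the fields listed above out of the clusterMetrics block."""
        cm = requests.get(f"{RM_URL}/ws/v1/cluster/metrics", timeout=10).json()["clusterMetrics"]
        return {
            "unhealthyNodes": cm["unhealthyNodes"],
            "activeNodes": cm["activeNodes"],
            "lostNodes": cm["lostNodes"],
            "appsFailed": cm["appsFailed"],
            "totalMB": cm["totalMB"],
            "allocatedMB": cm["allocatedMB"],
        }

    if __name__ == "__main__":
        for name, value in collect_yarn_cluster_metrics().items():
            print(f"{name}: {value}")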
Regards,
Vishvesh Jain
SRE @ Truata
Hi Vishvesh,
The current IAE plans, which are based on HDP, integrate with New Relic for internal monitoring and metrics. We do not have plans to integrate the current HDP-based packages with Sysdig. The new serverless Spark plan that we are working on will integrate with Sysdig. Tentative GA in Frankfurt is Q1 2020.
Hi Team,
Hope you guys are doing well!
Touching base on this one to see if there is a line of sight on when we can expect this feature to be available in IBM Cloud.
Regards,
Vishvesh Jain
SRE @ Truata