Feature enhancement to monitor object detection models.
OpenScale allows us to monitor the quality of machine learning models for classification, regression, and time-series tasks. There is a valuable opportunity to extend its capabilities to include monitoring and explainability for object detection models.
To this end, the proposed enhancement should encompass the following key features:
Object Detection Model Monitoring: Extend OpenScale's monitoring capabilities to accommodate object detection models. This involves capturing and analyzing metrics relevant to object detection, such as precision, recall, average precision, and Intersection over Union (IoU).

Drift Detection for Object Detection Models: Incorporate the ability to detect drift in object detection models over time. This entails tracking changes in model performance metrics as new data is processed (e.g., a drop in IoU), as well as changes in data consistency compared to training data characteristics (e.g., a drop in data consistency), ultimately enabling us to identify shifts in model behavior that might require investigation or intervention.

Prediction Explanations for Object Detection: Integrate a model explainability feature (such as the already supported LIME/SHAP algorithms, activation-mapping strategies, etc.) tailored to object detection predictions. This functionality should provide insight into why the model made specific object detection decisions, giving us the ability to understand the rationale behind its outputs.
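To make the monitoring request concrete, here is a minimal sketch (not an OpenScale API, just an illustration) of the IoU metric named above, computed between a predicted and a ground-truth bounding box in (x_min, y_min, x_max, y_max) form:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap
```

A drift monitor for object detection could, for example, alert when the rolling mean of this IoU against labeled feedback data drops below a configured threshold.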
Regarding the model interface: object detection models (YOLO, Faster R-CNN, RetinaNet, etc.) produce outputs with bounding boxes, one associated with each identified object, possibly with additional labels associated with each box (e.g., the type of the detected object) and the probability of each label. The position of each bounding box and its corresponding label are key for evaluating the model's predictive capabilities.
Example output from a fasterrcnn_resnet50_fpn model applied to one image:
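As an illustration only (the box coordinates, labels, and scores below are invented, not real model output), torchvision's fasterrcnn_resnet50_fpn returns, for each image, a dict with per-detection bounding boxes, class labels, and confidence scores in this shape:

```python
# Hypothetical per-image prediction in the fasterrcnn_resnet50_fpn output
# format: parallel lists of boxes, labels, and scores (one entry per detection).
prediction = {
    "boxes": [  # (x_min, y_min, x_max, y_max) in pixel coordinates
        [12.4, 30.1, 210.7, 418.9],
        [215.0, 45.3, 390.2, 400.0],
    ],
    "labels": [1, 18],       # COCO category ids (1 = person, 18 = dog)
    "scores": [0.98, 0.87],  # per-box confidence, highest first
}

for box, label, score in zip(
    prediction["boxes"], prediction["labels"], prediction["scores"]
):
    print(f"label={label} score={score:.2f} box={box}")
```

It is exactly these three parallel fields that an OpenScale monitor would need to ingest to compute precision, recall, average precision, and IoU for object detection payloads.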