IBM Data Platform Ideas Portal for Customers





Status Submitted
Workspace watsonx.ai
Created by Guest
Created on Jan 28, 2026

Provide flexible input parameters for AI service deployment

One of BMO's use cases is evaluating their RAG-based chatbot with watsonx.governance (wx.gov). Some of their evaluation metrics are out-of-the-box metrics offered by wx.gov, and some are implemented as custom metrics. The custom metric evaluations are implemented as a wx.ai Python function batch deployment. The main pain point with the current solution built by EL is the performance bottleneck created by implementing the custom metrics as a Python function. The wx.ai team's recommendation was to move away from Python functions and use an AI service, since it is built on a gevent + gunicorn + Flask stack, whereas Python functions inherently lack true parallelism: they run on a thread + gunicorn + Flask stack and are restricted by the Python GIL. We have hit a roadblock in our exploration of the AI service for this use case due to its strict input payload format requirements.
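For context, the concurrency difference mentioned above comes down to the gunicorn worker class: a gevent worker multiplexes many cooperative greenlets per process, so I/O-bound work (LLM API calls, DB reads) can overlap, while threaded workers serialize CPU-bound Python under the GIL. A minimal illustrative gunicorn config sketch (the actual wx.ai runtime configuration is managed by IBM; these values are placeholders):

```python
# gunicorn.conf.py -- illustrative only, not the wx.ai runtime's real config
worker_class = "gevent"   # cooperative greenlets instead of OS threads
workers = 4               # one process per core sidesteps the GIL for CPU-bound work
worker_connections = 100  # concurrent greenlets allowed per worker
timeout = 300             # long-running LLM-based metric evaluations need a generous timeout
```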

The current flow of the solution:

1. wx.gov invokes the Python function batch deployment REST endpoint to evaluate custom metrics with the input_data payload: {"input_data": [{"values": [payload]}]}.

2. The Python function reads the records from the wx.gov DB, runs the metric evaluations (LLM prompt-based metric calculation through wx.ai API calls), persists the results to the wx.gov DB, and returns the response as a predictions array: {"predictions": [{"fields": [], "values": []}]}.
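The current flow can be sketched as a deployable Python function following the wx.ai convention of an outer function returning a `score` closure. `fetch_records`, `evaluate_custom_metrics`, and `persist_results` are illustrative stubs standing in for the wx.gov DB access and the LLM-based metric calls:

```python
def deployable_custom_metrics():
    def fetch_records(record_ids):
        # stub: would read the evaluation records from the wx.gov DB
        return [{"record_id": rid} for rid in record_ids]

    def evaluate_custom_metrics(record):
        # stub: would run the prompt-based metric via wx.ai LLM API calls
        return {"record_id": record["record_id"], "faithfulness": 0.9}

    def persist_results(results):
        # stub: would write the metric values back to the wx.gov DB
        pass

    def score(payload):
        # payload arrives as {"input_data": [{"values": [...]}]}
        values = payload["input_data"][0]["values"]
        records = fetch_records(values)
        results = [evaluate_custom_metrics(r) for r in records]
        persist_results(results)
        # response must follow the fields/values predictions contract
        return {"predictions": [{
            "fields": ["record_id", "faithfulness"],
            "values": [[r["record_id"], r["faithfulness"]] for r in results],
        }]}

    return score
```

Each call to `score` handles one batch end to end, which is where the thread-based runtime becomes the bottleneck for many concurrent evaluations.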

Proposed flow: 

1. wx.gov invokes the AI service deployment to evaluate custom metrics with the input payload: {"input_data": [{"values": [payload]}]}.

2. The AI service reads the records from the wx.gov DB, runs the metric evaluations (LLM prompt-based metric calculation through wx.ai API calls), persists the results to the wx.gov DB, and returns the response.
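The proposed flow could look like the sketch below, assuming the wx.ai AI-service convention of an outer function returning a request handler that reads the body from the runtime context; the exact context API may differ, so treat this as illustrative. `evaluate` is a stub for the DB read, LLM-based metric calculation, and persistence:

```python
def deployable_metrics_ai_service(context, **kwargs):
    def evaluate(record_id):
        # stub: read the record from the wx.gov DB, compute the prompt-based
        # metric via wx.ai API calls, and persist the result
        return {"record_id": record_id, "faithfulness": 0.9}

    def generate(context):
        # assumption: the runtime exposes the request body via context.get_json()
        payload = context.get_json()
        values = payload["input_data"][0]["values"]
        results = [evaluate(v) for v in values]
        return {"body": {"results": results}}

    return generate
```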

Roadblock:

1. The AI service requires the input payload in a specific format (as input_data_references), which is incompatible with the input_data payload from wx.gov.
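The mismatch can be seen by putting the two payload shapes side by side: the inline payload wx.gov sends today versus a reference-based batch-job payload. The connection fields shown are placeholders; in this use case there is no data connection to point them at, and there is no slot for inline values:

```python
# Payload wx.gov sends today (inline values):
wx_gov_payload = {
    "input_data": [{"values": ["record-1", "record-2"]}]
}

# Shape of a reference-based batch-job payload (field values illustrative):
# it describes where to *fetch* input from, with no place for inline values
# or custom parameters.
ai_service_job_payload = {
    "input_data_references": [{
        "type": "connection_asset",          # placeholder connection type
        "connection": {"id": "<asset-id>"},  # no such connection exists in this use case
        "location": {"file_name": "input.json"},
    }]
}
```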

Our workaround to address this: 

As a workaround, the idea is to create an additional Python function deployment that wx.gov invokes with the input_data payload. This function parses the payload, makes it AI service compatible, and then creates a job for the AI service deployment.
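A sketch of that shim, with `create_job` standing in for the wx.ai deployment-job API, shows exactly where the translation breaks down: once the inline values are extracted, the job payload has no sanctioned field to carry them, because input_data_references only describes external data sources. The "custom" field below is hypothetical, not something the API accepts today:

```python
def deployable_shim(create_job):
    # create_job is an injected stub for the wx.ai deployment-job API call
    def score(payload):
        values = payload["input_data"][0]["values"]
        job_payload = {
            "input_data_references": [],   # no data connection exists in this use case
            "custom": {"values": values},  # hypothetical field -- not accepted today
        }
        job_id = create_job(job_payload)
        return {"predictions": [{"fields": ["job_id"], "values": [[job_id]]}]}
    return score
```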

Issue: We are not able to send any additional/custom parameters in the input_data_references payload. Furthermore, in our use case there are no data connection parameters involved, so input_data_references is not relevant. We need more flexibility in the parameters that can be passed to the AI service job.
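To make the ask concrete, a hypothetical job payload with a free-form parameters block (the field name "scoring_parameters" is invented for illustration; any equivalent pass-through mechanism would satisfy this idea) could look like:

```python
# Hypothetical job payload illustrating the requested flexibility:
# a free-form block passed through to the AI service as-is, with
# input_data_references optional when no data connection is involved.
desired_job_payload = {
    "scoring_parameters": {                # invented field name
        "input_data": [{"values": ["record-1", "record-2"]}],
    }
}
```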

 

Please refer to this github issue for more information on the discussion with wx.ai dev team - https://github.ibm.com/NGP-TWC/ml-planning/issues/58164

Needed By Month