IBM Data Platform Ideas Portal for Customers





Status: Planned for future release
Workspace: watsonx.governance
Created by: Guest
Created on: May 21, 2025

Onboard custom real time detector on watsonx.ai+gov

A user should be able to configure a custom guardrail / real-time detector on a Prompt Template (PTA) deployment.

 

Context

A client builds generative AI solutions and enforces governance. As a consequence:

  • The only way to consume a model in production is through a deployed Prompt Template (PTA)
  • The Prompt Template is tracked in an AI use case
  • The Prompt Template has been evaluated with multiple tests (OOB metrics from x.gov, custom metrics, LLM-as-a-Judge, human validation)
  • Monitoring is activated on the Prompt Template

 

Requirements

Now that the prompt template is in production, we want to add an additional line of defence by executing tests (a.k.a. guardrails / real-time detectors) in real time on the PTA input and/or output (priority is on the output). The outcome of a test could be one of the following:

Blocking behavior

  • If the test passes, or its score is above a certain threshold, the output is sent back to the consuming application
  • If the test fails, or its score is below the threshold, the output is blocked and an error is sent back to the consuming application
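
The blocking behaviour above can be sketched as a simple threshold check. This is an illustrative sketch only: `run_detector` is a hypothetical stand-in for the real-time detector (which could be code, a regex, an SLM, or an LLM), and the field names are assumptions, not an existing watsonx API.

```python
def run_detector(text: str) -> float:
    # Placeholder detector: always returns a fixed confidence score.
    # A real detector could be code, a regex, an SLM, or an LLM.
    return 0.95

def guarded_output(generated_text: str, threshold: float = 0.8) -> dict:
    """Apply the blocking behaviour: forward the output if the detector
    score clears the threshold, otherwise block it and return an error."""
    score = run_detector(generated_text)
    if score >= threshold:
        # Test OK: the output is sent back to the consuming application
        return {"generated_text": generated_text, "detector_score": score}
    # Test KO: the output is blocked and an error is returned instead
    return {"error": "output blocked by real-time detector",
            "detector_score": score}
```

With the placeholder score of 0.95, a default threshold of 0.8 lets the output through, while a stricter threshold of 0.99 blocks it.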

Non-blocking behaviour --> priority

  • The output of the test is described in JSON format and appended to the body of the response (like PII/HAP moderation today). It is then the responsibility of the consuming application to decide what to do.
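
A response body for the non-blocking behaviour might look like the following sketch. All field names (`moderations`, `brand_tone_detector`, etc.) are illustrative assumptions loosely modelled on how PII/HAP moderation results are appended today, not a documented watsonx schema.

```python
# Hypothetical response body: the generated text, with the detector
# verdict appended so the consuming application can decide what to do.
response = {
    "results": [{"generated_text": "Hello, how can I help you today?"}],
    "moderations": {
        "brand_tone_detector": [  # hypothetical detector name
            {"score": 0.92, "threshold": 0.8, "passed": True}
        ]
    },
}

# The consuming application inspects the appended verdict and decides
# on its own handling (fallback answer, logging, escalation, ...).
verdict = response["moderations"]["brand_tone_detector"][0]
handle_with_fallback = not verdict["passed"]
```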

Since a guardrail can be code-, regex-, SLM-, or LLM-based, the simplest approach would be to deploy the guardrail as a Python function with a standardized interface contract. We would then run a few configuration steps to apply the guardrail to a particular deployed PTA. The end-to-end flow to build and deploy a PTA would be as follows:

  1. Experiment on the PTA
  2. Create a new test to detect something on the PTA (e.g. build a model that detects whether the model output respects the brand tone and voice)
  3. Use evaluation data to validate that the test performs well enough
  4. Deploy the test as a Python function and register it as a custom metric
  5. Use this custom metric in the evaluation phase to validate the PTA
  6. Track, evaluate again, and deploy the PTA
  7. Configure the PTA deployment to have the test executed in real time and its outcome included in the response body --> here the test used during the evaluation phase becomes a guardrail
  8. The guardrail output must be logged in OpenScale with the payload data
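
The "standardized interface contract" for such a Python-function guardrail could look like the sketch below, so that the same function serves as a custom metric during evaluation (steps 4-5) and as a real-time detector in production (step 7). The signature, detector name, and returned fields are assumptions for illustration, not an existing watsonx.governance API.

```python
from typing import Optional

def detect(input_text: Optional[str], output_text: Optional[str]) -> dict:
    """Score a PTA input and/or output (priority is on the output).

    Returns a JSON-serializable verdict that can be logged as a custom
    metric during evaluation or appended to the response body at runtime.
    """
    text = output_text if output_text is not None else (input_text or "")
    # Placeholder scoring logic: a real detector could wrap code, a
    # regex, an SLM, or an LLM-as-a-judge call.
    score = 0.0 if "forbidden" in text.lower() else 1.0
    return {
        "detector": "brand_tone_and_voice",  # hypothetical detector name
        "score": score,
        "passed": score >= 0.8,
    }
```

Keeping the contract this small (text in, JSON verdict out) is what would let the platform reuse one deployed function across the evaluation phase, real-time detection, and OpenScale payload logging.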
Needed By: Week
  • Admin
    Upasana Bhattacharya
    Jun 19, 2025

    @Guest Hi Thomas - one key question from this -- putting aside the solutioning and implementation in the idea above - what are the categories for which Credit Mutuel needs real-time detection?

  • Guest
    May 26, 2025

    Hello,
    - The solution design should allow deploying the custom real-time detector wherever x.gov is deployed. The idea would be to centralise all real-time detectors in x.gov.

    - Why think through the whole PTA lifecycle? The first step is to evaluate with an evaluation dataset. Evaluation results must be stored in OpenScale/OpenPages because they contribute to some sort of lineage for the PTA development. At some point, we may have to prove to the regulator that we performed sufficient testing before putting AI assets in production. The thinking here is that it is highly probable that a custom real-time detector is also a test performed during the evaluation phase. Note that ideally each test should be versioned...

    - What is unclear?