IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



watsonx.ai


Asynchronous API and prioritisation rules vs. synchronous requests

Due to the severe shortage of GPUs and the very high price of this hardware, clients want to optimize generative AI queries by: maximising GPU utilisation over time; maintaining the best quality of service, in particular latency and token t...
1 day ago in watsonx.ai 0 Submitted
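The prioritisation this idea asks for can be illustrated with a small scheduling sketch (illustrative only; the queue class and priority levels are hypothetical and not part of any watsonx.ai API): synchronous, latency-sensitive requests are served before asynchronous batch jobs, so the GPU stays saturated without degrading interactive quality of service.

```python
import heapq
import itertools

# Hypothetical priority levels: lower value = served first.
SYNC, ASYNC = 0, 1

class InferenceQueue:
    """Single queue where synchronous requests jump ahead of async batch jobs."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a priority level

    def submit(self, request, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def next_request(self):
        """Return the highest-priority request, or None when the queue is idle."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

queue = InferenceQueue()
queue.submit("batch-summarise-docs", ASYNC)
queue.submit("chat-user-42", SYNC)
queue.submit("batch-embed-corpus", ASYNC)

# The synchronous chat request is dequeued first; async jobs keep FIFO order.
order = [queue.next_request() for _ in range(3)]
```

A production scheduler would additionally preempt or pause long-running async generations mid-stream, but the dequeue ordering above is the core of the requested behaviour.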

Allow to use the Prompt Tuning feature from the Tuning Studio with any model

Model evolution is very fast. The client wants to leverage the latest models as soon as they are released. Being able to prompt tune them is key to clients since: - it is a quick win to improve prompt engineering - it does not require a lot of time and...
27 days ago in watsonx.ai 0 Submitted

Support for mixed MIG strategy in a single GPU node

Clients have heterogeneous GPU infrastructure. They are using A30 (in dev environments only), A100 and H100. They have GPU servers with 8 x A100 GPUs each. These GPU servers are bare-metal worker nodes in their OpenShift clusters. Since they u...
27 days ago in watsonx.ai 1 Submitted
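For context, the mixed strategy this idea refers to is configured through the NVIDIA GPU Operator, where mig.strategy: mixed advertises each MIG profile as its own extended resource. A minimal sketch of the relevant excerpt (the label value all-balanced is one of the layouts shipped in the operator's default mig-parted config; the actual partition layout depends on the cluster):

```yaml
# ClusterPolicy excerpt for the NVIDIA GPU Operator (sketch, not a full spec)
apiVersion: nvidia.com/v1
kind: ClusterPolicy
metadata:
  name: gpu-cluster-policy
spec:
  mig:
    strategy: mixed   # advertise each MIG profile as a distinct resource type
```

Individual nodes are then labelled to select a partition layout, e.g. oc label node <node> nvidia.com/mig.config=all-balanced, after which pods can request specific slices such as nvidia.com/mig-3g.20gb. The idea above asks for this mixed layout to be supported within a single watsonx.ai GPU node.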
158 VOTE

Regarding Mistral Large, I would like the context size to be usable up to the model's limit (128K) in order to implement RAG & agentic applications.

There are the following limitations when using Mistral Large via watsonx.ai: Mistral Large model limit: 128K; watsonx.ai limit: 32K (https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=wx#mistral-large). Currently,...
about 1 month ago in watsonx.ai 1
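Until the platform limit is raised, a common client-side workaround for RAG is to budget retrieved passages against the smaller window. A minimal sketch (whitespace splitting stands in for the model's real tokenizer; the 32K figure comes from the docs linked above, and all function and parameter names here are hypothetical):

```python
def fit_passages(passages, prompt, limit_tokens=32_000, reserve_for_output=2_000):
    """Greedily keep retrieved passages that fit the model's context window.

    Token counts are approximated by whitespace splitting; a real
    implementation would count tokens with the target model's tokenizer.
    Passages are assumed to be sorted by retrieval relevance.
    """
    n_tokens = lambda text: len(text.split())
    budget = limit_tokens - reserve_for_output - n_tokens(prompt)
    kept = []
    for passage in passages:
        cost = n_tokens(passage)
        if cost <= budget:
            kept.append(passage)
            budget -= cost
    return kept

# Tiny example with an artificially small window:
kept = fit_passages(["a b c", "d e f g", "h"], "q",
                    limit_tokens=10, reserve_for_output=2)
```

With a genuine 128K window, as this idea requests, far less retrieved context would need to be dropped.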

Custom Image Approach to Integrate Advanced IDEs such as PyCharm or Visual Studio Code (VSCode) in Watsonx/CP4D

In the Graphical User Interface (GUI) of Watsonx/CP4D, Python development is currently facilitated through Watson Studio Notebook and JupyterLab Integrated Development Environment (IDE). While these tools are sufficient for many users, more techni...
about 1 month ago in watsonx.ai 0 Submitted

Support VMware MIGs for foundational model serving

The documentation states that MIG is supported for foundational model serving in watsonx. However, this refers only to a scenario where MIG is enabled within OpenShift, with OpenShift itself receiving passthrough GPUs. Our current infrastructure d...
about 1 month ago in watsonx.ai 0 Submitted

Ability to configure a global setting, config, or spec that ensures HTTP logging (and formatting) is always applied to nginx deployments within watsonx.ai.

While troubleshooting an issue with not being able to navigate to watsonx Orchestrate from within the watsonx.ai GUI, we enabled HTTP access logs in the default ingress controller pods in OpenShift. Reviewing those logs, we can see the ingre...
about 1 month ago in watsonx.ai 0 Submitted

Add ability to modify the default, or create custom, deployment configurations for pods that are created when a deployment is pushed from the watsonx.ai UI

Add the ability to modify the default, or create custom, deployment configuration specs for pods that are created when a deployment is pushed from the watsonx.ai UI. For example, when troubleshooting issues with deployments, or assets that are deplo...
about 2 months ago in watsonx.ai 0 Submitted

Monitoring Data for Online Models - CP4D

Hello, we are requesting info on online deployments. At the moment, we are able to get logs via nginx pods for the requests that come through a model, but more info would make a difference for customers. For example, there could be a monitoring ...
2 months ago in watsonx.ai 0 Submitted

Top 2 South African Language Support in granite

While watsonx currently supports several international languages, Afrikaans and Zulu, spoken in the South African context, have unique linguistic attributes that are not yet fully supported. These languages are essential for wider adoption in corporate and g...
2 months ago in watsonx.ai 0 Under review