This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Search existing ideas
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Specific links you will want to bookmark for future use
Allow custom hardware specifications for Online Deployments such as for a Shiny app
We have users who need different hardware configurations when deploying an R Shiny application in their deployment space. They need a large amount of memory but only 2 CPUs. We don't want them to deploy the application with a high amount...
Prevent user from creating custom environments with the same name.
Currently you can create a custom environment that has exactly the same name as another. On the environment page you can differentiate between two identically named environment definitions only by using the hardware configuration, language, or last modif...
Various machine learning backends benefit greatly by running with GPU support. We can already run Keras with a TensorFlow backend on DSx but it's CPU only which makes it unusably slow for many situations. It would be great to allow DSx to take adv...
Often clients develop their own Python packages and libraries. Not every client has a binary repository manager such as Artifactory. Clients can add their own libraries in WML using package extensions. Currently those can only be created using the w...
Python functions can run in an online deployment, but as a data scientist I can't see the container/pod logging of this function. The logging of the runs of the containers can be viewed by looking through administration in the logging of the pods (P...
When collecting information about current training runs, it happens that tasks are shown as "running" while actually still waiting for a GPU (see Support Ticket CS2171712). It is hard for a developer to know what state the task is actually in (running or w...
Allow for project variables which can be used (among other things) for security.
Allowing login credentials to be stored directly in a notebook is a security flaw, i.e.
We should allow for project variables ...
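As a rough illustration of what project variables could enable, the sketch below reads credentials from environment variables instead of hardcoding them in a notebook cell. The variable names (DB_USER, DB_PASSWORD) and the connection-string format are hypothetical placeholders, not part of any existing product API.

```python
import os

# Hypothetical example: read credentials from environment variables
# (a stand-in for the proposed project variables) so that no login
# data appears as a literal in the notebook itself.
db_user = os.environ.get("DB_USER", "")
db_password = os.environ.get("DB_PASSWORD", "")

def build_connection_string(user: str, password: str,
                            host: str = "db.example.com") -> str:
    """Assemble a connection string without embedding secrets in code."""
    return f"postgresql://{user}:{password}@{host}:5432/mydb"
```

With project variables, the values behind DB_USER and DB_PASSWORD could be managed centrally per project rather than set by each user.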
A large CPD customer from Taiwan has about 2,000 GPU nodes that use Slurm to run Kubernetes for AI jobs. They plan to build CP4D on another standalone cluster. They want CP4D to utilize the GPU cluster without breaking the existing mechanism.
Classification models can usually predict the probabilities per class. One probability number per known class. The model metadata in WML does not include the list of class labels seen in training. This makes it difficult for a scoring application ...
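To show why the class-label list matters, here is a minimal sketch of what a scoring application has to do today: the label list must be supplied manually and kept in sync with the training order by hand, since it is not in the model metadata. The function name and sample values are illustrative only.

```python
# Hypothetical scoring helper: pair a model's per-class probability
# vector with a manually maintained class-label list. If the labels
# were stored in the model metadata, this list would not need to be
# hand-curated and kept in training order by the application.
def label_probabilities(probabilities, class_labels):
    """Map each probability to its class label and pick the top class."""
    if len(probabilities) != len(class_labels):
        raise ValueError("probability vector and label list must align")
    scored = dict(zip(class_labels, probabilities))
    top_class = max(scored, key=scored.get)
    return scored, top_class

scored, top_class = label_probabilities([0.1, 0.7, 0.2],
                                        ["cat", "dog", "bird"])
# top_class is "dog"; scored maps each label to its probability
```

If the label order drifts from the order the model saw in training, every prediction is silently mislabeled, which is exactly the risk this request aims to remove.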
Do not place IBM confidential, company confidential, or personal information into any field.