Use this portal to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Search existing ideas
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Specific links you will want to bookmark for future use
Dask support on Conductor based on OpenCE conda packages for Power
Conductor 2.5.0 brings Dask to light. In order to use Dask, one needs to create a conda environment for it. The current release of Conductor builds that conda environment leveraging WML-CE 1.7.0, a project which was sunset in 2020. It should be leve...
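As a rough sketch of the kind of environment creation involved, the commands below build a Dask conda environment from Open-CE channels rather than WML-CE. The channel URL, environment name, and package versions are illustrative assumptions, not a confirmed Conductor recipe:

```shell
# Hypothetical sketch: create a Dask conda environment for ppc64le using
# Open-CE conda packages instead of the sunset WML-CE 1.7.0 channel.
# Channel URL, env name, and versions are assumptions for illustration.
conda create -n dask-env \
    -c https://opence.mit.edu -c defaults \
    python=3.9 dask distributed

# Activate the environment before registering it with Conductor.
conda activate dask-env
```

In practice the environment would then be registered as a Spark/Dask conda environment inside Spectrum Conductor so instance groups can pick it up.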
In the provided ticket (TS005406891) we are looking for the ability to configure the Nvidia Compute Mode via a Spectrum Conductor setting/config. This would make managing the setting simpler via a config change versus requiring each customer to im...
We are in immediate need of the latest pyarrow version for ppc64le, and we are looking for v2.0. Also, we need a conda package (not pip) and would appreciate if this could be made available to us at the earliest. Having said this would like to follow up ...
Add slot demand as part of the Spark application metrics pushed out to Elasticsearch
We would like to have slot demand included as part of the Spark application metrics that are currently pushed out to Elasticsearch, which would help us compare slot demand versus slots provisioned for the Spark jobs submitted by our users.
Requesting support for application-level performance metrics for Dask jobs to be loaded into ELK. Currently, we get these metrics for Spark jobs and would like to get the same type of information for Dask jobs as well in 2.5.
We would like to use the pytest framework to run test cases that contain Python notebooks, some of which use Spark and H2O. We would like to understand whether you are aware how to make this work using Conductor.
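One common way to drive notebooks through pytest is the nbval plugin, which executes each notebook cell and compares outputs. The sketch below assumes nbval is installed into the same conda environment that carries the Spark and H2O dependencies, and that the notebooks live under a directory named notebooks/; both are assumptions for illustration:

```shell
# Hypothetical sketch: run Jupyter notebooks as pytest test cases via the
# nbval plugin. The notebooks/ path is illustrative; Spark/H2O notebooks
# would need their kernel to resolve those libraries in this environment.
pip install pytest nbval

# --nbval re-executes every cell and fails the test if outputs differ;
# use --nbval-lax to only check cells explicitly marked for validation.
pytest --nbval notebooks/
```

Whether this works unchanged under Conductor depends on how the Spark context is provisioned for the notebook kernel, which is the open question in this idea.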
Do not place IBM confidential, company confidential, or personal information into any field.