IBM Data and AI Ideas Portal for Customers


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea

Help IBM prioritize your ideas and requests

The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team will then decide whether they can begin working on your idea. If they can start during the next development cycle, they will put the idea on the priority list. Each team at IBM works on a different schedule: some ideas can be implemented right away, while others may be scheduled for a later cycle.

Receive notification on the decision

Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas which cannot be implemented in a reasonable time.

Additional Information

To view our roadmaps: http://ibm.biz/Data-and-AI-Roadmaps

Reminder: This is not the place to submit defects or support needs. Please use the normal support channels for those cases.

IBM Employees:

The correct URL for entering your ideas is: https://hybridcloudunit-internal.ideas.aha.io




Speech Services

Showing 19 of 12278

Ability to move fully trained STT models between separate instances (e.g. DEV vs. PROD or separate accounts)

For best-practice reasons, customers keep environments separate (DEV vs. PROD) for security, performance, testing & availability of the model. Training STT models can take hours. Not utilizing fully trained models results in delay...
about 3 years ago in Speech Services 4 Not under consideration

Improve API response structure for timestamps and word_confidence within STT SpeechRecognitionAlternative model

As part of the response from making a POST to the v1/recognize endpoint in the Speech to Text service, the user receives an array of "alternatives". Within these "alternatives" objects, there are two arrays called "word_confidence" and "transcrip...
over 3 years ago in Speech Services 0 Not under consideration
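
For context on the structure being discussed, here is a minimal sketch of a /v1/recognize response with the timestamps and word_confidence query parameters enabled; the values are invented and the structure is abridged, and the loop at the end is just one way a caller can join the two parallel arrays today.

```python
# Illustrative, abridged shape of a Speech to Text /v1/recognize
# response when timestamps=true and word_confidence=true.
# All values below are made up; only the structure matters.
sample_response = {
    "result_index": 0,
    "results": [
        {
            "final": True,
            "alternatives": [
                {
                    "transcript": "hello world ",
                    "confidence": 0.94,
                    # each entry: [word, start time (s), end time (s)]
                    "timestamps": [["hello", 0.0, 0.4], ["world", 0.4, 0.9]],
                    # each entry: [word, confidence]
                    "word_confidence": [["hello", 0.97], ["world", 0.91]],
                }
            ],
        }
    ],
}

# Joining the two parallel arrays is currently left to the caller.
alt = sample_response["results"][0]["alternatives"][0]
for (word, start, end), (_, conf) in zip(alt["timestamps"], alt["word_confidence"]):
    print(f"{word}: {start:.1f}-{end:.1f}s, confidence {conf:.2f}")
```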

Request for IBM Bluemix Watson Text-to-Speech to support Mandarin and Cantonese

The IBM Bluemix Watson Text-to-Speech service should support Mandarin and Cantonese. IBM Bluemix Watson Speech-to-Text supports Mandarin, so it's logical that Text-to-Speech support Mandarin as well. Meanwhile, there is a niche market in STT/TTS for Cantonese, which ...
over 4 years ago in Speech Services 3 Functionality already exists

Punctuation Needed for IBM Speech to Text Service

I have a customer that is a daily magazine on the web as well as a podcast network. They offer analysis and commentary about politics, news, business, technology, and culture. They are considering our IBM Speech to Text (STT) solution to provide a...
11 months ago in Speech Services 0 Planned for future release

Phoneme timings in Text to Speech service

We'd like to use the Text to Speech service to control an animatronic. The animatronic has a mouth and needs to move its lips and jaw as it speaks, and Amazon had phoneme and viseme support, which is what we were using. However, we're swi...
over 2 years ago in Speech Services 2 Planned for future release

Add m4a speech to text support

This is a great format for content-creation apps on iOS, so why not support it? I have to work around it by converting to WAV or MP3, which adds more waiting time.
over 2 years ago in Speech Services 0 Not under consideration
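
A rough sketch of the conversion workaround the submitter describes, assuming ffmpeg is installed and on the PATH; the file names and audio settings below are illustrative.

```python
import subprocess

def m4a_to_wav(src: str, dst: str) -> None:
    """Convert an .m4a recording to 16 kHz mono WAV with ffmpeg.

    This is the extra conversion step the idea above wants to avoid.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ac", "1", "-ar", "16000", dst],
        check=True,
    )

m4a_to_wav("note.m4a", "note.wav")  # then send note.wav to the STT service
```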

Utterance segmentation made sensitive to speaker identity (features)

Utterance segmentation appears to be entirely independent of (the features used for) speaker labeling. Specifically, it was noticed that even though the speaker labeling correctly identifies that a new speaker (very clear because it goes from mal...
over 2 years ago in Speech Services 1 Functionality already exists

Automatic Voice Model detection

Automatically detect the voice model to improve transcription in use cases where multiple speakers have different accents (e.g. US and UK on the same line) - similar to language detection in Watson Assistant.
over 3 years ago in Speech Services 1 Not under consideration

Provide phoneme and word times for TTS

Useful for highlighting words as they play or driving avatar speech
about 5 years ago in Speech Services 0 Future consideration

Allow metadata to be saved with custom language models

We need to create and manage custom language models for a potentially large number of different clients. We would like to be able to save metadata with a custom language model that would help us to identify things like what client account the custo...
5 months ago in Speech Services 0 Planned for future release
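
Until first-class metadata fields exist, one possible stopgap (a sketch only, not an IBM-provided mechanism) is to JSON-encode a small amount of client information into the free-text description supplied when the custom language model is created; the credentials, URL, model name, and metadata keys below are placeholders.

```python
import json
import requests

APIKEY = "your-apikey"                 # placeholder credential
URL = "https://your-stt-instance-url"  # placeholder service URL

# Workaround sketch: pack client metadata into the model's
# free-text description at creation time.
metadata = {"client_account": "acme-corp", "environment": "dev"}
resp = requests.post(
    f"{URL}/v1/customizations",
    auth=("apikey", APIKEY),
    json={
        "name": "acme-dev-model",
        "base_model_name": "en-US_BroadbandModel",
        "description": json.dumps(metadata),
    },
)
resp.raise_for_status()
print(resp.json()["customization_id"])
```

Listing the models later (GET /v1/customizations) returns the description, so the metadata can be parsed back out, but this is clearly a workaround rather than the queryable metadata the idea asks for.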