We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team will then decide whether they can begin working on your idea. If they can start during the next development cycle, they will put the idea on the priority list. Each team at IBM works on its own schedule, so some ideas can be implemented right away, while others may be placed on a later release.
Receive notification on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas that cannot be implemented in a reasonable time.
Allow metadata to be saved with custom language models
We need to create and manage custom language models for a potentially large number of different clients. We would like to be able to save metadata with a custom language model that would help us to identify things like what client account the cust...
When we did the project "spaceships with opinions" the IBM Watson tts-demo site was a great help. So I feel I have a responsibility to tell you how I think the new site is failing. https://www.ibm.com/demos/live/tts-demo/self-service/home It works...
Currently I can submit a word or phrase with an audio file and receive back the time code of the word or phrase. We need the time codes of the periods, question marks, and exclamation marks. In other words, we want to upload a transcript with the ...
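Today the service returns per-word time codes but not punctuation time codes, which is what this idea requests. A minimal sketch of working with the per-word `timestamps` array in a Speech-to-Text-style JSON response, assuming a simplified sample response (the `SAMPLE_RESPONSE` data and the `find_phrase_times` helper are illustrative, not part of the Watson API):

```python
# Sketch: extracting the time code for a word or phrase from a
# Watson Speech-to-Text-style response. The sample below is illustrative;
# real responses are returned when recognition is requested with
# timestamps enabled. Note that punctuation marks do not appear as
# timestamped tokens, which is why this idea asks for their time codes.

SAMPLE_RESPONSE = {
    "results": [{
        "alternatives": [{
            "transcript": "hello world how are you ",
            # Each entry is [token, start_seconds, end_seconds]
            "timestamps": [
                ["hello", 0.0, 0.4],
                ["world", 0.4, 0.9],
                ["how", 1.2, 1.4],
                ["are", 1.4, 1.6],
                ["you", 1.6, 1.9],
            ],
        }]
    }]
}

def find_phrase_times(response, phrase):
    """Return (start, end) seconds for the first match of phrase, else None."""
    words = phrase.lower().split()
    for result in response["results"]:
        stamps = result["alternatives"][0]["timestamps"]
        tokens = [entry[0].lower() for entry in stamps]
        for i in range(len(tokens) - len(words) + 1):
            if tokens[i:i + len(words)] == words:
                # Start of the first matched token, end of the last one.
                return stamps[i][1], stamps[i + len(words) - 1][2]
    return None

print(find_phrase_times(SAMPLE_RESPONSE, "how are"))  # (1.2, 1.6)
```

Because sentence-final punctuation never appears in the `timestamps` array, a caller currently has no way to recover the time code of a period or question mark from the response alone.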
Utterance segmentation made sensitive to speaker identity (features)
Utterance segmentation appears to be entirely independent of (the features used for) speaker labeling. Specifically, it was noticed that even though the speaker labeling correctly identifies that a new speaker (very clear because it goes from male...
Add flag to instances to indicate whether they have customizations
When building a UI for Phone integration -- the reasons to select a particular STT or Voice instance over another are the plan and whether it has customizations available. At this time, when a user selects a speech instance, you must do several...
Automatically detect the voice model to improve transcription in use cases where multiple speakers have different accents (e.g. US and UK on the same line) - similar to language detection in Watson Assistant.
Retrieve sound file previously streamed to Watson Speech-to-Text
When working on improving our product, we'd like to capture field failures that could be used to train our custom speech-to-text model. We have a wake word (handled locally) and then we establish a connection to IBM Watson's Speech-to-Text service...
India is the 4th largest economy in the world with a population of 1.3 billion. Over 50% of the population speak & communicate in Hindi. While Watson is well regarded in the market, the lack of Hindi language support is hindering sales and whi...
Do not place IBM confidential, company confidential, or personal information into any field.