This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, to create and manage groups of ideas, or to create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Search existing ideas
Start by searching and reviewing existing ideas and requests to enhance a product or service. Add a comment, vote, or subscribe to updates on the ideas that matter to you. If you can't find what you are looking for, post a new idea.
Post your ideas
Post ideas and requests to enhance a product or service:
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Specific links you will want to bookmark for future use
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
IBM Employees should enter Ideas at https://ideas.ibm.com
I have a major call center use case that requires text transcription of two distinct voices: a call center representative and a customer. I need to group the transcribed text by each of these two people. The speaker_labels feature returns a list of time ranges, identifying the word in each time range as belonging to a particular speaker. To aggregate the words into sentences and paragraphs, the service consumer must pick out words by timestamp and reconstruct the text for each speaker. This is a clumsy and error-prone task for the consumer. The service should instead provide blocks of text grouped by speaker, eliminating this burden on the caller. This could happen in a couple of different ways:
1) label each set of words from a specific speaker, sort of like reading a movie script OR
2) list all the text for one speaker, then all the text for the next speaker, etc.
Either 1) or 2) would be an improvement over the way speaker_labels output is currently provided.
Since speaker_labels is still in beta, this would be an opportune time in the lifecycle of that feature to implement this improvement. I am more than willing to participate in the testing of an improved speaker_labels feature.
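To illustrate the reconstruction burden described above, here is a minimal sketch of option 1), the "movie script" grouping, done on the consumer side. The sample response below is fabricated for illustration, but follows the general shape of a recognition response with `timestamps` and `speaker_labels`; a real response also carries confidence scores and may interleave speakers within a result.

```python
# Sketch: group transcribed words by speaker, "movie script" style,
# by joining each word's start time to its speaker_labels entry.
# The response data here is fabricated for illustration only.

response = {
    "results": [{
        "alternatives": [{
            "transcript": "hello how can I help you ",
            "timestamps": [
                ["hello", 0.0, 0.4], ["how", 0.5, 0.7], ["can", 0.7, 0.9],
                ["I", 0.9, 1.0], ["help", 1.0, 1.3], ["you", 1.3, 1.5],
            ],
        }],
    }, {
        "alternatives": [{
            "transcript": "I have a billing question ",
            "timestamps": [
                ["I", 2.0, 2.1], ["have", 2.1, 2.3], ["a", 2.3, 2.4],
                ["billing", 2.4, 2.8], ["question", 2.8, 3.3],
            ],
        }],
    }],
    "speaker_labels": [
        {"from": 0.0, "to": 0.4, "speaker": 0},
        {"from": 0.5, "to": 0.7, "speaker": 0},
        {"from": 0.7, "to": 0.9, "speaker": 0},
        {"from": 0.9, "to": 1.0, "speaker": 0},
        {"from": 1.0, "to": 1.3, "speaker": 0},
        {"from": 1.3, "to": 1.5, "speaker": 0},
        {"from": 2.0, "to": 2.1, "speaker": 1},
        {"from": 2.1, "to": 2.3, "speaker": 1},
        {"from": 2.3, "to": 2.4, "speaker": 1},
        {"from": 2.4, "to": 2.8, "speaker": 1},
        {"from": 2.8, "to": 3.3, "speaker": 1},
    ],
}

def group_by_speaker(response):
    """Return a list of (speaker, text) turns, merging consecutive
    words from the same speaker into one block of text."""
    # Map each word's start time to its speaker label.
    speaker_at = {lbl["from"]: lbl["speaker"]
                  for lbl in response["speaker_labels"]}
    turns = []  # list of [speaker, [words]]
    for result in response["results"]:
        for word, start, _end in result["alternatives"][0]["timestamps"]:
            speaker = speaker_at.get(start)
            if turns and turns[-1][0] == speaker:
                turns[-1][1].append(word)   # same speaker: extend turn
            else:
                turns.append([speaker, [word]])  # new speaker: new turn
    return [(spk, " ".join(words)) for spk, words in turns]

for speaker, text in group_by_speaker(response):
    print(f"Speaker {speaker}: {text}")
```

Every consumer of speaker_labels currently has to write and maintain some variant of this timestamp join; having the service emit the grouped form directly would remove that duplicated effort.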
Thank you,
Mark Discenza
IBM Watson Health Implementations
See this Stack Overflow answer for a workaround: https://stackoverflow.com/questions/50900340/speech-to-text-map-speaker-label-to-corresponding-transcript-in-json-response