IBM Data and AI Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



StreamSets

Showing 74

Snowflake support in JDBC Query Consumer

There are many cases where customers are utilizing Snowflake as one of their data sinks. This increased utilization also creates situations where the customer needs to pull data from Snowflake, utilizing a custom query which could join ta...
2 months ago in StreamSets 1 Not under consideration
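
For context, a hedged sketch of what this might look like today through the generic JDBC Query Consumer with the Snowflake JDBC driver (account, warehouse, table, and column names below are illustrative, not from the idea):

  JDBC connection string:
    jdbc:snowflake://<account_identifier>.snowflakecomputing.com/?db=SALES_DB&schema=PUBLIC&warehouse=LOAD_WH

  SQL query (incremental mode, custom join):
    SELECT o.order_id, o.amount, c.customer_name
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_id > ${OFFSET}
    ORDER BY o.order_id

The request is for this kind of Snowflake setup to be officially supported in the JDBC Query Consumer rather than treated as a generic JDBC workaround.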

Allow the Schema Generator to be able to use Complex Data Types for Parquet and Avro

The Schema Generator is currently only able to generate basic schemas. For example, a map with mixed data types is considered complex and will cause an error: { "answers":{ "field1": "somestring", "field2": 123 } } SCHEMA_GEN_0007 - Map '/answer...
2 months ago in StreamSets 0
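
For readability, the nested document from the truncated example above looks like this:

  {
    "answers": {
      "field1": "somestring",
      "field2": 123
    }
  }

A hedged sketch of the kind of complex Avro type the generator would need to emit for '/answers' (naming here is illustrative, not the product's actual output):

  {
    "type": "record",
    "name": "answers",
    "fields": [
      { "name": "field1", "type": "string" },
      { "name": "field2", "type": "int" }
    ]
  }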

Enable reading NUMBER(1) columns in New Oracle CDC stage as INTEGER rather than BYTE

There is an existing CI in the old system for this; adding it to the Ideas portal for customer visibility. StreamSets documentation explains that we convert Oracle's Number type to SDC data types based on precision and scale, converting NUMBER fie...
4 months ago in StreamSets 0 Under review
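
A minimal sketch of the case in question, assuming a source table like the following (names are illustrative):

  CREATE TABLE app_user (
    user_id   NUMBER(10) NOT NULL,
    is_active NUMBER(1)  NOT NULL   -- currently mapped to the SDC BYTE type; requested: INTEGER
  );

The request is for the New Oracle CDC stage to read NUMBER(1) columns such as is_active as INTEGER.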

Support the isEmptyList EL function in the Stop Condition field in HTTP Client Processor

Attempting to use the isEmptyList EL function in the Stop Condition field of the HTTP Client results in the following error: com.streamsets.datacollector.util.PipelineException: PREVIEW_0003 - Encountered error while previewing : com.streamsets.pi...
about 1 month ago in StreamSets 0 Under review
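
For reference, a hedged example of the kind of Stop Condition expression this covers (the '/results' field path is illustrative):

  ${isEmptyList(record:value('/results'))}

The request is for such an expression to be accepted in the Stop Condition field instead of failing preview with PREVIEW_0003.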

Support LDAP when uids are not "full dn"

Add support for the posixGroup object class to SCH On-Prem's LDAP synchronization. It currently does not support LDAP directories using posixGroup uids, because those entries do not include a member's full dn.
7 months ago in StreamSets 0 Not under consideration
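
To illustrate the gap, a sketch of the two group styles (entries below are made up): full-DN style groups carry each member's complete DN, while posixGroup members are plain uid values, which the current synchronization cannot resolve.

  # groupOfNames: member is a full DN (the style handled today)
  dn: cn=engineering,ou=groups,dc=example,dc=com
  objectClass: groupOfNames
  cn: engineering
  member: uid=jdoe,ou=people,dc=example,dc=com

  # posixGroup: memberUid is just the uid (what the request asks to support)
  dn: cn=engineering,ou=groups,dc=example,dc=com
  objectClass: posixGroup
  cn: engineering
  gidNumber: 5000
  memberUid: jdoe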

Clearing INVITED status for Service Accounts in SCH Platform when serviceAccounts is enabled

SCH's serviceAccounts feature enables SCH Admin users to create SCH API credentials on behalf of other users, so that service accounts (which have no SCH login access or may be outside of the customer's IdP) can obtain API credentials and use the REST...
about 2 months ago in StreamSets 1 Planned for future release

Support for SASL mechanism SCRAM-SHA-512 in Data Collector

This would be useful since this mechanism is more secure than the ones that are officially supported.
about 2 months ago in StreamSets 0 Under review
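
For context, and assuming this concerns the Kafka stages, a hedged sketch of the client properties involved (user and password are placeholders); the request is for SCRAM-SHA-512 to be accepted here in addition to the mechanisms already supported:

  security.protocol=SASL_SSL
  sasl.mechanism=SCRAM-SHA-512
  sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="svc_sdc" password="********";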

Snowflake Bulk not able to read from read-only database due to the temporary file format.

When the Snowflake Bulk stage tries to read from a read-only database, it fails with the message: "Could not create a temporary file format (SNOWFLAKE_47)" due to the lack of permissions to create the file format object in the source database. This is...
3 months ago in StreamSets 1 Under review
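
For clarity, the privilege the stage currently depends on in the source schema is roughly the following, which a read-only (for example, shared) database cannot allow (database, schema, and role names are illustrative):

  GRANT CREATE FILE FORMAT ON SCHEMA source_db.public TO ROLE sdc_reader;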

Provide native capability to write metrics to a standard product, e.g. Prometheus

Platform teams want pipeline metrics to be available to users through standard platforms such as Prometheus. While this can be worked around and instrumented manually, it should be available out of the box.
2 months ago in StreamSets 0 Under review
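
As a sketch of what native support could expose (metric and label names are invented for illustration), Prometheus would scrape a plain-text endpoint such as:

  # HELP pipeline_output_records_total Records written by a pipeline
  # TYPE pipeline_output_records_total counter
  pipeline_output_records_total{pipeline="orders_to_dw",stage="snowflake_dest"} 128345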

Provide out-of-the-box logging capability instead of having to resort to Groovy to write to log files

This would allow developers to instrument the pipeline and write key/custom log messages to the log file. It could be implemented similarly to how we allow creation of rules between stages.
2 months ago in StreamSets 0 Under review
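
For context, a rough sketch of the Groovy Evaluator workaround this idea wants to make unnecessary; the exact script bindings vary by Data Collector version, so treat the log and records objects below as assumptions rather than verified API:

  // Groovy Evaluator script: write a custom message to the SDC log for each record
  for (record in records) {
    log.info('Processing order {}', record.value['order_id'])
    output.write(record)
  }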