IBM Data Platform Ideas Portal for Customers


This portal is for opening public enhancement requests against products and services offered by the IBM Data Platform organization. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com




StreamSets (showing 142 of 18242 ideas)

Subscriptions on job failure fire during job failover between data collectors

For high availability, a job is usually set up to run on a data collector label associated with more than one data collector. If a job fails over from one data collector to another due to some issue, the subscription fires as a job failure...
about 1 year ago in StreamSets · 1 · Is a defect

Support pulling data from a Databricks source via a branded origin stage

Johnson & Johnson, a strategic account, is asking for a branded origin stage for Databricks. Currently the only way to ingest data from Databricks is via the JDBC consumer origins, but there are limitations that are causing them issues. Full c...
8 months ago in StreamSets · 0 · Under review
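For context on the JDBC-based workaround the entry above mentions, here is a minimal sketch of querying Databricks directly from Python with the databricks-sql-connector package; the hostname, HTTP path, token, and table name are placeholder values, and this is not part of any StreamSets stage.

```python
# Minimal sketch of querying Databricks with databricks-sql-connector
# (pip install databricks-sql-connector). All connection values and the
# table name below are placeholders, not real endpoints.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="dapi-XXXXXXXXXXXX",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM samples.nyctaxi.trips LIMIT 10")
        for row in cursor.fetchall():
            print(row)
```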

Add the ability to whitelist by IP per organization in Control Hub

This ask is for Control Hub, upon login to a particular organization, to check a per-organization configuration confirming that the request comes from an allowed IP address (a sketch of such a check follows this entry). Ideally, each web request to Control Hub (or maybe even SDC) would run through a ...
8 months ago in StreamSets · 1 · Functionality already exists
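To illustrate the kind of per-organization check this idea describes, here is a hypothetical sketch using only Python's standard ipaddress module; the organization IDs, CIDR ranges, and function name are made up for illustration and are not Control Hub APIs.

```python
# Hypothetical sketch of a per-organization IP allowlist check, using only
# Python's standard ipaddress module. The org-to-CIDR mapping and the
# function name are illustrative, not part of Control Hub.
import ipaddress

ALLOWED_RANGES = {
    "org-acme": ["203.0.113.0/24", "198.51.100.0/24"],
    "org-example": ["192.0.2.0/24"],
}

def request_allowed(org_id: str, client_ip: str) -> bool:
    """Return True if client_ip falls inside any CIDR range allowed for org_id."""
    ip = ipaddress.ip_address(client_ip)
    return any(
        ip in ipaddress.ip_network(cidr)
        for cidr in ALLOWED_RANGES.get(org_id, [])
    )

print(request_allowed("org-acme", "203.0.113.42"))   # True
print(request_allowed("org-acme", "192.0.2.10"))     # False
```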

Improve Snowflake Error Behavior When Merge Enabled for CDC Data

When moving CDC data to Snowflake using the MERGE statement, some records in a batch may occasionally be invalid. The "continue" error behavior configuration for the Snowflake stage does not apply for MERGE statements, and the entire batch fails, ...
over 1 year ago in StreamSets · 0 · Not under consideration
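As a rough illustration of the "continue" behavior being requested above, the sketch below shows a batch-then-per-record fallback in plain Python; merge_records is a hypothetical stand-in for a Snowflake MERGE call, not StreamSets or Snowflake API code.

```python
# Illustrative sketch (not StreamSets code) of "continue" error behavior for a
# batched MERGE: try the whole batch first, and if it fails, retry records one
# at a time so valid records still land and only bad ones go to error handling.
from typing import Iterable

def merge_records(records: Iterable[dict]) -> None:
    """Hypothetical stand-in for a Snowflake MERGE over `records`; raises on bad data."""
    for r in records:
        if r.get("id") is None:          # simulate an invalid CDC record
            raise ValueError(f"invalid record: {r}")

def merge_with_continue(batch: list[dict]) -> list[dict]:
    """Try the whole batch; on failure, retry per record and collect the failures."""
    try:
        merge_records(batch)
        return []
    except ValueError:
        errors = []
        for record in batch:
            try:
                merge_records([record])
            except ValueError:
                errors.append(record)    # would be routed to the error stream
        return errors

batch = [{"id": 1, "op": "U"}, {"id": None, "op": "D"}, {"id": 3, "op": "I"}]
print(merge_with_continue(batch))        # only the invalid record is reported
```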

Mongo "processor" stage (for delete / insert etc) so we can guarantee order of operations

A Mongo 'processor' stage that can do inserts and deletes in the middle of a pipeline, rather than only at a destination, effectively guaranteeing the order in which these operations occur (a pymongo sketch of this ordering follows this entry). I'd like to handle an update scenario by first deleti...
8 months ago in StreamSets · 0 · Under review
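To make the requested ordering concrete, here is a hedged sketch in plain pymongo (outside StreamSets) that deletes existing documents and then inserts replacements, so the two operations happen in a fixed order; the connection string, database, and collection names are placeholders and assume a reachable MongoDB instance.

```python
# Hedged sketch of the delete-then-insert ordering described above, using plain
# pymongo rather than a StreamSets stage. Connection string, database, and
# collection names are placeholders; a MongoDB server must be reachable.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["demo"]["orders"]

def replace_order(order_id: str, new_docs: list[dict]) -> None:
    """Delete all documents for order_id, then insert the new versions."""
    orders.delete_many({"order_id": order_id})   # step 1: delete existing documents
    if new_docs:
        orders.insert_many(new_docs)             # step 2: insert, strictly after the delete

replace_order("A-100", [{"order_id": "A-100", "qty": 3}, {"order_id": "A-100", "qty": 5}])
```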

Enhancing the SCH Alert Screen with Job and Topology Details

The current alert screen on the platform displays only 'Messages' and 'Triggered On' values. This idea is to include 'Job Name' and 'Topology Name' (and potentially other fields) directly on the screen. Currently, users must navigate through multi...
9 months ago in StreamSets · 0

Implementing User Roles to Restrict Pipeline Usage Without Connection Objects

In stages where data is read from or written to, it is essential to define a connection, such as a database, storage, or HTTP server. This requirement can pose a security risk, as users might enter databases or locations that comply with firewall ...
9 months ago in StreamSets · 0 · Under review

Add Option to Truncate Table in Destination Stage Before Inserting Data

In many data transfer scenarios, it is essential to ensure that the destination table is empty before inserting a fresh set of data. Currently, this requires the creation of a separate pipeline to truncate the table, followed by an orchestration p...
9 months ago in StreamSets · 1 · Not under consideration
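For illustration of the truncate-before-load pattern this idea wants built into the destination stage, here is a self-contained sketch; SQLite is used only so the example runs anywhere, and a real warehouse target would issue a TRUNCATE TABLE statement before the batch insert instead.

```python
# Illustrative sketch of truncate-before-load. SQLite is used only to keep the
# example self-contained; in a warehouse the DELETE would be a TRUNCATE TABLE
# issued before inserting the fresh set of data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, name TEXT)")
conn.execute("INSERT INTO staging VALUES (1, 'stale row')")

def load_fresh(rows: list[tuple[int, str]]) -> None:
    """Empty the destination table, then insert the fresh set of rows."""
    conn.execute("DELETE FROM staging")              # TRUNCATE equivalent in SQLite
    conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)
    conn.commit()

load_fresh([(10, "fresh A"), (11, "fresh B")])
print(conn.execute("SELECT * FROM staging").fetchall())   # only the fresh rows remain
```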

Support for an origin to read from Azure ADLS/Blob File Shares

Product: IBM StreamSets Data Collector. The customer has a use case where they need to read data from ADLS File Shares. This could be achieved through Groovy scripting using the Groovy origin; however, having a dedicated stage for it would be much more helpful (a sketch of the equivalent Azure SDK call follows this entry).
9 months ago in StreamSets · 0 · Under review
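As a sketch of what such an origin might wrap, the example below reads a file from an Azure File Share with the azure-storage-file-share Python SDK; the connection string, share name, and file path are placeholders, and this is not StreamSets stage code.

```python
# Hedged sketch (outside StreamSets) of reading a file from an Azure File Share
# with the azure-storage-file-share SDK (pip install azure-storage-file-share).
# Connection string, share name, and file path are placeholders.
from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net",
    share_name="ingest-share",
    file_path="landing/orders.csv",
)

# Download the whole file into memory; for large files, stream in chunks instead.
data = file_client.download_file().readall()
print(data.decode("utf-8")[:200])
```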

Improve parameter visibility when using job templates

When creating a Job Instance from a Job Template which uses parameters, the created job instance does not recognize or display the parameters in some scenarios. This occurs in the following scenarios: When creating the job template, the default (a...
9 months ago in StreamSets · 1 · Planned for future release