IBM Data and AI Ideas Portal for Customers


Use this portal to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:


Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.


Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you:

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

IBM Employees should enter Ideas at https://ideas.ibm.com



Connectivity

Showing 74 ideas

Timeout value is set to 5 min by default within the redshift-connector

We are encountering issues when we try to load data from DB2 to Redshift. The data is extracted from DB2 to a file in Avro format -> uploaded to S3 -> a COPY command is called on the Redshift cluster. The problem is that COPY does execute, howe...
over 1 year ago in Connectivity 0 Delivered
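The entry above concerns the DataStage Redshift connector, where the 5-minute timeout is not exposed; as a hedged sketch of the underlying load step, the COPY of an Avro extract and a session-level timeout can both be expressed in plain Redshift SQL (the table, bucket, and IAM role names below are hypothetical):

```python
def build_copy_statement(table: str, s3_path: str, iam_role: str) -> str:
    """Compose the Redshift COPY that loads an Avro extract from S3."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS AVRO 'auto'"
    )

def build_timeout_statement(minutes: int) -> str:
    """Raise Redshift's statement_timeout (milliseconds) past a 5-minute default."""
    return f"SET statement_timeout TO {minutes * 60 * 1000}"

# Hypothetical names, for illustration only.
copy_sql = build_copy_statement(
    "staging.orders",
    "s3://example-bucket/extracts/orders.avro",
    "arn:aws:iam::123456789012:role/redshift-copy",
)
timeout_sql = build_timeout_statement(30)
```

In the real connector the timeout would be a connector property rather than a handwritten SET, but the two statements show what the request amounts to at the SQL level.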

HA support for Kerberos and Hive

The configuration can't point to different/dynamic servers to support high availability (HA).
over 1 year ago in Connectivity 0 Delivered
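HiveServer2 itself supports HA through ZooKeeper service discovery, so one sketch of what a dynamic, Kerberos-aware connection string could look like is a JDBC URL that names the ZooKeeper quorum instead of a single static host (hosts, namespace, and realm below are placeholders):

```python
def hive_ha_jdbc_url(zk_hosts, namespace="hiveserver2",
                     principal="hive/_HOST@EXAMPLE.COM"):
    """Build a HiveServer2 JDBC URL that resolves the active server via
    ZooKeeper service discovery rather than a single static host."""
    quorum = ",".join(zk_hosts)
    return (
        f"jdbc:hive2://{quorum}/;serviceDiscoveryMode=zooKeeper;"
        f"zooKeeperNamespace={namespace};principal={principal}"
    )

url = hive_ha_jdbc_url(["zk1:2181", "zk2:2181", "zk3:2181"])
```

The `principal=` part carries the Kerberos service principal; `_HOST` is substituted with the resolved server's hostname by the Hive client.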

Generic to Generic S3 Storage

Clients have storage arrays that support an internal S3 Object Storage bucket-type connection. These can be from various vendors. Such connections use the generic S3 protocol to establish connectivity to the storage bucket. However, currently CPD on...
over 2 years ago in Connectivity 1 Delivered
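As a sketch of what a generic S3 connection amounts to under the hood: S3-compatible appliances are usually reached by overriding the endpoint URL, for example with boto3 (the endpoint and credentials below are placeholders):

```python
def s3_client_kwargs(endpoint_url: str, access_key: str, secret_key: str) -> dict:
    """Arguments for boto3.client() pointing at a non-AWS, S3-compatible endpoint."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,        # vendor appliance, not AWS
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

# Hypothetical on-prem endpoint, for illustration only.
kwargs = s3_client_kwargs(
    "https://s3.storage.example.internal",
    "ACCESS_KEY", "SECRET_KEY",
)
# client = boto3.client(**kwargs)  # then use the standard S3 calls
```

Everything after the client is created is ordinary S3 API usage, which is why a generic S3 connection type covers many vendors at once.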

Enhance MERGE operation on BQ Connector

Walmart is adding this as a placeholder for discussions with IBM concerning enhancement of the BQ connector's MERGE operations under long-term support.
almost 3 years ago in Connectivity 0 Delivered

BQ Connector enhancement to perform retry logic within connector

Within the BQ connector, if an error is encountered while connecting to BQ, the connector should automatically retry.
almost 3 years ago in Connectivity 0 Delivered
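A common shape for such retry logic is exponential backoff with jitter; a minimal sketch (the flaky call below is a stand-in for a BQ connection attempt, not the connector's real API):

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay=0.01):
    """Call operation(), retrying on connection errors with exponential
    backoff plus jitter; re-raise once attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))

# Demo with a stand-in for a flaky BQ call: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_bq_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient BQ error")
    return "loaded"

result = with_retries(flaky_bq_call)
```

Catching only transient error types (here `ConnectionError`) matters: retrying a permanent error such as a bad credential just delays the failure.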

Logging the Google BQ job ID in IBM DataStage logs for unique searchability based on BQ job ID

When diagnosing DataStage job failures, especially on the BQ target/source side, it is difficult to trace the problem cause due to the lack of a job ID, especially in busy environments. This would apply to both the BQ Connector and the Simba ...
almost 3 years ago in Connectivity 0 Delivered
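The fix amounts to surfacing the job id the Google client already returns (in the Python BigQuery client, load and query jobs expose a `job_id` attribute) in every connector log line. A minimal sketch of a greppable log format, with hypothetical names:

```python
def bq_log_line(stage: str, job_id: str, state: str) -> str:
    """One key=value line tying a DataStage stage to a BQ job id, so the same
    id can be searched in both DataStage logs and the BQ job history."""
    return f"stage={stage} bq_job_id={job_id} state={state}"

# Hypothetical stage name and job id, for illustration only.
line = bq_log_line("bq_target_stage", "job_20210415_abc123", "DONE")
```

With the id in a fixed `key=value` form, one `grep bq_job_id=` correlates a failing DataStage run with the matching entry in BQ's own job history.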

Extending temp file creation in BQ to include microseconds

The file names do not include the full microseconds: the timestamp stops at only 3 millisecond digits, and all the files are getting created with 4 zeroes at the end. Node 4 and node 21 tried to create a file with the same name, bigQueryTempFile20210415133...
about 3 years ago in Connectivity 0 Delivered
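A sketch of the requested naming scheme: Python's `%f` directive yields all six microsecond digits, and appending the node number (an assumption of this sketch, not the connector's current format) rules out collisions even within the same microsecond:

```python
from datetime import datetime

def temp_file_name(node: int, now=None) -> str:
    """Temp file name with full microsecond precision plus the node number."""
    stamp = (now or datetime.now()).strftime("%Y%m%d%H%M%S%f")  # %f = 6 digits
    return f"bigQueryTempFile{stamp}_n{node}"

name = temp_file_name(4, datetime(2021, 4, 15, 13, 30, 0, 123456))
```

Two nodes creating files in the same millisecond now differ both in the last three timestamp digits and in the node suffix.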

BQ connector parallelism solution for export job flow

The BQ connector has issues exporting data from BQ to the on-prem DataStage solution in parallel mode. We would like it to work in parallel.
about 3 years ago in Connectivity 1 Delivered
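One common way such export parallelism is achieved is to read partitions (or BigQuery Storage Read API streams) concurrently and merge the results; a sketch with a stub standing in for the per-partition read:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_partition(partition_id: int):
    """Stand-in for reading one partition/stream of the BQ export."""
    return [f"row-{partition_id}-{i}" for i in range(3)]

def parallel_export(partition_ids, workers=4):
    """Read all partitions concurrently; map() preserves partition order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(fetch_partition, partition_ids)
        return [row for chunk in chunks for row in chunk]

rows = parallel_export(range(4))
```

In a real connector each worker would map onto a DataStage player process or a Storage Read API stream; the stub only illustrates the fan-out/merge shape.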

Increase data block size when writing to BQ/Google Cloud

Maximize your single-stream performance by uploading objects larger than 16 MB if you can control the object size, as there is a significant knee in the curve as the object size increases from 2 to 16 MB. Doing so will very likely also decrease your Cloud ...
about 3 years ago in Connectivity 0 Delivered
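The advice above boils down to buffering serialized records until an upload-sized object has accumulated; a minimal sketch (the demo shrinks the threshold to 10 bytes so the behaviour is visible):

```python
def chunk_for_upload(records, min_object_bytes=16 * 1024 * 1024):
    """Yield concatenated byte blobs of at least min_object_bytes each
    (the final blob may be smaller), keeping uploads past the 16 MB knee."""
    buf, size = [], 0
    for rec in records:
        buf.append(rec)
        size += len(rec)
        if size >= min_object_bytes:
            yield b"".join(buf)
            buf, size = [], 0
    if buf:
        yield b"".join(buf)

# Demo with a tiny 10-byte threshold instead of 16 MB.
blobs = list(chunk_for_upload([b"aaaa", b"bbbb", b"cccc", b"dd"],
                              min_object_bytes=10))
```

Each yielded blob would become one object written to Google Cloud Storage, so fewer, larger uploads replace many small ones.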

BQ connector to support Before SQL/After SQL (Fail on Error: Yes/No)

We want to control whether the job should fail on a Before/After SQL statement error, just like we can in the Teradata connector.
about 3 years ago in Connectivity 1 Delivered
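A sketch of the requested semantics, mirroring a Fail on Error = Yes/No property (the `execute` callable stands in for the connector's SQL execution and is an assumption of this sketch):

```python
def run_sql(execute, statement: str, fail_on_error: bool = True) -> bool:
    """Run a Before/After SQL statement; when fail_on_error is False,
    warn and continue instead of aborting the job."""
    try:
        execute(statement)
        return True
    except Exception as exc:
        if fail_on_error:
            raise                       # abort the job, as today
        print(f"warning: ignoring error from {statement!r}: {exc}")
        return False

def failing_execute(sql):
    """Stand-in for a statement that errors, e.g. creating an existing table."""
    raise RuntimeError("object already exists")

ok = run_sql(failing_execute, "CREATE TABLE t (id INT64)", fail_on_error=False)
```

With `fail_on_error=False` the error is logged and the job proceeds; with the default `True` the exception propagates and the job fails, matching the two settings of the property.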