IBM Data and AI Ideas Portal for Customers


Status: Planned for future release
Created by: Guest
Created on: Apr 2, 2020

Roll back changes to both accelerators if a failure occurs while loading multiple accelerators in one load statement

We have a situation where it is important to keep the data in sync on both accelerators, so we use one statement to load both accelerators.

Real case: Data is loaded from the OPERLOG logstream into a shadow table every 15 minutes on two IDAA accelerators. To avoid loading duplicate data, we load only the rows written to the logstream since the last load ran. To do this, we select the max timestamp from the table in IDAA and then load into IDAA only the logstream rows with a timestamp greater than that max. This works well unless the load to one accelerator fails while the other succeeds. The max timestamps then differ between the two accelerators, and because we are not guaranteed which accelerator the job will retrieve the max timestamp from, the next load run can produce either a gap or duplicates.
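The failure mode above can be sketched in a small simulation. This is illustrative only, assuming nothing about the actual IDAA load interface: the two accelerators are modeled as plain lists of row timestamps, and `load_delta` stands in for the "select max timestamp, then load newer rows" cycle.

```python
from datetime import datetime, timedelta

def load_delta(logstream, accel, fail=False):
    """Load rows newer than accel's max timestamp.

    Hypothetical stand-in for the real incremental load: fail=True
    simulates the load to that accelerator failing, so the accelerator
    keeps its old max timestamp.
    """
    max_ts = max(accel) if accel else datetime.min
    if fail:
        return
    accel.extend(row for row in logstream if row > max_ts)

base = datetime(2020, 4, 2)
# One logstream row every 5 minutes.
logstream = [base + timedelta(minutes=m) for m in range(0, 30, 5)]
accel_a, accel_b = [], []

# First 15-minute cycle: the load succeeds on A but fails on B.
load_delta(logstream[:3], accel_a)
load_delta(logstream[:3], accel_b, fail=True)

# The accelerators now disagree on the max timestamp, so the delta
# computed for the next cycle depends on which one the job queries:
# using A's max skips B's missing rows (a gap on B); using B's max
# reloads rows A already has (duplicates on A).
print(len(accel_a), len(accel_b))
```

A single load statement that rolls back both accelerators on any failure would keep the two max timestamps identical, so either accelerator could safely supply the watermark for the next cycle.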

Needed by date: Sep 1, 2020
  • Admin
    Chris Pomasl
    Dec 15, 2023

    This request is being planned with a tentative target of 1Q2024.