IBM Data and AI Ideas Portal for Customers

Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Post your ideas

Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.

  1. Post an idea

  2. Upvote ideas that matter most to you

  3. Get feedback from the IBM team to refine your idea

Help IBM prioritize your ideas and requests

The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team then decides whether they can begin working on it. If work can start during the next development cycle, the idea is placed on the priority list. Each team at IBM works on its own schedule: some ideas can be implemented right away, while others are placed on a later schedule.

Receive notification on the decision

Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas that cannot be implemented in a reasonable time.

Additional Information

To view our roadmaps:

Reminder: This is not the place to submit defects or support needs; please use the normal support channels for those cases.

IBM Employees:

The correct URL for entering your ideas is:

Status Not under consideration
Workspace Db2
Components Serviceability
Created by Guest
Created on Feb 9, 2015

Optimizer should account for bufferpool hit ratio when estimating page access time. (RIMS)

In PMR 00962,7TD,000, Goldman Sachs had a scenario where the optimizer was choosing a more expensive HSJN over a less expensive NLJN. Shu Lin summarizes the problem well:

'In terms of CPU cost and number of I/Os, we correctly estimate that NLJN is better. We end up not favoring NLJN because we estimate NLJN requires a lot more seeks than HSJN. We prefetch the NLJN outer (table scan) but not the inner (ixscan-fetch) because we estimate the access pattern on the inner is random: the index has a low clustering ratio. That is, for every inner page accessed, we estimate a seek. For HSJN, we prefetch both the probe side (list prefetch) and the build side (table scan). The number of seeks is (number of pages accessed) / (block_size), where block_size is calculated from the prefetch size and the extent size. As such, NLJN requires more seeks than HSJN, which drives up its total cost and makes it unfavorable. In reality, the tables are cached in the bufferpool, so these seeks don't happen. That is why NLJN runs faster, which is consistent with our estimation (recall that in terms of CPU cost and number of I/Os, NLJN is better).'

Shu Lin also recommends a method to get the optimizer to not estimate seeks:

'Regarding whether there is a switch to get the optimizer to not estimate seeks: we can alter the tablespace overhead to some low value. Since we compute seek cost based on overhead, a smaller overhead reduces our seek cost estimation. However, if overhead is set too low, it can cause the optimizer to pick a really bad plan if the pages do need to be read from disk. If you want to adjust the overhead setting, it is recommended you test it out on your test system in a controlled manner, e.g. move selected tables to a dedicated tablespace and alter the overhead of that tablespace.'

However, since Goldman deploys hundreds of instances, and queries are written by tens of thousands of app developers, performing any kind of hand-tuning of queries is always difficult.
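The seek estimation described above can be sketched numerically. This is a hypothetical illustration: the page counts, prefetch/extent sizes, overhead value, and the exact block_size derivation are all invented, not taken from the PMR; only the shape of the formulas follows Shu Lin's description (one seek per page for an unclustered inner access, pages / block_size seeks for a prefetched scan, seek cost scaled by the tablespace overhead).

```python
# Hypothetical sketch of the optimizer's seek estimation as described in
# the PMR discussion. All numbers are invented for illustration only.

def seeks_random(pages_accessed):
    # Low clustering ratio: the optimizer assumes one seek per inner page.
    return pages_accessed

def seeks_prefetched(pages_accessed, prefetch_size, extent_size):
    # block_size is "calculated based on prefetch size and extent size";
    # the min() here is an assumption, not the real Db2 formula.
    block_size = min(prefetch_size, extent_size)
    return pages_accessed / block_size

overhead_ms = 6.7  # per-seek cost from the tablespace overhead setting

# NLJN: outer table scan (prefetched) + inner ixscan-fetch (random access)
nljn_seeks = seeks_prefetched(1000, 32, 32) + seeks_random(5000)
# HSJN: build side table scan + probe side list prefetch (both prefetched)
hsjn_seeks = seeks_prefetched(1000, 32, 32) + seeks_prefetched(5000, 32, 32)

nljn_seek_cost = nljn_seeks * overhead_ms
hsjn_seek_cost = hsjn_seeks * overhead_ms
# The random inner access inflates NLJN's seek count, so its seek cost
# dwarfs HSJN's even though its CPU cost and I/O count are lower.
```

With these sample numbers NLJN is charged roughly 27 times the seek cost of HSJN, which is enough to outweigh its CPU and I/O advantage in the total cost comparison.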
Goldman DBAs often push for DB2 deployments over other vendors because the DB2 optimizer requires fewer optimizer hints than competing products, so reducing the total cost of ownership in this regard is a strategic advantage for DB2. We would like to propose that the optimizer take the bufferpool hit ratio into account. Ideally this would happen automatically, but a config/registry option might also make sense if it is granular enough. This is the basis of the new requirement.
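A minimal sketch of the requested behavior, under the simplest possible model (an assumption of this illustration, not a design from the idea text): only the fraction of estimated seeks expected to miss the bufferpool pays the per-seek overhead. The hit-ratio value and seek counts below are hypothetical.

```python
# Hypothetical sketch of the proposal: discount estimated seeks by the
# bufferpool hit ratio, so fully cached access patterns stop being
# penalized for seeks that never reach disk. Numbers are invented.

def seek_cost(estimated_seeks, overhead_ms, hit_ratio=0.0):
    # Only the expected bufferpool *misses* should incur seek overhead.
    expected_misses = estimated_seeks * (1.0 - hit_ratio)
    return expected_misses * overhead_ms

OVERHEAD_MS = 6.7
NLJN_SEEKS = 5000   # random inner access: one estimated seek per page
HSJN_SEEKS = 190    # prefetched on both sides: pages / block_size

# Cold bufferpool: the current costing is reasonable; HSJN looks cheaper.
cold_nljn = seek_cost(NLJN_SEEKS, OVERHEAD_MS, hit_ratio=0.0)
cold_hsjn = seek_cost(HSJN_SEEKS, OVERHEAD_MS, hit_ratio=0.0)

# Hot bufferpool (tables essentially fully cached): most seeks never
# happen, so NLJN's seek penalty largely disappears and its existing
# CPU and I/O advantage can carry the total cost comparison.
hot_nljn = seek_cost(NLJN_SEEKS, OVERHEAD_MS, hit_ratio=0.99)
hot_hsjn = seek_cost(HSJN_SEEKS, OVERHEAD_MS, hit_ratio=0.99)
```

The appeal of feeding a measured hit ratio into the model, rather than lowering the tablespace overhead globally, is that the discount adapts per workload: a cold bufferpool keeps the full seek penalty, so the bad-plan risk of a too-low overhead setting is avoided.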