One of the frequent questions I receive from application teams when dealing with XML data is:
What is the max size of my XML documents in the database?
What is the average size of those documents?
Today, Db2 has no easy answer for that: the LENGTH() function does not support XML types, and you are forced to serialize each document to a CLOB type just to get its length/size.
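For reference, the workaround available today looks like the following sketch (MYTABLE and XMLCOL are hypothetical names; the CLOB size must be large enough for the biggest document in the column):

```sql
-- Current workaround: serialize every document to a CLOB just to measure it.
-- Materializing each document is what consumes the TEMPSPACE described below.
SELECT MAX(LENGTH(XMLSERIALIZE(XMLCOL AS CLOB(1G)))) AS max_len,
       AVG(LENGTH(XMLSERIALIZE(XMLCOL AS CLOB(1G)))) AS avg_len
FROM   MYTABLE;
```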
This idea is to close that gap by making the LENGTH() function work against XML data types.
Instead of serializing the document for each row, the serialized length could be saved in the XDA area as metadata, alongside the actual XML content. Every time an XML column is inserted or updated, Db2 could compute its serialized length and save that value in a metadata section together with the XML document.
This would make the information cheap to retrieve later: the LENGTH() function could simply read the stored value instead of having to compute the XML size at run time.
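With the pre-computed length, the original question becomes a plain aggregate. This is a sketch of the proposed behavior, not syntax that works today, and MYTABLE/XMLCOL remain hypothetical names:

```sql
-- Proposed: LENGTH() reads the serialized length stored as XDA metadata,
-- so no document is serialized at query time and no TEMPSPACE is consumed.
SELECT MAX(LENGTH(XMLCOL)) AS max_len,
       AVG(LENGTH(XMLCOL)) AS avg_len
FROM   MYTABLE;
```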
Today, it is almost IMPOSSIBLE to retrieve that information for BIG tables. Db2 consumes a LOT of TEMPSPACE to serialize the documents to CLOB types in order to compute the max/avg values for a column across the entire table.
Just to give you an idea: Db2 consumed more than 20 GB of TEMPSPACE to determine the max XML length for a table with around 30 million rows. I support databases with XML columns on tables holding more than 1 billion rows, and we simply don't have enough space to compute their sizes.
This new approach would avoid the serialization entirely, improving overall performance and completely removing the need for large amounts of TEMPSPACE just to determine XML document sizes on behalf of customer requests.