We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team then decides whether they can begin working on your idea. If work can start during the next development cycle, the idea is placed on the priority list. Each team at IBM works on its own schedule: some ideas can be implemented right away, while others are placed on a longer-term schedule.
Receive notification of the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to suggest alternatives for ideas that cannot be implemented in a reasonable time.
Need to ensure that Cloud Pak for Data pulls the exact version from the IBM Operator Catalog and not the latest version
Currently, when installing the DataStage cartridge with Terraform, only the latest version of the DataStage cartridge gets pulled from the IBM Operator Catalog. If an older version is passed as a parameter to Terraform, the install fails. Need a ...
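For context on what pinning an exact version can look like: operators installed through the Operator Lifecycle Manager can be held at a specific release by setting `startingCSV` together with manual install-plan approval on the `Subscription` resource, rather than letting the catalog resolve to the latest version. This is a minimal sketch only; the package name, channel, and CSV version shown below are hypothetical placeholders, not values from the IBM Operator Catalog:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: datastage-subscription        # hypothetical name
  namespace: cpd-operators            # hypothetical namespace
spec:
  source: ibm-operator-catalog
  sourceNamespace: openshift-marketplace
  name: ibm-cpd-datastage-operator    # hypothetical package name
  channel: v1.0                       # hypothetical channel
  installPlanApproval: Manual         # prevent automatic upgrade to latest
  startingCSV: ibm-cpd-datastage.v4.6.0  # hypothetical: pin this exact version
```

With `installPlanApproval: Manual`, the cluster will not upgrade past the pinned CSV until an administrator approves a new install plan, which is the behavior the idea above asks Terraform to honor.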
Actually, Git integration only works for projects containing Notebooks, not DataStage flows. It is even worse: when creating a project with Git integration, it is no longer possible to add DataStage flows AT ALL! Only projects w/o git integr...
Add a feature for more frequent commits when using the DB2 connector
When loading a large amount of data into DB2 for Warehousing, we exceed the DB2 limitation of 300 GB, which means we need to load the data with more than just one commit. However, when using external tables in DataStage, the DB2 connector ignores the co...
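The behavior requested above is periodic commits during a bulk load, so that no single transaction grows past the database's limit. A minimal sketch of the idea, using Python's built-in `sqlite3` as a stand-in database (the table name, column, and chunk size are illustrative, not part of the DB2 connector):

```python
import sqlite3

def load_in_chunks(conn, rows, chunk_size=2):
    """Insert rows, committing after every chunk_size rows so that no
    single transaction holds the entire load (the commit behavior the
    idea above requests from the DB2 connector)."""
    cur = conn.cursor()
    commits = 0
    for i in range(0, len(rows), chunk_size):
        cur.executemany("INSERT INTO t(val) VALUES (?)",
                        [(r,) for r in rows[i:i + chunk_size]])
        conn.commit()  # bound the transaction size
        commits += 1
    return commits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(val TEXT)")
n = load_in_chunks(conn, ["a", "b", "c", "d", "e"], chunk_size=2)
print(n)  # → 3 (three commits for five rows at chunk size 2)
```

The trade-off is the usual one for chunked loads: smaller chunks bound transaction size and log growth, at the cost of more commit overhead.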
CP4D - DataStage will be used in the data flows defined in our data lake using Postgres and Azure SQL. Because we are required to use the ODBC Connector for these connections, we require the capability of doing bulk loads, exactly as this is offered ...
Add/remove collaborators in Transformation Projects
Transformation projects have the following problem: it is not possible to restrict access to individual projects by user; all users with the Data Engineering role can access all projects. The requested improvement is being able to add and remove co...
Make it more difficult to accidentally close windows in DataStage
When working in (for example) a Transformer, if you accidentally click outside of the pop-up window, the window closes as if you had hit the Cancel button, and all the work you'd done so far is gone. This can be extremely frustrating. On a similar n...
Replace REST API calls for XMETA information with database connectors for much faster retrieval
Hello, my team is trying to run a job in DataStage on ING Datalakes, and it takes around 10+ hours to complete.
The performance issue is caused by the REST API calls made while retrieving data from XMETA; a daily run is questionable with this approach....