We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Post ideas and requests to enhance a product or service. Browse the ideas that others have posted, and upvote the ones that matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may ask for more information or feedback to help refine your idea. The product management team then decides whether they can begin working on it; if work can start during the next development cycle, the idea is placed on the priority list. Each team at IBM works on its own schedule, so some ideas can be implemented right away while others are scheduled for later.
Receive notification on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to suggest alternatives for ideas that cannot be implemented in a reasonable time.
Upgrade the Bulk Load stage to use TLS 1.2 (server jobs)
I manage a DWH environment in which about 1,000 server jobs run every day, performing data loads with the MS SQL Server Bulk Load DataStage objects (DataStage version 11.7). Those DataStage objects were developed to use an embedded co...
Add Avro management in the Kafka Connector component
Hello, the Kafka Connector only supports Avro schemas with simple types, not "Record", "Array", "Enum", or "Map". Please add support for "Record", "Array", "Enum", and "Map"; it is impossible to work without this.
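To illustrate what this request covers, here is a minimal sketch of an Avro schema (record, field, and enum names are assumptions, not from any real system) that mixes the complex types mentioned above. A connector limited to simple types would be unable to handle fields like these.

```python
import json

# Illustrative Avro schema combining the four complex types the idea
# mentions: record (the top level), array, enum, and map.
schema = {
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "id", "type": "string"},  # simple type: supported today
        {"name": "items", "type": {"type": "array", "items": "string"}},
        {"name": "status", "type": {"type": "enum", "name": "Status",
                                    "symbols": ["NEW", "SHIPPED"]}},
        {"name": "tags", "type": {"type": "map", "values": "string"}},
    ],
}
print(json.dumps(schema, indent=2))
```

Only the `id` field uses a simple type; the other three fields are the complex-type cases the connector would need to start handling.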
For all our ETL transformations across NYL, we use custom SQL to extract the data.
Workaround in POC:
We are using the JDBC connector to read data from a Redshift database.
Using native redshift connector wi...
Support for functionality to retrieve job logs via REST API
Currently the DataStage Flow Designer REST API does not provide a function to retrieve logs upon job completion. We would like to see such functionality added, similar to the dsjob -logsum and -logdetail options. Thank you!
The current Snowflake connector does not support inserting only new rows using a merge statement if there are only key columns and no attributes. For example, if you want to have every possible unique combination of 2 columns, there is currently n...
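As a sketch of the workaround users must apply by hand today, the snippet below assembles a Snowflake MERGE statement that inserts only new key combinations, with no non-key attributes to update. The table and column names are illustrative assumptions, not anything the connector generates.

```python
# Illustrative key-only MERGE: insert a (col_a, col_b) combination only
# when it does not already exist in the target. Table/column names are
# assumptions for the example.
key_cols = ["col_a", "col_b"]

on_clause = " AND ".join(f"t.{c} = s.{c}" for c in key_cols)
cols = ", ".join(key_cols)
src_cols = ", ".join(f"s.{c}" for c in key_cols)

merge_sql = (
    f"MERGE INTO target_tbl t USING staging_tbl s ON {on_clause} "
    f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({src_cols})"
)
print(merge_sql)
```

Because there are no attribute columns, the statement has no WHEN MATCHED branch at all; matched rows are simply left alone, which is the behavior the connector cannot currently express.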
Make encrypted connections to the Access Server with chcclp without reading tls.properties
We have an environment where encrypted communication is mandatory. When users want to execute chcclp scripts on the Access Server, they need to read the Access Server's tls.properties file to find the location of the trust store. But in the t...
Please add Hierarchical Stage Assembly Editor in DataStage Designer without Adobe Flash
Currently the Hierarchical Stage Assembly Editor in DataStage Designer requires Adobe Flash. The new Data Flow Designer by no means matches the capabilities and richness of DataStage Designer. Please provide the capability of editing the Assembly ...
almost 2 years ago · Not under consideration
DataStage 11.7 SP3: automatic management of Kerberos tickets from the YARN client
Hello Team, the customer needs the YARN client of DataStage 11.7 SP3 on Linux to renew the Kerberos ticket fully autonomously upon expiry, without creating any disruption from destroying and recreating the ticket. Thanks, Nicolò.
Request to implement Auto Map functionality on Output in Hierarchical Stage
I'm using the Hierarchical Data stage to pull data via a REST API and then feed that through a JSON_Parser stage and finally map it to the output columns on the Output tab. I know in the DataStage thick client there was an option to use the Auto M...
Do not place IBM confidential, company confidential, or personal information into any field.