We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team then decides whether they can begin working on it. If work can start during the next development cycle, the idea is placed on the priority list. Each team at IBM works on its own schedule, so some ideas can be implemented right away while others are scheduled for a later cycle.
Receive notification on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas which cannot be implemented in a reasonable time.
We are unable to grant access to all cases for a single system to the whole team responsible for a product. You can either see your own tickets or all tickets for the ICN (which includes multiple products). We need a way to group people together so they...
Our customer wants WKC to be connected to DENODO with Generic JDBC
Our customer wants WKC to be connected to DENODO with Generic JDBC. Currently CP4D can connect to DENODO with Generic JDBC in DV. It is described in this document: https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=overview-whats-new#whats-new__refre...
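As context for what such a Generic JDBC connection could look like, here is a minimal sketch, assuming the Denodo VDP JDBC driver class com.denodo.vdp.jdbc.Driver and its default port 9999; the host, virtual database, and credentials are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DenodoJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Denodo VDP JDBC driver; the driver jar must be on the classpath.
        Class.forName("com.denodo.vdp.jdbc.Driver");

        // Placeholder host, port, and virtual database name.
        String url = "jdbc:vdb://denodo-host:9999/customer_vdb";

        try (Connection conn = DriverManager.getConnection(url, "wkc_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```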
Teradata connector restricts max Response Row size to 64K bytes only
We have data sources where column field lengths are in excess of varchar(3000). Using the Teradata connector in immediate mode, the job fails with the error "RDBMS code 9990: Response Row size exceeds 64K bytes and is incompatible with the Client softwa...
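As a rough way to see which queries are at risk, here is a hedged sketch that connects with the Teradata JDBC driver and sums the declared column widths from result-set metadata; the host, database, table, and credentials are placeholders, and the 64K figure is simply the limit quoted in the error above (character-set width is not accounted for, so this is only an approximation):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class TeradataRowWidthCheck {
    public static void main(String[] args) throws Exception {
        // Teradata JDBC driver URL; placeholder host and database.
        String url = "jdbc:teradata://td-host/DATABASE=sales";

        try (Connection conn = DriverManager.getConnection(url, "dbc", "dbc");
             Statement stmt = conn.createStatement();
             // WHERE 1=0 fetches no rows but still exposes the result-set metadata.
             ResultSet rs = stmt.executeQuery("SELECT * FROM sales.wide_table WHERE 1=0")) {

            ResultSetMetaData md = rs.getMetaData();
            long declaredWidth = 0;
            for (int i = 1; i <= md.getColumnCount(); i++) {
                // getPrecision() returns the declared length for character columns.
                declaredWidth += md.getPrecision(i);
            }
            System.out.println("Declared row width (approx): " + declaredWidth);
            System.out.println("Exceeds 64K response row limit: " + (declaredWidth > 64 * 1024));
        }
    }
}
```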
Oracle connectors should support more flexible connect strings
The Oracle connector only allows configuring SID (or service)/port/hostname, whereas Oracle connect strings can normally be considerably more complex. This does not allow for connection manager configurations or even a simple failover pair of hos...
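For context, a full Oracle connect descriptor can carry an address list with failover, which the thin JDBC driver accepts directly after the "@"; a minimal sketch, with hypothetical hosts host1/host2 and service name mysvc:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleDescriptorSketch {
    public static void main(String[] args) throws Exception {
        // A full connect descriptor with a failover address pair, passed straight to the thin driver.
        String descriptor =
            "(DESCRIPTION="
          + "(ADDRESS_LIST=(FAILOVER=ON)"
          + "(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))"
          + "(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))"
          + "(CONNECT_DATA=(SERVICE_NAME=mysvc)))";

        String url = "jdbc:oracle:thin:@" + descriptor;

        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
            System.out.println("Connected via descriptor: " + conn.getMetaData().getURL());
        }
    }
}
```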
We need the ability to extend the Apache Kafka Connector with custom functionality. We are using Avro SerDe + Confluent Schema Registry, but the message payload is encrypted using in-house libraries. When reading events, first we need to decryp...
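One way this could work if custom SerDes were pluggable: a Kafka Deserializer that decrypts the raw bytes first and then delegates to Confluent's Avro deserializer. A minimal sketch; InHouseCrypto is a hypothetical stand-in for the proprietary decryption library:

```java
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;

// Decrypts the payload before handing it to the Confluent Avro deserializer.
public class DecryptingAvroDeserializer implements Deserializer<Object> {

    private final KafkaAvroDeserializer avroDeserializer = new KafkaAvroDeserializer();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // Passes schema.registry.url and related settings through to the Avro deserializer.
        avroDeserializer.configure(configs, isKey);
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        byte[] plaintext = InHouseCrypto.decrypt(data); // hypothetical in-house library call
        return avroDeserializer.deserialize(topic, plaintext);
    }

    @Override
    public void close() {
        avroDeserializer.close();
    }

    // Hypothetical placeholder so the sketch is self-contained; the real library would go here.
    static final class InHouseCrypto {
        static byte[] decrypt(byte[] ciphertext) {
            return ciphertext; // no-op stand-in
        }
    }
}
```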
Kafka Connector job failed when schema compatibility mode set to FULL
TS006561571: We have IIS 18.104.22.168 with SP1 installed on RedHat 7. Users are using the Kafka Connector stage to publish messages with an Avro schema. The job succeeds when Kafka schema compatibility is set to NONE, but it fails with the error "Schema being reg...
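For reference, the compatibility mode that triggers this check is set per subject in the Schema Registry; a minimal sketch using its REST API to read and change it, assuming a registry at localhost:8081 and a hypothetical subject named orders-value:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchemaCompatibilityCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String registry = "http://localhost:8081";   // assumed Schema Registry address
        String subject = "orders-value";             // hypothetical subject name

        // Read the current compatibility level for the subject (404 means it falls back to the global setting).
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create(registry + "/config/" + subject))
                .GET()
                .build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());

        // Set FULL compatibility: new schemas must be both backward and forward compatible.
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create(registry + "/config/" + subject))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"compatibility\": \"FULL\"}"))
                .build();
        System.out.println(client.send(put, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```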
In DataStage, wildcards in the Azure Data Lake Storage connector are only allowed in the filename, not in the filepath
In the Azure Data Lake Storage connector, DataStage can read multiple files at once by specifying wildcards. But these wildcards are only allowed in the filename, not in the filepath. This means that all the files we want to read have to be present...
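As a workaround sketch outside DataStage, one could list paths with the Azure Data Lake Storage Gen2 SDK and filter them with a glob that puts the wildcard in the directory part as well; the storage account, key, container, and pattern below are placeholders:

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeFileSystemClient;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
import com.azure.storage.file.datalake.models.ListPathsOptions;

public class AdlsPathGlobSketch {
    public static void main(String[] args) {
        // Placeholder account, key, and container (file system) names.
        DataLakeServiceClient service = new DataLakeServiceClientBuilder()
                .endpoint("https://myaccount.dfs.core.windows.net")
                .credential(new StorageSharedKeyCredential("myaccount", "account-key"))
                .buildClient();
        DataLakeFileSystemClient fileSystem = service.getFileSystemClient("landing");

        // Glob with a wildcard in the directory part, not just the file name.
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:input/2023-*/part-*.csv");

        fileSystem.listPaths(new ListPathsOptions().setRecursive(true), null)
                .forEach(item -> {
                    if (!item.isDirectory() && matcher.matches(Paths.get(item.getName()))) {
                        System.out.println("Matched: " + item.getName());
                    }
                });
    }
}
```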