Use this portal to open public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Shape the future of IBM!
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Search existing ideas
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Specific links you will want to bookmark for future use
DataStage - Azure [Datalake] Storage Connector - Support parallelism for the Parquet format
Through a PMR it has been confirmed that, for parallel read/write, only the CSV and Delimited file formats support parallel write operations; all other file formats do not support parallel read/write. The Parquet format does not support parallel read/write, and it is not document...
Teradata connector restricts the maximum response row size to 64K bytes
We have data sources where column field lengths exceed varchar(3000); using the Teradata connector in immediate mode, the job fails with the error "RDBMS code 9990: Response Row size exceeds 64K bytes and is incompatible with the Client softwa...
We need to write files into our company's Amazon S3 bucket with DataStage 184.108.40.206. We cannot configure the S3 Connector component with our specific endpoint, as we can with the "endpoint" parameter in the aws cli.
A USG customer uses Apache Accumulo as one of their main data repositories. They need CP4D to be able to use Accumulo as a source and target data source for Information Governance, Data Science, Virtualization, and AI functionality (all of CP4D).
Using InfoSphere Information Server 11.7.1 Service Pack 2, we noticed that we are unable to create a data connection for Amazon S3 in DataStage to a private endpoint (rather than the usual public Amazon endpoints). We are unable to read/write information...
Salesforce: select only the fields for the object; do not automatically bring back all fields in the child relationships
This is about the DataStage Salesforce Connector. When building the Salesforce Connector stage, object fields are imported from Salesforce using the "Browse Objects" function. A list of tables in Salesforce will be presented. One can click th...
We would like to use an Amazon S3 connector to crawl our MFA-enabled S3 instance and pull in data for use in the Watson CP4D environment, but the current Amazon S3 connector does not support connecting to MFA-enabled environments.
DataStage support for writing data to GCP GCS with CMEK
The DataStage (11.7.1) Google Cloud Storage (GCS) connector fails to write data into GCS because it does not support Customer-Managed Encryption Keys (CMEK). We at HSBC use CMEK in all components in GCP. If this were supported, we could use GCS with DataStage directly....
Temporal tables and time travel queries are powerful Db2 functionality that is not yet supported here, although it was introduced many years ago. The Join stage needs to be extended for Business-Time options like "FOR BUSINESS_TIME FROM '1990-0...
Do not place IBM confidential, company confidential, or personal information into any field.