We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted, and upvote the ones that matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help refining an idea, so they may ask you for more information or feedback. The product management team then decides whether they can begin working on your idea; if they can start during the next development cycle, they add it to the priority list. Each team at IBM works on its own schedule: some ideas can be implemented right away, while others are placed on a later schedule.
Receive notification on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to find alternatives for ideas that cannot be implemented in a reasonable time.
We need to write files into our company Amazon S3 bucket with DataStage 126.96.36.199. We cannot configure the S3 Connector stage with our specific endpoint, as we can with the "endpoint" parameter in the AWS CLI.
A shared access signature (SAS) provides a way to grant other clients limited access to objects in the Azure storage account, without exposing the account key. SAS keys are standard practice and with IS v.11.7 not being possible to use it, the i...
Functionality already exists
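For background on what a SAS is mechanically: it is a query string whose `sig` field is a base64 HMAC-SHA256 signature of a service-defined "string to sign", computed with the account key, so the key itself never leaves the issuer. A simplified stdlib sketch of the signing step (the exact field list and order in the string-to-sign vary by Azure service version, so this illustrates the mechanics only and is not a drop-in SAS generator):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_sas(account_key_b64: str, string_to_sign: str) -> str:
    """Base64 HMAC-SHA256 signature over the string-to-sign, using the
    (base64-encoded) storage account key."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Illustrative string-to-sign for a read-only blob SAS; field order follows
# the Azure Storage REST docs for a given service version (verify before use).
string_to_sign = "\n".join([
    "r",                                    # signed permissions: read
    "",                                     # signed start (optional)
    "2024-01-01T00:00:00Z",                 # signed expiry
    "/blob/myaccount/mycontainer/myblob",   # canonicalized resource (hypothetical)
    "", "", "", "2020-12-06", "b", "", "", "", "", "", "",
])

# Example key for illustration only.
sig = sign_sas(base64.b64encode(b"example-account-key").decode(), string_to_sign)
token = f"sv=2020-12-06&sr=b&sp=r&se=2024-01-01T00:00:00Z&sig={quote(sig)}"
```

A client appends `token` to the blob URL; the service recomputes the signature with the account key and grants only the permissions and lifetime encoded in the token.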
Allow File Connector to manage Hive tables properly when they are located in HDFS Transparent Data Encryption Zones
When the File Connector is configured to write files to an encrypted zone (TDE) within HDFS, with "Create Hive Table" set to Yes and "Drop existing table" set to Yes, jobs will fail if the table already exists. This is because Hive requires the PU...
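One known workaround for drops inside an encryption zone is Hive's `DROP TABLE ... PURGE`, which deletes the data directly instead of moving it to the HDFS trash (a move that cannot cross an encryption-zone boundary). A small sketch that builds the statement, with a hypothetical table name; the DDL itself would be run through beeline or your Hive client:

```python
# Hypothetical database/table inside an HDFS TDE zone.
table = "mydb.encrypted_table"

# PURGE skips the move to .Trash, which fails across encryption-zone
# boundaries; IF EXISTS keeps the job idempotent on re-runs.
drop_stmt = f"DROP TABLE IF EXISTS {table} PURGE"
print(drop_stmt)
```

Whether the File Connector could issue the drop this way internally is the substance of this request.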
DataStage - Azure [Datalake] Storage Connector - Support parallelism parquet format
From a PMR it has been confirmed that, for parallel read/write, only CSV and delimited formats support parallel write operations; all other file formats do not support parallel read/write. The Parquet format does not support parallel read/write. It is not document...
Using Infosphere Information Server 11.7.1 Service Pack 2, we noticed that we are unable to create a data connection for Amazon S3 in DataStage to a private endpoint (not the usual public Amazon endpoints). We are unable to read/write information...
Salesforce: select only the fields for the object; do not automatically bring back all fields in the child relationships.
This is about the DataStage Salesforce Connector. When building the Salesforce Connector stage by importing the object's fields from Salesforce using the "Browse Objects" function, a list of tables in Salesforce is presented. One can click th...
DataStage: support writing data to GCP GCS with CMEK
The DataStage (11.7.1) Google Cloud Storage (GCS) connector fails to write data into GCS because it does not support Customer-Managed Encryption Keys (CMEK). We at HSBC use CMEK in all components in GCP. If this were supported, we could use GCS with DataStage directly....
Do not place IBM confidential, company confidential, or personal information into any field.