This portal is for opening public enhancement requests against products and services offered by the IBM Data & AI organization. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
Request to easily integrate custom scripts into the managed, SaaS DataStage Runtime
Our company is using Cloud Pak for Data as a managed service, which means we don't have direct access to the OpenShift platform or any of the pods or servers that run the Cloud Pak services, like the DataStage server.
Occasionally with on-prem DataStage we'll build custom scripts in Python or bash that do pre- or post-processing on data in order to ingest it more effectively, or that perform the initial ingestion itself (for example, when using external APIs).
To give an example, in the past I have written scripts that invoke external APIs such as Google DoubleClick to pull data from an external source via the Execute Command stage in a Sequence job, land it in a temporary file on the DataStage ETL server, and then use those files as input to the normal DataStage ingestion process.
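A minimal sketch of that kind of pre-processing script, assuming a hypothetical JSON endpoint and illustrative paths (none of the names below are real DoubleClick or DataStage values):

```python
#!/usr/bin/env python3
"""Sketch: pull records from an external API and land them as a flat
file for a downstream DataStage job. The endpoint URL, field names,
and output path are placeholders, not real values."""
import csv
import json
import urllib.request


def land_records(records, out_path):
    """Write a list of dicts to a CSV staging file; return the row count."""
    if not records:
        return 0
    with open(out_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=sorted(records[0]))
        writer.writeheader()
        writer.writerows(records)
    return len(records)


def extract(url, out_path):
    """Fetch JSON records from the (placeholder) API and land them."""
    with urllib.request.urlopen(url) as resp:
        return land_records(json.load(resp), out_path)
```

In the on-prem setup, a Sequence job's Execute Command stage would invoke a script like this, and the landing file would feed the normal parallel ingestion job.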
The current process for getting a script into the cloud environment is to send it to our IBM support contact and have them upload it for us. The problem with this approach is that we have no real way of testing how the script will behave in the target system before sending it over, so the cycle looks like this:
IBM support uploads the script
We try to run it and see the error(s)
We tweak it again
We send it over again, and the process repeats
Some things we can test locally and might just require different file paths to work in that environment, but not being able to tweak the script in the actual environment is a bit like flying blind.
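One way to narrow the local-versus-managed file-path gap is to resolve paths from an environment variable, so the same script runs unchanged in both places. A sketch, where the variable name `STAGE_DATA_DIR` is purely illustrative and not a real DataStage setting:

```python
"""Sketch: resolve staging-file paths from an environment variable so
the same script works locally and in the managed runtime. The variable
name STAGE_DATA_DIR is an assumption, not a real DataStage setting."""
import os
from pathlib import Path


def staging_path(filename):
    """Build a staging-file path under an overridable base directory."""
    base = os.environ.get("STAGE_DATA_DIR", "/tmp")  # local-test default
    return Path(base) / filename
```

This only helps with path differences, of course; it does not substitute for being able to iterate on the script in the target environment itself.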
IBM mentioned they have this feature documented, but no roadmap for getting it implemented.