We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Post your ideas
Post ideas and requests to enhance a product or service. Take a look at ideas others have posted and upvote them if they matter to you.
Post an idea
Upvote ideas that matter most to you
Get feedback from the IBM team to refine your idea
Help IBM prioritize your ideas and requests
The IBM team may need your help to refine an idea, so they may ask you for more information or feedback. The product management team will then decide whether they can begin working on your idea. If they can start during the next development cycle, they will add the idea to the priority list. Each team at IBM works on a different schedule: some ideas can be implemented right away, while others may be placed on a later schedule.
Receive notification on the decision
Some ideas can be implemented at IBM, while others may not fit within the development plans for the product. In either case, the team will let you know as soon as possible. In some cases, we may be able to suggest alternatives for ideas that cannot be implemented in a reasonable time.
While scheduling the disposition sweep, ICN does not allow scheduling the sweep on a quarterly basis. It only provides the option to schedule the sweep hourly, daily, monthly, or yearly. The customer relies heavily on scheduling the sweep on a quarter...
Allow displaying Spark CPU and memory values on the Monitoring page
Currently, Spark runtimes do not register their pods with the runtimes framework on the Spark side. As a result, the Monitoring page shows only 1 vCPU and 1 GB RAM regardless of the size of the Spark environment chosen, since only the proxy pod on the Jupyter client sid...
The Spark job should use the mount_path attribute of the storage volume by default
The volumes.mount_path attribute in the Spark Jobs API is mandatory (https://www.ibm.com/docs/en/cloud-paks/cp-data/4.0?topic=jobs-spark-api-syntax-parameters-return-codes). This should be an optional attribute to make it consistent with the usage o...
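As a sketch of the payload in question, assuming a volumes entry per the Spark Jobs API documentation linked above (all values here are illustrative, not taken from the docs):

```json
{
  "application_details": {
    "application": "/myapp/job.py"
  },
  "volumes": [
    {
      "name": "my-storage-volume",
      "mount_path": "/mounts/data"
    }
  ]
}
```

If mount_path were optional as requested, omitting it could fall back to the mount path already defined on the storage volume itself.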
Python packages can currently be added using the approach described in https://www.ibm.com/docs/en/cloud-paks/cp-data/3.5.0?topic=packages-customizing-using-user-home-volume However, some Python packages use shared libraries that may not be instal...
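A common workaround for native dependencies, sketched under the assumption that the user-home volume is mounted at /home/spark/user-home (the paths and the package name are illustrative, not from the IBM documentation):

```shell
# Install a package that bundles native shared libraries into a
# directory on the user-home volume (illustrative path and package)
pip install --target=/home/spark/user-home/python-libs pyarrow

# Make the package importable and its bundled .so files discoverable
export PYTHONPATH=/home/spark/user-home/python-libs:$PYTHONPATH
export LD_LIBRARY_PATH=/home/spark/user-home/python-libs/pyarrow:$LD_LIBRARY_PATH
```

This only helps when the shared libraries ship inside the Python package; libraries expected at the system level would still be missing from the runtime image.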
Spark jobs will fail when the resource quota is insufficient
There are many Spark jobs to run; if a job starts while the resource quota is already at its maximum, the job will fail. You either have to wait for the job result to find out that the quota was insufficient, or check the quota manually every time you submit a job. For AlibabaClo...
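The failure mode described above could be avoided with a client-side pre-check before submission. A minimal sketch, assuming the quota limit, the current usage, and the job's request are known to the caller (the function and its parameters are hypothetical, not part of the Spark Jobs API):

```python
def has_capacity(quota_limit_cpu: int, quota_used_cpu: int, job_cpu: int) -> bool:
    """Return True if the job's CPU request fits within the remaining quota.

    Hypothetical helper: in practice the limit and usage would come from
    the platform's resource-quota API before the job is submitted.
    """
    return quota_used_cpu + job_cpu <= quota_limit_cpu


# Example: 16 vCPU quota, 12 in use, job needs 4 -> fits exactly
print(has_capacity(16, 12, 4))  # True
# Same quota, 14 in use -> would exceed the limit, so do not submit
print(has_capacity(16, 14, 4))  # False
```

Submitting only when the check passes turns a late runtime failure into an immediate, actionable rejection.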
This is my own research related to number theory. In general: a whole number, odd or even, can be expressed as the sum of a prime number and two Fibonacci numbers. Here are some examples: 52362 = 52127 (prime) + 2 (Fibonacci 1) + 233 (Fib...
Do not place IBM confidential, company confidential, or personal information into any field.