How do we reduce storage cost and improve performance by cleaning BPM tables?

Hello there,

I am working with a Sugar Cloud customer on version 13.0, and they are asking what the official way is to clean up all BPM SQL tables of old/orphaned records.

They already have a cleanup process for the pmse_inbox SQL table through the application tool.

How can we efficiently, effectively, and safely clean up SQL tables such as pmse_inbox, pmse_bpm_flow, and pmse_bpm_form_action, both on an ongoing basis and as a once-off?

In the pmse_bpm_form_action table alone, the customer has about 3M records.
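To size the once-off pass, a rough count of the eligible rows could look like the sketch below. This is only an illustration: the column names (cas_id, cas_status, date_modified), the status values, and the 12-month cutoff are assumptions based on the stock pmse_* schema, and direct SQL access may not even be available on a Cloud instance.

```sql
-- Rough sizing query (assumed schema): how many form-action rows belong to
-- process instances that finished more than 12 months ago?
SELECT COUNT(*)
  FROM pmse_bpm_form_action fa
  JOIN pmse_inbox i ON i.cas_id = fa.cas_id
 WHERE i.cas_status IN ('COMPLETED', 'TERMINATED', 'CANCELLED')
   AND i.date_modified < DATE_SUB(NOW(), INTERVAL 12 MONTH);
```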

Are there existing best practices, knowledge base articles, or tutorials on how to carry out this performance and cost-saving exercise correctly?

We would also like to avoid having this time out the schedulers and fail mid-way, especially during the initial cleanups.
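To illustrate the kind of batching we have in mind (a sketch only, not an official Sugar procedure; table and column names, status values, batch size, and the 12-month cutoff are all assumptions), each run would delete a bounded number of rows so that no single execution exceeds the scheduler's time limit:

```sql
-- Remove at most 5,000 flow rows per run for process instances that finished
-- more than 12 months ago; repeat on a schedule until the backlog is cleared.
DELETE FROM pmse_bpm_flow
 WHERE cas_id IN (
       SELECT cas_id
         FROM pmse_inbox
        WHERE cas_status IN ('COMPLETED', 'TERMINATED', 'CANCELLED')
          AND date_modified < DATE_SUB(NOW(), INTERVAL 12 MONTH))
 LIMIT 5000;
```

The same bounded-batch pattern would presumably apply to pmse_bpm_form_action, but we would prefer to follow whatever mechanism Sugar officially supports for Cloud instances.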

Thank you for your help!

Reply
  • Wow! I just looked, and the option to select "pmse_BpmFlow" isn't available if I try to create a new Archive. I did this one about a year ago.

    FYI: Chris is correct that the record of a previous process run that was scheduled to trigger on create or first update is lost if you delete the history. I have a couple of processes that trigger that way, and the good news is that it wasn't a problem for me, since those processes check for the presence of a value in a field that should have been populated when they triggered.

    Bud Hartley | Cape Foulwind, NZ (and Oregon, USA)
