How do we reduce storage cost and improve performance by cleaning BPM tables?

Hello there,

I am working with a Sugar Cloud customer on version 13.0, and they are asking for the official way to clean up all BPM SQL tables of old/orphaned records.

They already have a cleanup process for the pmse_inbox SQL table through the application tool.

How can we efficiently, effectively, and safely clean up SQL tables such as pmse_inbox, pmse_bpm_flow, and pmse_bpm_form_action, both on an ongoing basis and as a one-off?

In the pmse_bpm_form_action table alone, the customer has about 3M records.

Are there existing best practices, knowledge base articles, or tutorials on how to carry out this performance and cost-saving exercise correctly?

We would also like to avoid having this time out the schedulers and fail midway, especially for the initial cleanups.
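For the ongoing cleanups, a common way to avoid a long-running delete timing out midway is to purge in small batches, each committed in its own short transaction. This is only a conceptual sketch, not an official Sugar procedure: on Sugar Cloud you do not get direct SQL access, the column name (`date_entered`) is assumed, and SQLite stands in for the real database purely so the pattern is runnable:

```python
import sqlite3

BATCH_SIZE = 5000  # small batches keep each transaction short

def purge_old_rows(conn, table, cutoff, batch_size=BATCH_SIZE):
    """Delete rows with date_entered older than `cutoff`, in batches,
    so no single statement runs long enough to hit a scheduler timeout.
    Table/column names are illustrative, not an official Sugar API."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE date_entered < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()  # commit per batch: progress survives an interruption
        total += cur.rowcount
        if cur.rowcount < batch_size:  # last, partial batch: we're done
            break
    return total

# Demo against an in-memory database with 12 old rows and 3 recent ones
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pmse_bpm_form_action "
             "(id INTEGER PRIMARY KEY, date_entered TEXT)")
conn.executemany("INSERT INTO pmse_bpm_form_action (date_entered) VALUES (?)",
                 [("2020-06-01",)] * 12 + [("2099-01-01",)] * 3)
print(purge_old_rows(conn, "pmse_bpm_form_action", "2024-01-01",
                     batch_size=5))  # prints 12
```

On MySQL the same idea is usually written as `DELETE FROM … WHERE … LIMIT n` in a loop; on Sugar Cloud, where raw SQL is not available, the equivalent would have to go through the application (like the pmse_inbox cleanup the customer already runs) or a support request.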

Thank you for your help!

  • Hi Enrico,

    We had a serious problem with database growth, and I found the Data Archiver function helped reduce its size.


    Bud Hartley | Cape Foulwind, NZ (and Oregon, USA)

  • That helps if you can hard delete the records.

    The archiver has two options: move to an archive table (does nothing for DB size) or hard delete.

    It would be great if there were an option to archive to an off-site database, so that customers could keep a data warehouse of older information off of their main Sugar DB.

    Of course, this is less of an issue for those of us on-site, since disk space can be managed more easily when you have full control of the database.

    Francesca

  • Hey!

    I believe you *could*, in theory, do some of that, aside from deleting the archive table itself from Sugar, which you would need Sugar to do.

    It is not for the faint of heart, and it requires a lot of scripting plus a server to run the transformation for the "integration".

    The archived data lands in a new table (*_archive). You could then request a Sugar backup via the API, download it, extract its contents programmatically, load the SQL dump into a temporary database, and finally push the _archive tables to the off-site archival destination.

    But then you would need someone to delete the archived table for you...
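    The "extract its contents programmatically" step of the pipeline above could be sketched roughly as follows. Everything here is an assumption rather than a documented Sugar format: the dump file name, that the backup is a plain (optionally gzipped) SQL dump with backtick-quoted table names, and the naive semicolon-based statement splitting (which would break on semicolons inside string literals):

    ```python
    import gzip
    import re

    ARCHIVE_RE = re.compile(r"`(\w+_archive)`")  # backtick-quoted *_archive names

    def extract_archive_statements(dump_path):
        """Scan a (possibly gzipped) SQL dump and keep only the statements
        that touch *_archive tables, so just those can be replayed into
        the off-site archive database."""
        opener = gzip.open if dump_path.endswith(".gz") else open
        kept, statement = [], []
        with opener(dump_path, "rt", encoding="utf-8", errors="replace") as fh:
            for line in fh:
                statement.append(line)
                if line.rstrip().endswith(";"):  # naive statement boundary
                    stmt = "".join(statement)
                    statement = []
                    if ARCHIVE_RE.search(stmt) and stmt.lstrip().startswith(
                        ("CREATE TABLE", "INSERT INTO", "DROP TABLE")
                    ):
                        kept.append(stmt)
        return kept
    ```

    The filtered statements could then be piped into the temporary database (or straight to the off-site one), skipping the non-archive tables entirely.
    
    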

    --

    Enrico Simonetti

    Sugar veteran (from 2007)

    www.naonis.tech


    Feel free to reach out for consulting regarding:

    • API Integration and Automation Services
    • Sugar Architecture
    • Sugar Performance Optimisation
    • Sugar Consulting, Best Practices and Technical Training
    • AWS and Sugar Technical Help
    • CTO-as-a-service
    • Solutions-as-a-service
    • and more!

    All active SugarCRM certifications

    Actively working remotely with customers based in APAC and in the United States
