How can I share the code across 3 servers?

Hello!

The problem I have right now is at the infrastructure level.
I'm hosting a SugarCRM application on AWS and I need to replicate the code across the 3 servers of an Auto Scaling group.
The options I'm working with are EFS (Elastic File System) and S3 (Simple Storage Service).

In my first test, I shared the full project (about 1.2 GB) and Apache read the code really slowly.
In my second test, I shared only the custom folder. That works well, but when I run a Quick Repair and Rebuild it shows errors related to MySQL (which runs on RDS).

At the moment, the only solution I have found to replicate the code across that Auto Scaling group (3 servers) is to mount an S3 bucket in another location and then copy the code locally, from the S3 bucket's path to the custom folder.
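For illustration only: rather than mounting the bucket, the same local copy could be scripted on each instance with the AWS CLI. The bucket name and document root below are purely hypothetical:

# Pull the shared custom/ code from the S3 bucket into the local web root
aws s3 sync s3://my-sugarcrm-code/custom /var/www/sugarcrm/custom --delete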

What I need is a solution or recommendation (or to know whether this kind of process is even possible).

Regards!

  • In the end we stayed on AWS EFS, with several tweaks: we provisioned an extra 30 MB/s of throughput (a CLI sketch follows below) and added content-cache settings on the web server.
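
    For reference, a sketch of how that extra throughput could be provisioned with the AWS CLI (the file system ID below is a placeholder, not from the original setup):

    # Assumption: switch the EFS file system to provisioned throughput mode at 30 MiB/s
    aws efs update-file-system \
        --file-system-id fs-0123456789abcdef0 \
        --throughput-mode provisioned \
        --provisioned-throughput-in-mibps 30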

    Copying the data to the EFS volume without any tuning, it took this long:
    Project size: du -sh PROJECT/
    348M PROJECT/

    rsync copy time:
    Server Start Time: Tue Nov ## 23:08:06 UTC 2020
    Server End Time: Tue Nov ## 23:30:40 UTC 2020
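
    The exact rsync invocation is not shown above; assuming the EFS volume is mounted at /mnt/efs, the copy would typically be something like:

    # Copy the codebase onto the EFS mount, preserving permissions and timestamps
    rsync -a PROJECT/ /mnt/efs/PROJECT/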

    Apache mods:
    a2enmod cache
    a2enmod file_cache
    a2enmod cache_disk

    /etc/apache2/mods-available/cache_disk.conf

    <IfModule mod_cache_disk.c>
    CacheRoot /var/cache/apache2/mod_cache_disk
    CacheEnable disk /
    CacheDirLevels 2
    CacheDirLength 1
    </IfModule>
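
    After enabling the modules and editing that file, Apache has to be reloaded for the disk cache to take effect; on a Debian/Ubuntu layout like the one above, that would typically be:

    # Sanity-check the configuration, then reload Apache to activate mod_cache_disk
    apachectl configtest && systemctl reload apache2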

    opcache (php.ini):
    opcache.enable=1
    opcache.enable_cli=1
    ; opcode cache size, in MB
    opcache.memory_consumption=1024
    opcache.interned_strings_buffer=32
    ; maximum number of cached scripts (note: a Sugar codebase contains well over 20,000 PHP files)
    opcache.max_accelerated_files=4000
    opcache.fast_shutdown=0
    opcache.max_file_size=0
    ; seconds between timestamp checks; 0 = revalidate on every request
    opcache.revalidate_freq=0
    ; second-level file cache for compiled scripts
    opcache.file_cache=/var/www/YOUR_PATH/
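
    To confirm those OPcache settings are actually in effect (opcache.enable_cli=1 above makes them visible on the CLI as well), something like this can be used:

    # Print the effective OPcache settings as PHP sees them
    php -i | grep -i '^opcache'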

    We also use ElastiCache (Redis) as Sugar's cache backend (set in config_override.php, instead of the local cache/ folder) and to store PHP sessions (set in the PHP configuration).

    $sugar_config['cache']['backend'] = 'Sugarcrm\\Sugarcrm\\Cache\\Backend\\Redis';
    $sugar_config['external_cache_force_backend'] = 'redis';
    $sugar_config['external_cache']['redis']['host'] = 'redis.##.cache.amazonaws.com';
    $sugar_config['external_cache']['redis']['port'] = '6379';


    And in php.ini for the PHP sessions:
    session.save_handler = rediscluster
    session.save_path = seed[]=cluster.redis.cache.amazonaws.com:6379
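
    As a quick sanity check that the web servers can reach the ElastiCache endpoint (hostname reused from the placeholder above):

    # Should reply with PONG if the Redis endpoint is reachable
    redis-cli -h redis.##.cache.amazonaws.com -p 6379 ping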

    It is still a little slow, but these settings make a noticeable difference; it is now usable.

  • Hi,

    The problem, in general, is the low throughput EFS provides for many small files. We still find it much more performant to roll your own redundant NFS service.

    You can verify throughput (both read and write) with a CLI tool called "toothpaste", which performs many small PHP file operations, similar to what Sugar does during a repair. Run it from a server that mounts the EFS volume, against that instance's codebase.

    I specifically built this tool to help me immediately determine disk/network throughput problems for on-site Sugar installations and it has been really useful over the years.

    The tool can be found/installed with composer by following these instructions: https://packagist.org/packages/esimonetti/toothpaste 
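
    For example, installing it into a scratch directory on the server would typically look like this (a sketch based on the Packagist page above):

    mkdir toothpaste && cd toothpaste
    composer require esimonetti/toothpaste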

    The specific command you are looking for in this case is:

    ./vendor/bin/toothpaste local:analysis:fsbenchmark --instance /path/to/instance

    The output of the command will look something like this:

    ./vendor/bin/toothpaste local:analysis:fsbenchmark --instance ./sugar/SugarEnt-Full-10.3.0/
    Toothpaste v0.2.1
    Executing benchmark on the file system...
    Entering ./sugar/SugarEnt-Full-10.3.0/...
    Performing file system reading benchmark through PHP
    The Sugar system contains 21,650 PHP files and the script will test 43,300 files
    ...........................................
    File system reading benchmark through PHP completed
    Processed 43,300 files. Loaded their content of 289,094.87 KB. Read speed benchmark completed in xxx seconds.
    
    Read speed: xxxxxx KB/s
    
    Indicative comparison data:
    Excellent - Above 100,000 KB/s
    Good - Between 20,000 KB/s and 99,999 KB/s
    Minimum acceptable - Between 5,000 and 19,999 KB/s
    Needs attention - Less than 5,000 KB/s
    
    Performing file system writing benchmark through PHP
    Benchmarking file system write performance by writing and immediately deleting 43300 files.
    The benchmark process might take some time, please wait...
    ...........................................
    File system writing benchmark through PHP completed
    Processed 43,300 files. Loaded their content of 290,245.31 KB. Write speed benchmark completed in xxx seconds.
    
    Write speed: xxxxx KB/s
    
    Indicative comparison data:
    Excellent - Above 60,000 KB/s
    Good - Between 5,000 KB/s and 59,999 KB/s
    Minimum acceptable (Especially for NFS storage. If the infrastructure does not use NFS, it needs attention already) - Between 1,000 and 4,999 KB/s
    Needs attention - Less than 1,000 KB/s
    
    Execution completed in xxx seconds.
    

    This should help you empirically measure your file system performance with PHP and the Sugar codebase.

    Hope it helps

    --

    Enrico Simonetti

    Sugar veteran (from 2007)

    www.naonis.tech


    Feel free to reach out for consulting regarding:

    • API Integration and Automation Services
    • Sugar Architecture
    • Sugar Performance Optimisation
    • Sugar Consulting, Best Practices and Technical Training
    • AWS and Sugar Technical Help
    • CTO-as-a-service
    • Solutions-as-a-service
    • and more!

    All active SugarCRM certifications

    Actively working remotely with customers based in APAC and in the United States

