Redis/Memcache and file cache

In the advanced Sugar configuration options there is the ability to point the "external cache" at Redis or Memcached. What is actually being redirected? This doesn't replace, or even seem to augment, the local file cache.
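For reference, this is roughly the setting I mean, in config_override.php. A minimal sketch assuming the memcache backend - the key names are taken from my reading of the SugarCacheMemcache backend and may differ between Sugar versions, so treat them as placeholders:

```php
<?php
// config_override.php - sketch only; key names assumed from the
// SugarCacheMemcache backend and may vary by Sugar version.
$sugar_config['external_cache_disabled'] = false;           // enable the external cache
$sugar_config['external_cache_disabled_memcache'] = false;  // allow the memcache backend
$sugar_config['external_cache']['memcache']['host'] = 'my-memcached-endpoint'; // placeholder
$sugar_config['external_cache']['memcache']['port'] = 11211;
```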

How is this intended to be handled when running in a horizontally scaled environment? Is the ./cache/ directory intended to be shared between every front-end node, and if so, is it possible to put it into a store like Redis or Memcached - or to offload it entirely?

Or is it safe for each web node to maintain its own local copy of the cache directory?

Our intended deployment uses CodeDeploy on AWS with Auto Scaling groups behind ELBs. Right now we're trying to use EFS mounted to the ./cache/ dir on all web nodes, but the overhead of EFS transactions is egregious for this task.

Is there a better way to handle this?

Parents
  • I'm also interested and haven't found an answer anywhere.


    On AWS I'm using EFS for shared storage between EC2 instances, specifically for storing uploads. I'm using ElastiCache with Redis for PHP session handling (sketched at the end of this post) and RDS for the database.

    The part I'm struggling with is what to do with the cache. I've tried putting it on EFS so it's shared between EC2 instances, but I think the performance of EFS is not really sufficient for this (a Quick Repair and Rebuild takes about 8 times as long as it does with the cache files stored locally).

    I've also had random issues when trying not to share the cache between instances, so I'm not sure how to approach this problem.
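    For reference, the session handling above is just the phpredis session handler pointed at the ElastiCache endpoint. A minimal sketch (the endpoint name is a placeholder, and the same two values can equally go in php.ini):

    ```php
    <?php
    // Requires the phpredis extension; the endpoint below is a placeholder.
    ini_set('session.save_handler', 'redis');
    ini_set('session.save_path', 'tcp://my-elasticache-endpoint:6379');
    session_start(); // sessions are now stored in Redis and shared across EC2 instances
    ```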

  • Hi Thomas, 

    Since I've posted this I've been playing with a deployment in AWS using ECS and Docker (thanks to this: https://github.com/esimonetti/SugarDockerized).

    I did something similar to what you did: I'm using EFS for cache, sessions and uploads, but I created separate EFS mount points, and for the cache EFS I had to use provisioned mode with 3 MB/s throughput.

    It works mostly OK, but the first time it builds the cache from scratch it can take a while. I also dislike the EFS performance, but AFAIK there are no better alternatives in AWS.

    My idea was to never run a repair+rebuild in this setup, but instead to run it on a developer machine and push all the resulting changes to Git (from which we would then deploy). This is just an idea and I still have to experiment to see if it works.
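    To make that idea concrete, below is a rough sketch of the kind of command-line repair script I have in mind, run from the Sugar root on the build machine. The class and method come from the stock QuickRepairAndRebuild module, and the admin user id is an assumption, so both should be checked against your Sugar version:

    ```php
    <?php
    // repair.php - sketch of a CLI Quick Repair and Rebuild (untested).
    if (!defined('sugarEntry')) {
        define('sugarEntry', true);
    }
    require_once 'include/entryPoint.php';
    require_once 'modules/Administration/QuickRepairAndRebuild.php';

    // Repairs need an admin context; user id '1' is assumed to be the admin record.
    global $current_user;
    $current_user = BeanFactory::getBean('Users', '1');

    $repair = new RepairAndClearCache();
    // Clear all caches and rebuild all modules, executing any generated queries automatically.
    $repair->repairAndClearAll(array('clearAll'), array(translate('LBL_ALL_MODULES')), true, false);
    ```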


Children
  • Hi Rafael,

    Glad to hear there's someone working on something similar - thanks for sharing your experience! I find there are very few documented cases of how others have built these setups.

    Thanks for the dockerized setup link, I'll check it out.

    Yeah, so far everything seems fine using EFS apart from the initial repair and rebuild, which is slow; other than that I haven't seen any impact on performance. BUT this is only in acceptance at the moment, so it isn't under the kind of load it will be under in production - that's my fear.

    I've also tried building the cache on one instance and using rsync to sync it to the other instances, but that also creates issues for me.

    I think for now I'm going to proceed on the basis of using EFS for the cache - I'll let you know how it goes in production!