Redis/Memcache and file cache

In the advanced Sugar configuration options there is the ability to point the "external cache" at Redis or Memcached. What is actually being redirected? It doesn't replace, or even seem to augment, the local file cache.
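
For concreteness, this is the kind of configuration I mean (a sketch only; the key names are what I understand the SugarCache backends to read, and the host is a hypothetical ElastiCache endpoint, so verify against the SugarCache classes shipped with your version):

    // config_override.php -- hedged sketch; key names per my reading of the
    // SugarCache backends, hostname is a hypothetical ElastiCache endpoint.
    $sugar_config['external_cache_disabled'] = false;          // turn the external cache on
    $sugar_config['external_cache_disabled_memcache'] = false; // allow the Memcache backend
    $sugar_config['external_cache']['memcache']['host'] = 'my-cluster.cache.amazonaws.com';
    $sugar_config['external_cache']['memcache']['port'] = 11211;

Even with this in place, the ./cache/ directory on disk still fills up, hence the question.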

How is this intended to be handled when running in a horizontally scaled environment? Is the ./cache/ directory meant to be shared between every front-end node, and if so, is it possible to put it into a store like Redis or Memcache, or to offload it entirely?

Or is it safe to have each web node maintain its own local copy of the cache directory?

Our intended deployment uses CodeDeploy on AWS with Auto Scaling groups behind ELBs. Right now we're mounting EFS at the ./cache/ dir on all web nodes, but the overhead of EFS transactions is egregious for this task.

Is there a better way to handle this?
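
For reference, the on-disk location is at least relocatable via config, so one option we're weighing is pointing it at fast instance-local storage on each node rather than EFS (a sketch; the path is hypothetical, and whether per-node caches stay consistent is exactly the open question above):

    // config_override.php -- cache_dir defaults to 'cache/' relative to the Sugar root.
    // Pointing it at instance-local storage (hypothetical path) avoids EFS round-trips
    // for cache I/O, but every node then rebuilds and maintains its own copy.
    $sugar_config['cache_dir'] = '/mnt/instance-local/sugarcrm-cache/';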


Reply
  • Also interested, and haven't found an answer anywhere.

    On AWS I'm using EFS for shared storage between EC2 instances, specifically for storing uploads; I'm using ElastiCache with Redis for PHP session handling and RDS for the database.

    The part I'm struggling with is what to do with the cache. I've tried putting it on EFS so it's shared between EC2 instances, but the performance of EFS is not really sufficient for this (Quick Repair and Rebuild takes about 8 times as long as with cache files stored locally).

    I've had random issues when not sharing the cache between instances, so I'm not sure how to approach this problem.

Children
  • Hi Thomas, 

    Since I posted this, I've been playing with a deployment in AWS using ECS and Docker (thanks to this: https://github.com/esimonetti/SugarDockerized).

    I did something similar to what you did: I'm using EFS for cache, sessions, and uploads, but I created separate EFS mount points, and for the cache EFS I had to use provisioned mode with 3 MB/s throughput.

    It works mostly OK, but the first build of the cache from scratch can take a while. I also dislike the EFS performance, but AFAIK there are no good alternatives in AWS.

    My idea was to never run a Quick Repair and Rebuild in this setup, but instead to run it on a developer machine and push all changes to Git (from which we would then deploy). This is just an idea and I still have to experiment to see if it works.

  • Redis and Memcache can be used for the application cache (i.e. BeanFactory), which gives a huge performance gain over using disk/DB, but you're right, it's not used for metadata/languages/Smarty/etc. (see the sketch below for what does go through it).

    We're also looking to move from EC2 to ECS/CodeDeploy, but it sounds like mounting the cache folder is the only option. My first thought was to just deploy the same code to both instances (web and cron) and let each of them manage its own cache folder, but my fear is that they'll have conflicts when it comes to the metadata_cache hashes.
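
    To illustrate the distinction, this is roughly the access pattern that does go through the external cache, as I understand the SugarCache key/value helpers (a sketch; the key, the recompute function, and the miss semantics are assumptions to verify against include/SugarCache/SugarCache.php in your version):

        // With an external backend configured, these helpers hit Redis/Memcache
        // instead of the local disk. Key and value here are hypothetical.
        $result = sugar_cache_retrieve('my_expensive_lookup');
        if ($result === null) {
            // Cache miss: recompute and store for the next request.
            $result = computeExpensiveLookup(); // hypothetical helper
            sugar_cache_put('my_expensive_lookup', $result);
        }

    The metadata/language/Smarty artifacts, by contrast, are written as files under ./cache/ and included directly, which as far as I can tell is why they never pass through this layer.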