Question - Is Redis faster than the file cache?

@james.sharpe raised a good question we had never tested: the idea is that instead of using the “file” cache we use a remote server, like Redis.

We launched a Redis server using ElastiCache on the same subnet as the EC2 instance where we run fat.eramba.org and measured the following page-load times:

This feels as expected: you are adding a network hop to access Redis versus a lookup in the local file cache, which is probably already in RAM (in the OS page cache). I’d be interested to see how it performs with Redis running on the node itself.
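The network-hop point above can be made concrete with a rough micro-benchmark. This is a hedged sketch, not a measurement of Redis itself: a local TCP echo server stands in for a networked cache, and a small temp file stands in for the file cache, so the absolute numbers are illustrative only.

```python
import os
import socket
import tempfile
import threading
import time

def echo_server(sock):
    """Accept one connection and echo bytes back, standing in for a cache server."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

# Local "file cache": a small cached value on disk (likely served from page cache).
fd, path = tempfile.mkstemp()
os.write(fd, b"cached-value")
os.close(fd)

# Minimal TCP echo server simulating a networked cache like Redis.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(srv.getsockname())

N = 1000

# Time N reads of the cached file.
t0 = time.perf_counter()
for _ in range(N):
    with open(path, "rb") as f:
        f.read()
file_us = (time.perf_counter() - t0) / N * 1e6

# Time N request/response round trips over the socket.
t0 = time.perf_counter()
for _ in range(N):
    cli.sendall(b"GET key\n")
    cli.recv(64)
net_us = (time.perf_counter() - t0) / N * 1e6

print(f"file read: {file_us:.1f} us/op, tcp round trip: {net_us:.1f} us/op")
cli.close()
os.remove(path)
```

Even over loopback, each lookup pays a full request/response round trip; against an ElastiCache endpoint in the same subnet, that cost includes real network latency as well.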
I’m not that familiar with Cake, but I am familiar with distributed systems, and I think the reason you’d want Redis over the file cache is scalability in the number of app-server instances. If each app server has a local file cache, you will run into consistency issues where multiple servers hold different disk caches, whereas with Redis all servers read from the same cache. It also helps with cache invalidation: all app servers can invalidate entries in a consistent manner, so you don’t get different views depending on which app server you hit.
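For the shared-cache setup described above, switching CakePHP from the file engine to Redis is a configuration change. A hedged sketch (engine class names follow CakePHP’s cache API; the host, prefix, and duration values here are placeholder assumptions, not eramba’s actual settings):

```php
// config/app.php — point the default cache at a shared Redis instance
// instead of the per-server FileEngine.
'Cache' => [
    'default' => [
        'className' => \Cake\Cache\Engine\RedisEngine::class,
        'host' => '127.0.0.1',   // or the ElastiCache endpoint (assumption)
        'port' => 6379,
        'prefix' => 'eramba_',   // hypothetical key prefix
        'duration' => '+1 hours',
    ],
],
```

With every Apache node pointed at the same Redis endpoint, a `Cache::delete()` or `Cache::clear()` on one node is immediately visible to all of them, which is the invalidation consistency discussed above.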

Yes, we did a similar test in the past with Memcached; it was a tiny bit faster. I think the Redis setup you brought up will help when we run multiple Apache instances, because they must all talk to the same cache source, otherwise kaput.

OK, yes, I have now read this - same as I was thinking, haha!