Amazon is now offering the ability to add distributed in-memory caching to your applications deployed in the EC2 cloud.
This looks like a drop-in replacement for the very popular open-source cache implementation memcached (the service is memcached protocol compatible). It is quite interesting how Amazon leverages the in-house statistics it collects on applications deployed on its infrastructure to decide which new services to offer – caching being a case in point, as are many others like the RDS service. They continue their march up the stack – into applications and PaaS. The boundaries between PaaS and IaaS vendors are surely blurring.
Some interesting points:
- Bringing management, monitoring and automation to caching as well – via a new cache management AWS API. I am sure that (if they have not already done so) they will include support for this in CloudFormation templates
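If CloudFormation support does arrive, a cache cluster declared in a template might look something like the sketch below. The resource type and property names here are assumptions about how AWS would model it, and the node type and count are placeholder values:

```json
{
  "Resources": {
    "DemoCacheCluster": {
      "Type": "AWS::ElastiCache::CacheCluster",
      "Properties": {
        "Engine": "memcached",
        "CacheNodeType": "cache.m1.small",
        "NumCacheNodes": "2"
      }
    }
  }
}
```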
- Interesting security angle – security groups for controlling access to the cache. This seems to be a clear value-add over vanilla memcached
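The workflow is presumably similar to RDS security groups: create a cache security group, then authorize an EC2 security group to reach it. A hedged sketch of what that might look like with the AWS command line – the group names and account ID are placeholders, and the exact commands should be checked against the ElastiCache documentation:

```shell
# Create a security group for the cache cluster (names are illustrative)
aws elasticache create-cache-security-group \
    --cache-security-group-name my-cache-sg \
    --description "Access to the demo cache cluster"

# Allow instances in an existing EC2 security group to reach the cache
aws elasticache authorize-cache-security-group-ingress \
    --cache-security-group-name my-cache-sg \
    --ec2-security-group-name my-app-sg \
    --ec2-security-group-owner-id 123456789012
```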
- They support sharding of keys across cluster nodes – in order to resize your cluster dynamically, the client library should use a consistent hash function; this is pretty standard in the memcached world as well
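To make the point concrete, here is a minimal sketch of consistent hashing in Python (class and node names are my own, not part of any AWS or memcached client library). The idea is that each node owns many positions on a hash ring, a key is served by the next position clockwise from its hash, and adding a node therefore remaps only the keys adjacent to its new positions rather than rehashing everything:

```python
import bisect
import hashlib


class ConsistentHashRing:
    """Minimal consistent-hash ring: adding or removing a node only
    remaps the keys that fall next to its positions on the ring."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas  # virtual nodes per physical node
        self._ring = {}           # hash position -> node name
        self._sorted = []         # sorted hash positions
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            pos = self._hash(f"{node}:{i}")
            self._ring[pos] = node
            bisect.insort(self._sorted, pos)

    def remove_node(self, node):
        for i in range(self.replicas):
            pos = self._hash(f"{node}:{i}")
            del self._ring[pos]
            self._sorted.remove(pos)

    def get_node(self, key):
        """Map a key to the node owning the next ring position clockwise."""
        if not self._ring:
            raise KeyError("empty ring")
        idx = bisect.bisect(self._sorted, self._hash(key)) % len(self._sorted)
        return self._ring[self._sorted[idx]]
```

With a naive `hash(key) % num_nodes` scheme, resizing from 3 to 4 nodes would remap roughly three-quarters of all keys (a cache-miss storm); with the ring above, only around a quarter of keys move.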
- Lots of interesting cache metrics will be available from CloudWatch – which makes it easy to integrate with the AWS Auto Scaling service – e.g. if a cache cluster is running at more than 85% memory utilization, it's time to launch another node as part of the cluster
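The scale-out rule above reduces to a one-line check. A sketch, assuming the memory figures would be fed from CloudWatch metrics (the metric name in the comment is an assumption for illustration, and the 85% threshold is just the example figure from the bullet point):

```python
MEMORY_SCALE_OUT_THRESHOLD = 0.85  # example threshold; tune per workload


def should_add_node(bytes_used, max_bytes,
                    threshold=MEMORY_SCALE_OUT_THRESHOLD):
    """Return True when memory utilization exceeds the threshold.

    In practice bytes_used would come from a CloudWatch cache metric
    (e.g. something like BytesUsedForCacheItems -- name assumed here),
    and the scaling action would add a node to the cache cluster.
    """
    return (bytes_used / max_bytes) > threshold
```

Usage: `should_add_node(900_000_000, 1_000_000_000)` is `True` at 90% utilization, while 80% utilization leaves the cluster as-is.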
- They refer to a "caching engine" – hopefully they will add support for other caching engine implementations, such as a more powerful key-value store like Redis (far more feature-rich compared to memcached)