Sunday, June 14, 2015

Intro to Memcache

Memcached is a high-performance, distributed, in-memory object caching system that also serves as a key-value data store. Its main advantage over data stores that sit on disk is speed, since everything lives in memory. The data model is simple: you put and get values by key, and anything that can be serialized can be stored in Memcache.

Memcache is commonly used to store data such as:

  • Data store query results
  • API calls
  • User authentication tokens and session data
  • Other computation results

Why Memcache?

So why Memcache? It's easy enough to use the data store to share data across instances, or even to cache the results of API calls there. Simply put, Memcache makes your app more performant at a much lower cost. Anytime you hit the data store, you pay the cost of executing the query and the CPU time tied to it, and queries are computationally expensive.

Rather than having your applications access the data store every time they need to fetch data, they can access the data through Memcache, in memory, by key. Memcache lets you store the result of a computation against the data store, offloading that processing and latency so your users aren't bogged down. Let's take a look at a general Memcache usage pattern, below:

How Memcache Coordinates Reads with the Data Store

  1. Check if the Memcache value exists
  2. If it does, use the cached value directly
  3. Otherwise, fetch the value from the data store and write it to Memcache, since it will most likely be needed again soon (see the sketch below)
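As a minimal sketch, here's what that read path can look like with the App Engine low-level Memcache Java API. The User type, the key scheme, and the loadUserFromDatastore helper are hypothetical stand-ins for your own entities and queries.

    import java.io.Serializable;

    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class UserCache {
        // Values must be serializable to go into Memcache.
        static class User implements Serializable {
            long id;
            User(long id) { this.id = id; }
        }

        private static final MemcacheService cache =
                MemcacheServiceFactory.getMemcacheService();

        public static User getUser(long userId) {
            String key = "user:" + userId;          // hypothetical key scheme
            User user = (User) cache.get(key);      // 1. check if the value exists
            if (user != null) {
                return user;                        // 2. hit: use the cached value
            }
            user = loadUserFromDatastore(userId);   // 3. miss: fetch from the data store...
            cache.put(key, user);                   //    ...and cache it for the next read
            return user;
        }

        // Stand-in for your real data store query.
        private static User loadUserFromDatastore(long userId) {
            return new User(userId);
        }
    }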

How Memcache Coordinates Writes with the Data Store

  1. Invalidate the Memcache value for this specific entry, or flush the entire cache
  2. Write the value to the data store
  3. Optionally update the Memcache entry (see the sketch below)
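Continuing the sketch above, the write side might look like this, with saveUserToDatastore again standing in for your real persistence call:

    public static void updateUser(User user) {
        String key = "user:" + user.id;
        cache.delete(key);             // 1. invalidate the stale cache entry first
        saveUserToDatastore(user);     // 2. write the new value to the data store
        cache.put(key, user);          // 3. optionally re-prime the cache
    }

    // Stand-in for your real data store write.
    private static void saveUserToDatastore(User user) { }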

Important Memcache Features

Atomic Operations

Memcache can also be used as a simple optimistic, distributed locking mechanism. Say you want to update a value, but are concerned about others updating it at the same time. You can first call getIdentifiable, which returns an identifiable object containing both the value you need and a version stamp for that value. Then, rather than calling put later on, you call putIfUntouched and pass it the identifiable object as a parameter. Memcache will only perform the update if the value hasn't changed since your call to getIdentifiable.
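With the App Engine Java API, that optimistic loop looks roughly like the following; the key name and retry count are made-up examples.

    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheService.IdentifiableValue;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class OptimisticUpdate {
        public static void appendVisitor(String visitor) {
            MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
            for (int attempt = 0; attempt < 5; attempt++) {
                // Fetch the current value along with its version stamp.
                IdentifiableValue current = cache.getIdentifiable("recentVisitors");
                if (current == null) {
                    // No entry yet; add it only if nobody else beats us to it.
                    if (cache.put("recentVisitors", visitor, null,
                            MemcacheService.SetPolicy.ADD_ONLY_IF_NOT_PRESENT)) {
                        return;
                    }
                    continue;  // someone else created it; retry
                }
                String updated = current.getValue() + "," + visitor;
                // Succeeds only if the value is untouched since getIdentifiable.
                if (cache.putIfUntouched("recentVisitors", current, updated)) {
                    return;
                }
                // The value changed under us; loop and try again.
            }
        }
    }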

When dealing with counters, it's best to use increment and incrementAll.
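For example, a hit counter needs no read-modify-write cycle at all, since the increment happens atomically on the server (the key name is again illustrative):

    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class Counter {
        public static long countPageView() {
            MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
            // The third argument seeds the counter if the key is absent.
            return cache.increment("pageViews", 1, 0L);
        }
    }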

Memcache Batch Operations

Batch operations are another way to further improve performance. Rather than making hundreds of network calls to read hundreds of objects, you can avoid the network overhead by batching them into one call. The main limitation, however, is that the combined size of the data in a batched call must not exceed 32 megabytes[1].
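A minimal sketch with the same Java API; the keys are illustrative. Each of the two calls below is a single round trip regardless of how many entries it carries.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class BatchExample {
        public static void main(String[] args) {
            MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

            // One network call to write three entries...
            Map<Object, Object> values = new HashMap<>();
            values.put("user:1", "alice");
            values.put("user:2", "bob");
            values.put("user:3", "carol");
            cache.putAll(values);

            // ...and one call to read them all back; keys that miss are
            // simply absent from the result map.
            List<Object> keys = new ArrayList<>(values.keySet());
            Map<Object, Object> cached = cache.getAll(keys);
            System.out.println(cached.size() + " entries fetched in one call");
        }
    }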

Asynchronous Calls

Because Memcache is shared across your applications, any one of them could end up overloading the server. For applications that are sensitive to latency, you can call the asynchronous version of the service so the application isn't blocked while making the API call. When asynchrony is no longer required, you can switch back to the synchronous API.
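On App Engine, the asynchronous service returns a Future for every call; here's a rough sketch, with a made-up key:

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;

    import com.google.appengine.api.memcache.AsyncMemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class AsyncExample {
        public static void main(String[] args)
                throws ExecutionException, InterruptedException {
            AsyncMemcacheService cache =
                    MemcacheServiceFactory.getAsyncMemcacheService();

            // Returns immediately; the write proceeds in the background.
            Future<Void> putDone = cache.put("greeting", "hello");

            // ... do other latency-sensitive work here ...

            // Block only when you actually need the result.
            putDone.get();
            Future<Object> value = cache.get("greeting");
            System.out.println(value.get());
        }
    }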

Caveats

Memcache is Volatile

Compared to a traditional data store, Memcache is volatile: there's no guarantee that data you put into Memcache will still be accessible later on, because the entry may have expired, or Memcache may have reached capacity and evicted old data. Your data also gets wiped if Memcache crashes. It's important that your application always has the data store to fall back on, relying on Memcache only to improve performance.

If you really want persistent behavior, you can implement write-through logic backing Memcache with a data store, or use a library such as Objectify (Java) or NDB (Python) that does this for you.
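With Objectify, for instance, a single annotation opts an entity into Memcache-backed write-through; the entity and its fields below are hypothetical:

    import com.googlecode.objectify.annotation.Cache;
    import com.googlecode.objectify.annotation.Entity;
    import com.googlecode.objectify.annotation.Id;

    // @Cache tells Objectify to keep this entity in Memcache in front of
    // the datastore and to handle the write-through bookkeeping for you.
    @Entity
    @Cache
    public class Account {
        @Id Long id;
        String email;
    }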

Multiple Memcache Servers Must Be Managed

When you have multiple Memcache servers, it's important to run a proxy, either as a local process on the web server or on a separate server entirely[2]. The Memcache proxy client holds connections to all of the different Memcache tiers and data centers, keeps them in sync, and replicates the deletes so that the servers stay coherent across locations.

I would recommend running the Memcache proxy on its own server when your servers span the west and east coasts.

Be Careful What You Cache

Make sure you aren't caching personal data or content that can be accessed by the wrong parties.

Multiple Connections Can Kill

Say you have 80-100 processes on each web server, with each data center running thousands of servers. You could easily grow to hundreds of thousands of connections to any one Memcache server. To alleviate this, look into using UDP instead of TCP for your clients and servers, and dynamically adjust the timeout and the number of keys you send in a UDP multiget.

Memcache is Not Transactional

Memcache doesn't support transactions across keys. For per-key atomicity, refer to the "Atomic Operations" section above.

References:

1. ^ Google Cloud Platform documentation (11 June 2015). "Memcache Quotas and Limits".
2. ^ Moore, Nick (22 May 2013). "The Benefits of Memcache Proxying".
