When you have a cluster of web application servers, you often need to coordinate them so that the same expensive work isn't done by several servers at the same time when a condition triggers it.
Most people use memcached as a simple key/value store, but it can also be used as a simple distributed lock manager: alongside the put(key, value) operation, it offers an add(key, value) operation that succeeds only if the cache wasn't already holding a value for the key.
Locking then becomes easy:
if (cache.add("lock:xyz", "1", 60)) { // 60-second lease: memcached expirations are expressed in seconds
    try {
        doSomeExpensiveStuff();
    } finally {
        cache.delete("lock:xyz");
    }
} else {
    // someone else is doing the expensive stuff
}
The code above tries to get the lock by adding a dumb value for our lock's identifier, with an expiration of one minute. This is the lock lease time, and it should be longer than the estimated maximum duration of the lengthy operation. It keeps the lock from being held forever if things go really wrong, such as your server crashing.
Once the operation is completed, we delete the lock, et voilà.
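The snippet above assumes a small cache helper; as a rough sketch, here is what it could look like on top of the spymemcached client (the LockCache class name and connection details are made up for illustration):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.ExecutionException;

import net.spy.memcached.MemcachedClient;

// Minimal wrapper exposing the add/get/delete calls used in this post.
public class LockCache {

    private final MemcachedClient client;

    public LockCache(String host, int port) throws IOException {
        this.client = new MemcachedClient(new InetSocketAddress(host, port));
    }

    // add() only stores the value if the key is not already present,
    // which is what makes it usable as a lock primitive.
    public boolean add(String key, String value, int expirationSeconds) {
        try {
            return client.add(key, expirationSeconds, value).get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } catch (ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public Object get(String key) {
        return client.get(key);
    }

    public void delete(String key) {
        client.delete(key);
    }
}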
If you want the system to be rock-solid, you should check that you still own the lock before deleting it (in case the lease time expired and another server grabbed it in the meantime), but in most cases this simple approach works nicely.
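One common way to do that check, sketched here with the same hypothetical helper, is to store a unique token as the lock value and only delete the key if it still holds your token. Note there is still a tiny window between the read and the delete; closing it completely requires CAS-style support or a different tool.

import java.util.UUID;

// Store a unique token so we can tell whether the lock we are about
// to delete is still the one we acquired.
String token = UUID.randomUUID().toString();
if (cache.add("lock:xyz", token, 60)) {
    try {
        doSomeExpensiveStuff();
    } finally {
        // Only release the lock if it is still ours; if the lease expired
        // and another server re-acquired it, leave their lock alone.
        if (token.equals(cache.get("lock:xyz"))) {
            cache.delete("lock:xyz");
        }
    }
} else {
    // someone else is doing the expensive stuff
}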
And if the expensive operation resets in the database the condition that triggered it, the lock should only be released once the transaction has been committed. Otherwise, in the interval between the end of the expensive operation and the actual commit, other servers could still see the condition and restart the same work. Spring's transaction synchronization helps with that.
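With Spring, that can look roughly like the sketch below (assuming Spring 5.3 or later, where TransactionSynchronization has default method implementations, and reusing the hypothetical LockCache helper): register a synchronization inside the transaction so the delete only runs after the commit.

import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class ExpensiveWork {

    private final LockCache cache; // the hypothetical helper sketched earlier

    public ExpensiveWork(LockCache cache) {
        this.cache = cache;
    }

    // Must be called from within the active transaction that resets
    // the triggering condition.
    public void releaseLockAfterCommit(final String lockKey) {
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
            @Override
            public void afterCommit() {
                // Only runs once the commit is visible to other servers,
                // so they won't re-trigger the same work in the meantime.
                cache.delete(lockKey);
            }
        });
    }
}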
Update: as Leo points out, the above works as long as memcached doesn't decide to evict your lock to make room. In practice, the small size and short lifetime of a lock entry should ensure this almost always works. If locking is critical though, either use a dedicated memcached server for locks or use another solution such as ZooKeeper.
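If you go the ZooKeeper route, a recipe library such as Apache Curator provides a ready-made distributed lock; here is a minimal sketch, with placeholder connection string and lock path:

import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZooKeeperLockExample {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/locks/xyz");
        // Give up quickly if another server already holds the lock.
        if (lock.acquire(1, TimeUnit.SECONDS)) {
            try {
                doSomeExpensiveStuff();
            } finally {
                lock.release();
            }
        }
        client.close();
    }

    private static void doSomeExpensiveStuff() {
        // placeholder for the lengthy operation
    }
}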