Imagine a system that has to process a large volume of information very fast. On top of that, when two pieces of work target the same object (a person, a device, whatever) with the same action type, they must run sequentially: one action has to block the other. With these requirements I have implemented the lock like this:
– Create a table with columns (ObjectId, ActionId), where both columns together form the primary key. The pair has to be inserted into this table before it is ever processed. Once the pair (ObjectId, ActionId) has been inserted, it can be cached to avoid checking its existence again. The pair could also be inserted by a trigger, application code, etc.; that part is up to you. The important thing is that the pair must exist before the lock is used.
– With that, we have a table holding each ObjectId together with each action type that can be executed on that object.
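As a minimal sketch, the lock table could look like this (the column types and constraint name are assumptions, not from the original design; adapt them to your own keys):

```sql
-- Hypothetical schema: the real ObjectId/ActionId types depend on your system.
CREATE TABLE [PessimisticLocks]
(
    ObjectId INT NOT NULL,
    ActionId INT NOT NULL,
    CONSTRAINT PK_PessimisticLocks PRIMARY KEY (ObjectId, ActionId)
);

-- The pair must exist before the lock is used, e.g. seeded up front:
INSERT INTO [PessimisticLocks] (ObjectId, ActionId)
VALUES (@ObjectId, @ActionId);
```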
In order to block we use the following select:
```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO
SELECT *
FROM [PessimisticLocks] WITH (XLOCK, HOLDLOCK) -- the hints make SQL Server lock the row exclusively, even for reading
WHERE ObjectId = @p1
  AND ActionId = @p2 -- the primary key guarantees we always lock a unique row
```
When the transaction begins we run that select first, so the row (ObjectId, ActionId) is locked exclusively for that transaction. If another transaction with the same (ObjectId, ActionId) pair is executed, it blocks on the select and waits patiently for the first transaction to commit. If a different pair comes in, it runs without being blocked.
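The flow above can be sketched as two competing sessions (the surrounding transaction body is an assumption for illustration; only the locking select comes from the original pattern):

```sql
-- Session 1:
BEGIN TRANSACTION;
    SELECT * FROM [PessimisticLocks] WITH (XLOCK, HOLDLOCK)
    WHERE ObjectId = @p1 AND ActionId = @p2; -- acquires the exclusive row lock
    -- ... perform the action for this (ObjectId, ActionId) pair ...
COMMIT; -- releases the lock

-- Session 2 with the same (@p1, @p2): its select blocks until session 1 commits.
-- Session 2 with a different pair: locks a different row and runs immediately.
```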
Some people find the solution quite sloppy, but I like it. It performs very well, uses READ COMMITTED as the isolation level (which means better performance than SERIALIZABLE), works in a cluster, and is easy to implement.
What do you think?