There is a little lie about the serializable transaction isolation level. In many places you can read that the SERIALIZABLE level gives you the illusion of executing transactions one after another, one at a time. Nothing could be further from the truth.
In that link we can read: "SERIALIZABLE is the most restrictive isolation level, because it locks entire ranges of keys and holds the locks until the transaction is complete. It encompasses REPEATABLE READ and adds the restriction that other transactions cannot insert new rows into ranges that have been read by the transaction until the transaction is complete."
As you can see, it is all about locking, but that locking does not mean a query has to wait until the locking transaction commits. That "locking" only means the data that was read is protected.
Let's walk through an example on a little test database with a table People(id, name):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT * FROM [TestDB].[dbo].[People]           -- takes shared (range) locks
EXEC sp_lock                                    -- inspect the locks held so far
INSERT INTO [TestDB].[dbo].[People] (name) VALUES ('John')  -- value is illustrative
EXEC sp_lock
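For completeness, a minimal People table for this test could look like the following (the exact schema is an assumption; the post only mentions the columns id and name):

```sql
CREATE TABLE [TestDB].[dbo].[People] (
    id   INT IDENTITY(1,1) PRIMARY KEY,  -- assumed surrogate key
    name NVARCHAR(100) NOT NULL
);
```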
If we execute that script in two separate windows, from the SET TRANSACTION statement down to the first EXEC sp_lock, we see the following:
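The sp_lock output at this point looks roughly like this (spid, object id and resource values are illustrative, not actual output; under SERIALIZABLE the shared locks show up as key-range RangeS-S locks):

```
spid  ObjId      IndId  Type  Resource        Mode      Status
----  ---------  -----  ----  --------------  --------  ------
55    133575514  1      KEY   (98ec012aa510)  RangeS-S  GRANT
58    133575514  1      KEY   (98ec012aa510)  RangeS-S  GRANT
```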
We have two shared locks on the same data; I have a bad feeling about this :). Let's execute the INSERT of the first window and see the locks now…
Upssss… it is trying to get the exclusive lock. Notice the CNVT status; looking into the documentation, it means:
The Status column shows if the lock has been obtained (GRANT), is blocking on another process (WAIT), or is being converted to another lock (CNVT). A lock being converted to another lock is held in one mode, but is waiting to acquire a stronger lock mode (for example, update to exclusive). When diagnosing blocking issues, a CNVT can be considered similar to WAIT.
So it is trying to acquire an exclusive lock, but it cannot because the second transaction holds the shared lock. Now, if we execute the second window's INSERT, the first one receives the following message:
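As a side note, sp_lock is deprecated in recent SQL Server versions; the same picture can be obtained from the sys.dm_tran_locks DMV, where the CNVT status appears as CONVERT:

```sql
SELECT request_session_id, resource_type, resource_description,
       request_mode, request_status          -- GRANT, WAIT or CONVERT
FROM   sys.dm_tran_locks
WHERE  resource_database_id = DB_ID('TestDB');
```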
Msg 1205, Level 13, State 48, Line 1
Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Ugly, isn't it? SQL Server has to kill one transaction in order to protect the data selected by the other one. But that is not giving the illusion of serial execution.
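Since the error message itself says "Rerun the transaction", code running under SERIALIZABLE should be prepared to retry on error 1205. A minimal retry sketch (requires SQL Server 2012+ for THROW; the transaction body is a placeholder):

```sql
DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        BEGIN TRANSACTION;
        -- ... the SELECT and INSERT from the example go here ...
        COMMIT TRANSACTION;
        BREAK;  -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries -= 1;  -- deadlock victim: try again
        ELSE
            THROW;              -- anything else, or out of retries: re-raise
    END CATCH
END
```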
So, if you need those queries to execute as if they ran one after the other, you have to take a real lock in the first SELECT using the hint WITH (XLOCK, ROWLOCK). That prevents the second query from reading the rows until the first transaction finishes.
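With the hint, the first part of each window's script becomes something like this; the second window's SELECT then blocks until the first transaction commits, which is the real serial behaviour:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT * FROM [TestDB].[dbo].[People] WITH (XLOCK, ROWLOCK)
-- ... the INSERT and the COMMIT follow as before ...
```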
Another option to mitigate this problem is to reduce the amount of information "locked" by the first SELECT.
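For example, instead of reading the whole table, filter on an indexed column so that the key-range locks cover only the rows you actually need (the WHERE clause and the index are illustrative assumptions):

```sql
-- With an index on name, only the matching key range is locked,
-- so inserts into other ranges are not affected.
SELECT id, name
FROM   [TestDB].[dbo].[People]
WHERE  name = 'John';
```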