There is a little lie about the serializable transaction isolation level. In many places you can read that the serializable level gives you the illusion of executing transactions one after another, one at a time. Nothing could be further from the truth.
http://msdn.microsoft.com/en-en/library/tcbchxcb(v=vs.80).aspx
In that link we can read: SERIALIZABLE is the most restrictive isolation level, because it locks entire ranges of keys and holds the locks until the transaction is complete. It encompasses REPEATABLE READ and adds the restriction that other transactions cannot insert new rows into ranges that have been read by the transaction until the transaction is complete.
As you can see, it is all about locking, but that locking doesn’t mean a query has to wait until the locking transaction commits. That “locking” only means the data read is protected.
Let’s look at an example with a little test database containing a table People (Id, Name):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
SELECT *
FROM [TestDB].[dbo].[People]
GO
EXEC sp_lock
GO
INSERT INTO [TestDB].[dbo].[People]
([Name])
VALUES
('Pablo')
GO
EXEC sp_lock
GO
COMMIT
GO
If we execute that script in two separate windows, from the SET TRANSACTION statement down to the first EXEC sp_lock, we see the following:
We have two shared locks on the same data; I have a bad feeling about this :). Let’s execute the insert in the first window and look at the locks now…
Oops… it is trying to get the exclusive lock; notice that CNVT status. Looking into the documentation, it means:
The Status column shows if the lock has been obtained (GRANT), is blocking on another process (WAIT), or is being converted to another lock (CNVT). A lock being converted to another lock is held in one mode, but is waiting to acquire a stronger lock mode (for example, update to exclusive). When diagnosing blocking issues, a CNVT can be considered similar to WAIT.
So it is trying to acquire an exclusive lock but cannot, because the second transaction holds a shared lock on the same data. Now, if we execute the second insert, the first window shows the following message:
Msg 1205, Level 13, State 48, Line 1
Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Ugly, isn’t it? SQL Server has to kill one transaction in order to protect the data selected by the other one. But that is not giving the illusion of serial execution.
So, if you need those queries to execute as if they ran one after another, you have to take a real exclusive lock in the first SELECT using the hint WITH (XLOCK, ROWLOCK). That prevents the second transaction from reading the rows until the first one commits.
Another option to mitigate this problem is to reduce the amount of information “locked” by the first SELECT.
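A minimal sketch of the hinted version, assuming the same TestDB/People table as in the script above:

```sql
-- Window 1: take real exclusive row locks instead of shared range locks
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
SELECT *
FROM [TestDB].[dbo].[People] WITH (XLOCK, ROWLOCK)
GO
-- A second window running the same SELECT now blocks here instead of
-- acquiring its own shared lock, so the later INSERTs cannot deadlock:
-- the two windows really do run one after the other.
INSERT INTO [TestDB].[dbo].[People]
([Name])
VALUES
('Pablo')
GO
COMMIT
GO
```

Note that XLOCK serializes the readers as well, so concurrency drops; that is the price of true serial execution.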
Regards.
It has to do with how the range lock works (it always includes one more key in the range). For instance, if you tried this you would have no deadlock:
Window 1:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT *
FROM People where Id = 1
SET IDENTITY_INSERT People ON
INSERT INTO People
(Id,[Name])
VALUES
(1, 'Pablo')
SET IDENTITY_INSERT People OFF
Window 2:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT *
FROM Person where Id = 3
SET IDENTITY_INSERT person ON
INSERT INTO Person
(Id,[Name])
VALUES
(3, 'Pablo')
SET IDENTITY_INSERT person OFF
If the second SELECT had used Id = 2, the deadlock would have occurred 🙂
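One way to inspect the key-range locks behind this behaviour (a sketch, assuming SQL Server 2005 or later, where the sys.dm_tran_locks view supersedes sp_lock):

```sql
-- Run in the same session, after the SELECT and before COMMIT
SELECT resource_type,   -- KEY for index-key locks
       request_mode,    -- RangeS-S marks a shared key-range lock
       request_status   -- GRANT / WAIT / CONVERT
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
```

A RangeS-S lock protects the gap up to the next index key, which is why a SELECT on a neighbouring Id can collide while a more distant one does not.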
In previous comment window 2 should have been table people not person
[…] Sometimes we have contention problems in the database because of the transaction isolation level when we…. To avoid this, if the business allows it, it is useful to “downgrade” the isolation level of .NET transactions. By default, in the .NET Framework, it is set to Serializable, but usually we want to use ReadCommitted or something lower instead. […]