
Using NOLOCK

Status
Not open for further replies.

sds814 (Programmer), Feb 18, 2008
I have seen the use of NOLOCK in some of my co-workers' stored procedures and SELECT queries. Just wondering when it is necessary.

Thanks.
 
I rarely use them unless I'm debugging something. The reason for this is that you can get "dirty reads." This means you can return data that is, in a sense, incorrect. If a developer wraps some operations in a transaction, and somewhere in the code the transaction has to be rolled back because of, say, an error, using NOLOCK can allow you to read data that ends up being rolled back.

See here, under READUNCOMMITTED for more details:
 
SQL Server uses locks to prevent multiple users from making conflicting changes to the same set of data. This consequently comes with a performance cost. However, sometimes this locking is not necessary, e.g. on lookup tables. Using the NOLOCK hint removes the unnecessary overhead.
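As a syntax illustration (the table and column names here are invented, not from any real system), the hint goes after the table name in the FROM clause:

```sql
-- Hypothetical lookup table; names are illustrative only.
-- WITH (NOLOCK) reads the table without requesting shared locks,
-- so the query won't block on (or be blocked by) writers.
SELECT CountryCode, CountryName
FROM dbo.Lookup_Country WITH (NOLOCK)
WHERE CountryCode = 'US';
```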

 
Thanks RiverGuy.

If you could please explain what higher concurrency means in this sentence: " Allowing dirty reads can cause higher concurrency, but at the cost of reading data modifications that then are rolled back by other transactions."

Does it mean that more data is available to the user, as NOLOCK will allow the user to read data that could even be rolled back?
 
It doesn't mean that more data is available. It could be that more, less, or different data is returned. If you use a lot of NOLOCKs, you may avoid much of the time transactions spend waiting on locks. So the overall system could be perceived as performing faster.
 
Actually, I shouldn't say "the overall system could be perceived as performing faster." It could be that the system performs fine in most regards. But let's say you've added some reports which typically take a long time to execute. If that time is typically spent waiting on other transactions, they could perform better with NOLOCK.
 
RiverGuy,

Just to jump in here and ask a quick question. I've a few reporting queries within one of my applications which always seem to suffer a performance hit when run on my production servers, despite the dev and production set-ups being very similar.

The one difference between the two servers is that the production server has data being written into these tables on a very regular basis, whereas on development no data is being inserted. Do you think the performance issues are perhaps due to the queries waiting on locked resources? Should I perhaps consider using NOLOCK on the SELECT queries which build the reports?

Sorry for the open ended question, I just wanted to see if I understood what you were saying.

Cheers,

Heston
 
It would be something to consider if you're not concerned with dirty reads. Some people will recommend it more than others. I've heard of vendors suggesting that NOLOCK always be used when querying their systems. However, I would never be comfortable with such a general recommendation unless I was positive that the vendor's technical rep was thoroughly familiar with the system architecture. One way to test this would be to execute the query with and without the NOLOCK hint and compare.
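For example, a rough way to make that comparison (the table name below is a placeholder for one of your report's source tables) is to time the same query both ways:

```sql
-- Elapsed and CPU time for each statement appear in the Messages tab.
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.ReportSource;                -- may wait on writers' locks
SELECT COUNT(*) FROM dbo.ReportSource WITH (NOLOCK);  -- reads without taking shared locks

SET STATISTICS TIME OFF;
```

Run it while the table is under its normal write load; on an idle table the two will look much the same.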

But there are also other things to consider first when looking at performance. Your production system might just be slow because it has lots of activity. You can try tuning the queries if there is room for improvement, and look at tuning the indexes if that would help the reads without hurting the performance of the writes. Then, of course, there is always the option of modeling and loading the data into a data-warehouse-type structure, where the resources are available and it makes sense from a business perspective.
 
Hey RiverGuy, thank you.

I'm fairly confident that these particular queries are well optimized and the data is indexed well; I spent a fair amount of time on these very boards a couple of months ago working on that, and they do work well in development.

Let me try to understand the consequence of a dirty read. What does that mean, exactly? That I might query half-committed data which is only partway through being INSERTed into the table? If that's the case then it is of no real consequence to me; data is never modified or removed from the table either, so there is no risk there.

I'll do some testing and see what happens. After reading around, it seems that NOLOCK has been deprecated in SQL Server 2005; is that correct? I'm looking to use READ UNCOMMITTED isolation on the transaction instead, which I understand has the same effect. Is that right?

Many thanks,

Heston
 
It's still available in SQL 05. This is what the article says:
For UPDATE or DELETE statements: This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.

The problem isn't just about the data being half committed (depending on what you mean by half committed). It has to do with the fact that many systems wrap multiple statements into single transactions. If a transaction has five steps, and an error is reached at the last step, it is typical to ROLLBACK or undo all of the other steps.

If you want to try a simple example, create a table on a test database like this:
Code:
CREATE TABLE Test1
(Col1 INT)

Then, in a query window, execute the following query:
Code:
BEGIN TRANSACTION
	INSERT INTO Test1 SELECT 1
	WAITFOR DELAY '00:00:10'
ROLLBACK TRANSACTION

SELECT * FROM Test1

That inserts a record, waits ten seconds, then does a rollback on the transaction. Finally it selects from the table, which will return zero rows.

However, if you open another query window while that is running and do a SELECT * FROM Test1 WITH (NOLOCK) you will see that record that is inserted but ends up being rolled back.

 
Excellent example!

-George

"The great things about standards is that there are so many to choose from." - Fortune Cookie Wisdom
 
Ok, thank you RiverGuy, that does make sense. When I said "half committed," that is essentially what I meant: a transaction has several steps and has only processed a few of them.

I don't think that these scenarios are really of any consequence to me in my use case so this might be a nice way to go.

I'll do some testing and see what sort of results it yields for me.

Many thanks RiverGuy, I appreciate it.

Heston
 
A well-designed application shouldn't need to use NOLOCK except for intentionally-designed concurrency tables such as queues.

Using READ UNCOMMITTED transaction isolation (which is what NOLOCK does) can make up for poorly designed applications that are experiencing a lot of blocking. But I think that while using it as a matter of course can work for a long, long time, some day it may hurt you badly. What is the consequence for reporting or acting on bad data? It depends, doesn't it?
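To make that equivalence concrete: setting the isolation level on a connection has the same effect as putting NOLOCK on every table in every query on that connection. A sketch (Test1 here is just a stand-in table name):

```sql
-- Session-wide dirty reads, instead of a per-table hint.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT * FROM Test1;  -- now behaves like SELECT * FROM Test1 WITH (NOLOCK)

-- Put the session back on the default level afterwards.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```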

I would avoid NOLOCK as much as possible in the following situations:
- If it's used in medical care and a single wrong value could harm
- If it's used for legal purposes or in any situation that could involve litigation where a single wrong value could create unacceptable risk
- If a single wrong value could cause great monetary loss

P.S. As an example of how to design applications well, one best practice is to do inserts, updates, and deletes in the same table order in every SP, so that SPs acquire the same types of locks in the same order, thus blocking less often. Fewer blocks and shorter locks mean no need for patching the thing up with NOLOCK later.
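A sketch of that practice (the procedure and table names are invented for illustration): if every procedure that touches these two tables does so in the same order, two concurrent procedures queue behind each other's locks instead of deadlocking.

```sql
-- Convention: every SP in the application touches Orders first,
-- OrderDetails second, so locks are always acquired in that order.
CREATE PROCEDURE dbo.usp_CancelOrder
    @OrderID INT
AS
BEGIN
    BEGIN TRANSACTION;
        UPDATE dbo.Orders
        SET Status = 'Cancelled'
        WHERE OrderID = @OrderID;      -- lock on Orders acquired first

        DELETE FROM dbo.OrderDetails
        WHERE OrderID = @OrderID;      -- then on OrderDetails
    COMMIT TRANSACTION;
END;
```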
 
ESquared,

Thanks for that input, you certainly make a great deal of sense. I am lucky that in this scenario the data being reported is of no real consequence, and wrong data will not cause any substantial harm to anyone, so we're fairly safe.

I see your point about ordering the table-manipulation queries within the SPs. At the moment I have only a single process which writes data to the given table, so I can be sure the locking conflicts are not caused by multiple different queries clashing with one another; it's a concurrency issue. The same query can be executed from multiple clients in the same time frame, and because the inserts take a few moments, they lock up against one another.

I'm working on making my insert queries more efficient to minimize this risk, and will only be using NOLOCK on my reads, just so the user experience isn't affected when the tables are locked and being written to.

Does that make sense? These tables are large and written to on a regular basis (many times a minute), and sometimes you can sit and wait ten minutes for a report to generate; click it at another time and it'll take four or five seconds, presumably because the SELECT is waiting for locks on the resource to be released.

Eventually we'll get around to redesigning the structure of the DB but at the moment we're just looking to patch it up and keep us going for a few months.

Heston
 