
Capturing Large Amts Data - Killing Performance !!!

Mar 27, 2001
Need some direction. I am capturing large amounts of user info from my web site(s): page hits, time on page, URLs and the like. After a while, maybe half a day, the capture table becomes bloated and system performance takes a dive. First, what causes this? Is it the time spent maintaining indexes, or something else? Second, how do I alleviate the situation? Do I need to archive the data off to another table, or is there a better method? Hopefully the suggested method won't carry as big a performance hit. Any and all info is greatly appreciated.

JKusck
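(As an aside on the archive-to-another-table idea in the question, a minimal T-SQL sketch might look like the following. The table names dbo.PageHits and dbo.PageHitsArchive and the HitTime column are hypothetical, and the archive table is assumed to have the same structure as the live one.)

-- Compute the cutoff once so both statements use the same value.
DECLARE @cutoff datetime
SET @cutoff = DATEADD(hour, -12, GETDATE())

BEGIN TRAN

-- Copy the older capture rows into the archive table ...
INSERT INTO dbo.PageHitsArchive
SELECT * FROM dbo.PageHits
WHERE HitTime < @cutoff

-- ... then remove them from the live table so it stays small.
DELETE FROM dbo.PageHits
WHERE HitTime < @cutoff

COMMIT TRAN

(Keeping the live table trimmed this way trades the size problem for a burst of write activity while the move runs, so it is usually scheduled for a quiet period.)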
 
We need details. How many records per day are we talking about? What exactly do you mean by bloated? What kind of machine are you running on: how much memory, what CPU speed, how many CPUs, etc.?

 
We are dealing with 250,000 - 700,000 records. This is on a dual-processor P3-850 with 2 GB of memory. What I mean by bloated is that after several hundred thousand records we seem to take a performance hit in response times for SQL as well as for other system functions, or so it seems. We have disabled the capturing of this data until we can resolve this issue, or determine that a large table is not actually the problem.
 
If I get you correctly, there are a large number of inserts happening one right after another in a near-constant fashion.

If this is true, check your server's Recovery Interval. I've found that, when doing a huge number of inserts without breaks, the system periodically pauses to flush the buffers at a checkpoint, and this brings nearly everything to a standstill.

If raising this solves the problem, then step back and consider that raising this value also means the server will take longer to recover after a system failure.

Robert Bradley
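(For reference, the recovery interval mentioned above is a server-wide setting that can be checked and changed through sp_configure. It is an advanced option, so 'show advanced options' must be enabled first; the value of 5 minutes below is purely an illustration, not a recommendation.)

-- Make advanced options such as 'recovery interval (min)' visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Check the current value (the default of 0 lets SQL Server decide).
EXEC sp_configure 'recovery interval (min)'

-- Raise the target recovery time to 5 minutes (example value only).
EXEC sp_configure 'recovery interval (min)', 5
RECONFIGURE WITH OVERRIDE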
 
Wonder if I should try putting this table in a separate file/filegroup and backing it up / appending to the backup every hour or two. Then, before our nightly backups and warehousing kick off, I can restore the table from that file/filegroup backup. I'm not up to speed on this process, so I may be on a dead-end track. If I'm on to something, I need to look into it further through a whitepaper or similar. Thanks for the great info thus far.
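(For what it's worth, the file/filegroup idea would look roughly like the sketch below. The database name WebStats, the filegroup name CaptureFG, and the file paths are hypothetical, and the capture table itself would have to be created ON that filegroup. Restoring a single filegroup later also requires the transaction log backups needed to bring it back in sync with the rest of the database, so that part deserves some reading before relying on it.)

-- Give the capture table its own filegroup and data file.
ALTER DATABASE WebStats ADD FILEGROUP CaptureFG

ALTER DATABASE WebStats
ADD FILE (NAME = CaptureFG_Data,
          FILENAME = 'E:\MSSQL\Data\CaptureFG_Data.ndf',
          SIZE = 500MB)
TO FILEGROUP CaptureFG

-- Back up just that filegroup every hour or two.
BACKUP DATABASE WebStats
FILEGROUP = 'CaptureFG'
TO DISK = 'E:\MSSQL\Backup\WebStats_CaptureFG.bak'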

J. Kusch
 
Check your database size and percent full. If it is nearly full, and you expect significant additional activity, you might wish to manually expand it by 40% or so now. The process of automatically growing a file can cause a performance hiccup, though it's just a hiccup; it's unlikely the delay would last for hours.
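(Pre-growing the data file is a one-line operation; the database name, logical file name, and target size below are hypothetical, and sp_helpfile will show the actual logical names.)

-- Expand the data file ahead of time so autogrow doesn't fire mid-load.
-- NAME is the logical file name; SIZE is the new total size, not an increment.
ALTER DATABASE WebStats
MODIFY FILE (NAME = WebStats_Data, SIZE = 4000MB)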

Also, if you have indexes that are not sequential (explanation in a moment), try padding the index and increasing the fill factor to reduce page splits. By "sequential" I mean the index entries are somewhat chronological in nature, for example an index on an identity column, where new records are essentially packed onto the end. In that scenario, having free space at each leaf level is not as important as when new records' index entries are spread across the entire index (for example, an index on LastName).

Robert Bradley
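(A sketch of the padding/fill-factor suggestion, using a hypothetical table and index name. DBCC DBREINDEX rebuilds an existing index with a new fill factor; PAD_INDEX applies the same proportion of free space to the intermediate index pages when the index is built.)

-- Rebuild an existing index, leaving roughly 20% free space per leaf page.
DBCC DBREINDEX ('dbo.PageHits', IX_PageHits_LastName, 80)

-- Or specify the padding and fill factor when the index is created.
CREATE INDEX IX_PageHits_LastName
ON dbo.PageHits (LastName)
WITH PAD_INDEX, FILLFACTOR = 80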
 
So, is putting this table in a file/filegroup and backing up / appending to it every hour or two not a feasible option?
I have delved deep into the advice given above (thanks, it was educational), but all looks well on those points. We are going to re-enable the script and start this data capture process up again today, so I will get to see firsthand what these "said" performance issues look like. I would still like to know about files/filegroups, though, for my own FYI. Once again, any insight is better than no insight. Thanks to all responding back.
 
