m0nkey
MIS
- Mar 20, 2003
I have some large tables that I use as log/history tables. They contain anywhere from 10 million to 50 million records and have no unique key, since one isn't necessary, but they do have a clustered index on the account number. That column is not unique on its own, though it could be if combined with the date_entered column. I insert 1 million+ records into this table daily and am always deleting records that have been in there longer than 90 days. Select queries that bounce off this table with a "where acct_no not in (select acct_no from log_repository)" run well, but inserting into this log/history table takes forever, about 1 1/2 hours.

I have dropped the other indexes but kept the clustered index, because when I drop both, recreating the clustered index takes even longer (twice as long). The table has only 5 columns: acct_no, zipcode, insert_date, batch_no, campaign_no. There is a clustered index on acct_no and an index on insert_date and campaign_no. The acct_no is a char(10) field and the zipcode is a char(5).
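For reference, the table and indexes look roughly like this (only the acct_no and zipcode types are exact; the types on insert_date, batch_no, and campaign_no are my best guesses, and the index names are made up):

    CREATE TABLE log_repository (
        acct_no     char(10) NOT NULL,  -- account number; not unique on its own
        zipcode     char(5)  NOT NULL,
        insert_date datetime NOT NULL,  -- assumed datetime
        batch_no    int      NOT NULL,  -- type assumed
        campaign_no int      NOT NULL   -- type assumed
    )

    CREATE CLUSTERED INDEX ix_log_acct ON log_repository (acct_no)
    CREATE NONCLUSTERED INDEX ix_log_date_camp ON log_repository (insert_date, campaign_no)

And the daily select is along these lines (the driving table "accounts" is just a stand-in name):

    SELECT a.acct_no
    FROM accounts a
    WHERE a.acct_no NOT IN (SELECT acct_no FROM log_repository)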
Like I said, this table gets a daily select and insert. It is not open to the public; I'm just developing against it. It is a log/history table. I insert and delete in batches, using the zipcode as start and end variables, starting at 00000 and incrementing by 1000 each batch, as in the sketch below.
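Roughly, the batch loop looks like this (variable names simplified; "daily_staging" is a made-up name for wherever the day's rows come from):

    DECLARE @lo int
    DECLARE @start char(5), @end char(5)

    SET @lo = 0
    WHILE @lo <= 99000
    BEGIN
        -- build the zipcode window for this batch, e.g. '00000'..'00999'
        SET @start = RIGHT('00000' + CONVERT(varchar(5), @lo), 5)
        SET @end   = RIGHT('00000' + CONVERT(varchar(5), @lo + 999), 5)

        -- load today's rows for this zipcode range
        INSERT INTO log_repository (acct_no, zipcode, insert_date, batch_no, campaign_no)
        SELECT acct_no, zipcode, insert_date, batch_no, campaign_no
        FROM daily_staging
        WHERE zipcode BETWEEN @start AND @end

        -- purge anything older than 90 days in the same range
        DELETE FROM log_repository
        WHERE zipcode BETWEEN @start AND @end
          AND insert_date < DATEADD(day, -90, GETDATE())

        SET @lo = @lo + 1000
    END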
Does anyone have any suggestions for optimizing this table's update/insert/delete? Recreating the clustered index on acct_no is a chore, but from my reading it seems like the most efficient way.
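In case it helps, the drop/recreate pattern I'm describing is along these lines (index names carried over from the sketch above; syntax from memory):

    -- before the daily load: drop the nonclustered index
    DROP INDEX log_repository.ix_log_date_camp

    -- ... run the batched insert/delete loop ...

    -- after the load: put the nonclustered index back
    CREATE NONCLUSTERED INDEX ix_log_date_camp ON log_repository (insert_date, campaign_no)

    -- rebuilding the clustered index in place (this is the slow part)
    CREATE CLUSTERED INDEX ix_log_acct ON log_repository (acct_no) WITH DROP_EXISTING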
Thanks in advance,
Mark