The answer is..... well.... complicated.
Unfortunately, the only way you're going to know for sure is to try it various ways.
Indexes will usually slow down inserts; that much is true. The question is by how much, and whether the import is still fast enough if you leave the indexes enabled.
Unfortunately, this (load time) is not the only consideration. By disabling and rebuilding the indexes, you are ensuring that the indexes are "as good as it gets". Indexes take up space in the database. When inserting data (with an index), each insert will need to update the index data. When this happens, there is an opportunity for index fragmentation.
Index fragmentation occurs when there is not enough space on the data page (which is 8 KB) to hold the data you are trying to add. When this happens, SQL Server splits the single page into 2 pages, with half the data on one page and the other half on the other. This is called a page split.
Think about a phone book. Each page is full of people and phone numbers. When a new person gets a phone, they need to be added to the appropriate page in the book. Since each page is full, you could modify each page after the new entry so that each page remains full, or you can insert a new page at the appropriate spot in the book with the information you need.
Now imagine a phone book with each page filled to 90% (10% free space). When a new entry is made, there is already room for it. It's still possible to get page splits because there is only 10% free space on each page, but it is less likely to occur.
SQL Server implements this "concept" with something called "Fill Factor". Each index can have its own fill factor, and the value you choose depends on your data. You specify the fill factor as a percentage, from 0 to 100. Please note that 0 and 100 do exactly the same thing: they fill the data pages with as much data as possible.
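For example, you can set a fill factor when you create an index. The table and index names below are made up for illustration:

Code:
-- Hypothetical names; FILLFACTOR = 90 leaves 10% free space on each leaf page
CREATE NONCLUSTERED INDEX IX_Customer_LastName
ON dbo.Customer (LastName)
WITH (FILLFACTOR = 90);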
When you specify a fill factor less than 100 percent, there will be some amount of free space in the pages. To accommodate all of this free space, more pages will be necessary. More pages means larger database size. The other problem with fragmentation is query speed. If your data is fragmented, the query engine will need to examine more pages before it can return the results.
My suggestion is this:
Examine the fragmentation level of your indexes before you load data. You can see what your fragmentation level is for each index with the following code:
Code:
SELECT   ps.database_id, ps.object_id,
         ps.index_id, b.name,
         ps.avg_fragmentation_in_percent
FROM     sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS ps
         INNER JOIN sys.indexes AS b
           ON  ps.object_id = b.object_id
           AND ps.index_id = b.index_id
WHERE    ps.database_id = DB_ID()
ORDER BY ps.avg_fragmentation_in_percent DESC
You don't need to worry about running this query on a large or heavily used database. By default it runs in LIMITED mode, which only scans the upper-level pages of each index, so it's cheap.
Then take a look at your fill factors with the following code:
Code:
Select object_name(object_id) As Table_Name,
       name As Index_Name,
       fill_factor
From   sys.indexes
If the fill factor for your huge indexes is 0, then I encourage you to change it to 90 and then rebuild the index. Be aware that a fill factor of 90 will take roughly 10% additional storage, because each page now holds only 90% as much data.
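Changing the fill factor is done as part of a rebuild. The index and table names here are placeholders; substitute your own:

Code:
-- Placeholder names; rebuilds the index and stores 90 as its new fill factor
ALTER INDEX IX_YourBigIndex ON dbo.YourBigTable
REBUILD WITH (FILLFACTOR = 90);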
Then load your data. After loading, examine your fragmentation again. If it's high, then rebuild just the indexes with high fragmentation.
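To fix only the fragmented ones, a common rule of thumb (not a hard rule) is to reorganize indexes between roughly 5% and 30% fragmentation and rebuild anything above that. Again, the names are placeholders:

Code:
-- Placeholder names; the thresholds are rules of thumb, not hard limits
ALTER INDEX IX_YourBigIndex ON dbo.YourBigTable REORGANIZE;  -- ~5-30% fragmentation
ALTER INDEX IX_YourBigIndex ON dbo.YourBigTable REBUILD;     -- above ~30% fragmentation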
There are proven index maintenance scripts that are used by thousands of people all over the world; I would use one of those rather than reinventing the wheel.
-George
Microsoft SQL Server MVP
My Blogs
SQLCop
twitter
"The great thing about standards is that there are so many to choose from." - Fortune Cookie Wisdom