Hi all,
Here's the situation. We house medical claims data and desperately need to speed up transactions because some of the tables are 50+ million rows. One suggestion was to revisit our indexing.
The data is loaded in monthly batches, and each batch has its own unique ID, which we've indexed. Do we need to re-create or rebuild the index after adding the next month's batch?
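For context, here's roughly what our setup looks like and the kind of rebuild I have in mind (the table and index names are made up, and the maintenance syntax below is SQL Server style; our platform may differ):

    -- Hypothetical claims table; every row in a monthly load shares one batch ID,
    -- and we keep a non-unique index on that column.
    CREATE INDEX ix_claims_batch_id
        ON claims (batch_id);

    -- After loading the next month's batch, is a full rebuild like this needed,
    -- or is the lighter-weight reorganize (or nothing at all) good enough?
    ALTER INDEX ix_claims_batch_id ON claims REBUILD;
    ALTER INDEX ix_claims_batch_id ON claims REORGANIZE;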
Also, is there any rule of thumb for the number of distinct values a column should have before it's worth indexing? Would it be advisable to add an index to something like a diagnosis code column (there are approximately 15,000 possible codes)?
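To make that second question concrete, this is the kind of index I'm considering (again, the table and column names are made up):

    -- diagnosis_code has roughly 15,000 distinct values across 50+ million rows.
    -- Is that selective enough to justify its own index?
    CREATE INDEX ix_claims_diagnosis_code
        ON claims (diagnosis_code);

A typical query would filter on a single code, something like:

    SELECT COUNT(*)
    FROM claims
    WHERE diagnosis_code = '250.00';  -- example code value only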
Any help would be great!
Thanks.