
database design with lots of data


mgriffith (MIS) · Jul 3, 2001
I have a question. I am currently in the midst of an application that uses only a few tables linked together. One of the tables will possibly end up with upwards of 1,000,000 rows (if my calculations are correct) once the data is transferred from the old database setup (which has only about 4,000 rows per table, but about 150 columns).

The new table has only about 8 columns (some ints, datetimes, and varchar(50)s), but I am wondering if there is going to be a big performance hit once the data gets in there. Any idea how bad the queries will be? The server is pretty quick (dual P3 Xeons and 1.5 GB of RAM), but it hosts quite a few other databases that are used throughout the day.
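To make the shape of the data concrete, here is a rough sketch of the kind of narrow table I mean (the table and column names here are made up for illustration; the real ones differ):

    -- Hypothetical sketch of the new 8-column table: ints, datetimes, varchar(50)s
    CREATE TABLE dbo.Readings (
        ReadingID    int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- clustered by default
        DeviceID     int          NOT NULL,  -- integer link back to a small lookup table
        OperatorID   int          NOT NULL,
        ReadingValue int          NULL,
        RecordedAt   datetime     NOT NULL,
        LoadedAt     datetime     NOT NULL DEFAULT GETDATE(),
        SourceFile   varchar(50)  NULL,
        Notes        varchar(50)  NULL
    )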

Should I be worried?

Mike Griffith
----------------------------
mgriffith@lauren.com
mdg12@po.cwru.edu
 
SQL should handle that number of records easily, but you will want to carefully examine the indexing structure. Make sure you are indexing the fields most often used in the WHERE clauses, but beware of the fine line between speeding up SELECT statements and slowing down data entry. For instance, I would avoid a multi-column clustered index on a table this large unless it is read-only data that only gets periodic updates from a DTS package.
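As a rough illustration (hypothetical table and column names, assuming the fields most often filtered on are an integer key and a datetime), a plain nonclustered index like this speeds up the selects without widening the clustered index:

    -- Nonclustered index on the columns used most often in WHERE clauses.
    -- Keeps SELECTs fast while adding only modest overhead to inserts.
    CREATE NONCLUSTERED INDEX IX_Readings_DeviceID_RecordedAt
        ON dbo.Readings (DeviceID, RecordedAt)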
 
I am running a cluster with 4 processors and 4 GB of RAM, and currently have 147+ GB of data. One table in my database has 430+ million rows. We have not yet seen any 'noticeable' performance hit compared to when we first created the database.

BUT, a lot depends on the type of queries you will be running.

-SQLBill
 
Thanks, that makes me feel better.

The queries (all using stored procedures) search the smaller tables first and are then linked to the bigger one through primary keys. Some of the joins are pretty complex, but they shouldn't be too bad.
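Roughly this pattern (again, made-up names just to illustrate): restrict the small table first, then join to the big table on its integer primary key:

    -- Sketch of one of the stored procedures: filter the small table,
    -- then join to the large table on an integer key.
    CREATE PROCEDURE dbo.GetReadingsForRegion
        @Region varchar(50)
    AS
        SELECT d.DeviceName, r.ReadingValue, r.RecordedAt
        FROM dbo.Devices AS d
        INNER JOIN dbo.Readings AS r
            ON r.DeviceID = d.DeviceID
        WHERE d.Region = @Region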

The data entry is done directly into the big table. I'm going to re-examine that and see if there's any way I can give it a performance boost.

Thanks for the advice and the comments.

Mike Griffith
----------------------------
mgriffith@lauren.com
mdg12@po.cwru.edu
 
You might also want to see if you can redesign to link through integer fields; joins on integer columns perform better.
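For example (made-up names), if the big table currently joins to a lookup table on a varchar(50) value, adding an int surrogate key and joining on that is cheaper to compare and to index:

    -- Before: ON r.SourceFile = f.FileName   (varchar-to-varchar join)
    -- After: add an int column, backfill it from the lookup table, then join on it.
    ALTER TABLE dbo.Readings ADD SourceFileID int NULL

    SELECT r.ReadingValue, r.RecordedAt
    FROM dbo.Readings AS r
    INNER JOIN dbo.SourceFiles AS f
        ON r.SourceFileID = f.SourceFileID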
 
I use half your hardware [dual PIII 500 MHz with 512 MB RAM] and it rocks with tables that have 1.5 million records.
One thing you should look at is whether you can run the database in non-logged mode (if that is possible considering your recovery needs). That should make it use fewer resources on the system.
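Assuming SQL Server 2000, the nearest thing to a non-logged mode is picking a minimal-logging recovery model, which cuts transaction log overhead at the cost of point-in-time recovery (the database name below is just a placeholder):

    -- SIMPLE truncates the log on checkpoint; BULK_LOGGED minimally logs bulk loads.
    ALTER DATABASE MyAppDb SET RECOVERY SIMPLE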
 