SBendBuckeye
Programmer
Hello all,
We are getting ready to redo an old text file system in which each data row was basically a slab of data (think icons in Office) that was parsed in code based on various positional codes at the beginning of the row. If we normalize the data into SQL Server, we are going to end up with a few dozen little tables and a bunch of left joins, because much of the data is variable and may or may not exist for a given row.
Are there performance considerations with that many joins? I've always worked under the assumption that normalization is almost always the best way to go. To make things less cumbersome, would we be wise to create some views for the common combinations (something like the sketch below), or are there other ideas? Thanks in advance for any ideas and/or suggestions!
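To make the question concrete, here is a rough sketch of the kind of view I have in mind. The table and column names (Customer, CustomerPhone, CustomerAddress, etc.) are just placeholders, not our actual schema:

CREATE VIEW dbo.vw_CustomerCommon
AS
SELECT
    c.CustomerId,
    c.CustomerName,
    p.PhoneNumber,    -- NULL when the original row had no phone segment
    a.Street,
    a.City            -- NULL when the original row had no address segment
FROM dbo.Customer AS c
LEFT JOIN dbo.CustomerPhone   AS p ON p.CustomerId = c.CustomerId
LEFT JOIN dbo.CustomerAddress AS a ON a.CustomerId = c.CustomerId;

Multiply that by a few dozen optional tables and you can see why I'm wondering whether baking the common combinations into views like this is the right approach.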