The only case in which you can risk this type of overdesign of a table is when the data structure is such that you know positively that all data fields will never be filled to capacity. For instance, if you had a customer table in which 20 of the fields would only be filled in if the customer was a lawyer and 20 different fields would only be filled in if the customer was a doctor, then going over the capacity is OK as long as the longest possible subset of data is less than 8060 bytes. Usually, however, this is a mistake made by someone who isn't aware of the maximum and who has no idea how big the fields should be, so makes them artificially wide to accommodate any possible data.
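To make the numbers concrete, here is a sketch of such an over-wide design; the table and column names are hypothetical and the widths are just for illustration. The declared row size is well over 8060 bytes, yet no single customer would ever fill both the lawyer columns and the doctor columns:

-- Hypothetical over-wide design: the declared row size exceeds 8060 bytes,
-- but the lawyer columns and the doctor columns are never populated together,
-- so any actual row stays under the limit.
CREATE TABLE Customer
    (
    CustomerID     INT IDENTITY(1,1) PRIMARY KEY,
    CustomerName   VARCHAR(100) NOT NULL,
    -- lawyer-only fields
    BarNumber      VARCHAR(500) NULL,
    LawFirmName    VARCHAR(3000) NULL,
    PracticeAreas  VARCHAR(3000) NULL,
    -- doctor-only fields
    MedicalLicense VARCHAR(500) NULL,
    HospitalName   VARCHAR(3000) NULL,
    Specialties    VARCHAR(3000) NULL
    )

Typically a table like this is created with nothing more than a warning; the problem does not surface until someone tries to save a row whose actual data exceeds 8060 bytes.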
The choices for fixing this are as follows:
If this is a production database, find the maximum number of characters actually stored in each field and see if you can reduce the sizes of the fields to where the total is under the maximum. You might also want to test what happens if you deliberately input a record that is too long: is the user at least given an error message, or does he think his data was entered?
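As a rough sketch of that check (again using the hypothetical column names from above), compare each column's declared width with the longest value actually stored; any large gap is a candidate for shrinking:

-- Longest value actually stored in each suspect column.
SELECT
    MAX(LEN(LawFirmName))   AS MaxLawFirmName,   -- declared VARCHAR(3000)
    MAX(LEN(PracticeAreas)) AS MaxPracticeAreas, -- declared VARCHAR(3000)
    MAX(LEN(HospitalName))  AS MaxHospitalName,  -- declared VARCHAR(3000)
    MAX(LEN(Specialties))   AS MaxSpecialties    -- declared VARCHAR(3000)
FROM Customer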
If it is just in the design phase, you will need to analyze the table and see where reductions in field length are possible and whether they will get you under the limit.
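One way to do that analysis (a sketch, assuming SQL Server 2005 or later; older versions expose a similar length column in syscolumns) is to total the declared byte widths straight from the system catalog and compare the result with 8060:

-- Declared byte width of each column, widest first.
SELECT c.name, c.max_length AS DeclaredBytes
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('Customer')
ORDER BY c.max_length DESC

-- Total declared row width to compare against the 8060-byte limit
-- (ignores per-row overhead, so leave yourself some headroom).
SELECT SUM(c.max_length) AS TotalDeclaredBytes
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('Customer')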
If reducing the field sizes to get under the maximum record size is not possible, the only safe method I know of to fix this is to split the table into two or more tables that have a one-to-one relationship.
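Here is a sketch of that kind of vertical split, reworking the hypothetical customer table from above: the core columns stay in the base table, the wide, rarely filled columns move to a second table, and the one-to-one relationship is enforced by making the second table's primary key a foreign key back to the first:

-- Core customer data stays in the base table.
CREATE TABLE Customer
    (
    CustomerID   INT IDENTITY(1,1) PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL
    )

-- Wide, rarely filled columns move to a second table; its primary key is
-- also a foreign key to Customer, which enforces the one-to-one relationship.
CREATE TABLE CustomerLawyerDetail
    (
    CustomerID    INT NOT NULL PRIMARY KEY
        REFERENCES Customer (CustomerID),
    BarNumber     VARCHAR(500) NULL,
    LawFirmName   VARCHAR(3000) NULL,
    PracticeAreas VARCHAR(3000) NULL
    )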
Of course, in either case you'll also have to check and adjust, where needed, the code that references the tables when you do this, so it is a major effort to make this kind of fix.
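One way to limit how much code has to change (just a sketch, not the only option) is to put a view over the split tables that presents the original column list; existing SELECT statements can then be pointed at the view, though INSERTs and UPDATEs will still need to be rewritten against the base tables:

-- Joins the split tables back into the original shape for reads.
CREATE VIEW CustomerFull
AS
SELECT  c.CustomerID,
        c.CustomerName,
        l.BarNumber,
        l.LawFirmName,
        l.PracticeAreas
FROM Customer AS c
LEFT JOIN CustomerLawyerDetail AS l
    ON l.CustomerID = c.CustomerID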