Hello Joe -
There are various guidelines for acceptable bit error rates. For example, the IEEE has a guideline. I do not remember what it is, but I seem to recall that it is much lower than 1 in 5,000. I think it is more like 1 in a billion.
The IEEE guideline is oriented toward what a very healthy LAN should be able to attain on a cable. Most real-world networks run with higher error levels that don't matter much in practice, so few network administrators go to the bother of trying to achieve the error level of the IEEE guideline.
I would be surprised to learn that the IEEE guideline takes into account the possibility of duplex mismatches. For this reason and the reason I gave in the previous paragraph, others recommend much more liberal bit error rates.
At
Cisco points out that you should expect a lower error rate on full-duplex links than on half-duplex links. They go on to say that an error rate of one percent is acceptable, and that at an error rate of 2-3 percent you may see noticeable performance degradation.
Hewlett-Packard has a document at
that covers somewhat similar ground to the Cisco document I referenced above. H-P's rule of thumb is to aim for no more than one error packet in 5,000.
Why do Cisco's recommendations and H-P's differ? The error rate at which a user will notice performance degradation depends very much on the nature of the traffic. Factors such as transport window size, retry latency, clustering of bit errors (that is, are they bunched together or spread out?), and end-to-end latency determine how much a particular bit error rate degrades performance. Cisco and H-P are likely basing their numbers on somewhat different traffic assumptions.
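One way to see how a raw bit error rate relates to H-P's one-packet-in-5,000 figure is to convert BER into a packet error rate. This is my own back-of-the-envelope sketch, not from either vendor's document, and it assumes bit errors are independent (real errors are often bursty, which makes things look better per packet than this predicts):

```python
def packet_error_rate(ber: float, frame_bytes: int = 1500) -> float:
    """Probability that at least one bit in a frame is corrupted,
    assuming independent bit errors (a simplifying assumption)."""
    bits = frame_bytes * 8
    return 1.0 - (1.0 - ber) ** bits

# With a BER of 1e-8 and full-size 1500-byte Ethernet frames:
per = packet_error_rate(1e-8)
# per is roughly 1.2e-4, i.e. about one bad packet in 8,300 --
# in the same ballpark as H-P's one-in-5,000 rule of thumb.
```

Note how sensitive the result is to frame size: the same BER produces far fewer errored packets on small frames, which is one more reason the vendors' thresholds don't line up exactly.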
A very simple approach is to let the ProCurve Fault Finder feature (look for "FFI" entries in the Event Log) tell you whether you have too many errors. You can adjust the Fault Finder sensitivity based on how aggressively you want to pursue a very clean LAN.
Regards,
Ralph