I'm trying to run some benchmarks on network storage devices (specifically a Buffalo TeraStation), so I'm copying huge directories onto it directly from a W2K server to the NAS box, no switch in between, using a TRENDnet gigabit card and a 3ft Belkin Cat6 cable. I'm using perfmon to graph the throughput, but I don't know if I'm doing my calculations correctly. It seems like I'm off somewhere.
Perfmon is averaging 3,000,000 (7 digits), and it says the unit of measurement is bytes/second. So if I take that, multiply by 8 to get bits/second, then divide by 1000 (or 1024) to get kilobits/second, and divide again by 1000 (or 1024) to get megabits/second, I come up with 24 Mbps (or 22.89 Mbps). Is this the same unit of measurement as the 1000 Mbps stamped on a gigabit network card as the maximum theoretical throughput?
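Just to sanity-check my arithmetic, here's a quick throwaway Python script. The 3,000,000 figure is hard-coded from my perfmon average, so treat it as an example input rather than anything definitive:

    # Convert perfmon's bytes/sec average into megabits/sec two ways
    bytes_per_sec = 3000000                   # perfmon "Bytes Total/sec" average
    bits_per_sec = bytes_per_sec * 8          # 24,000,000 bits/sec
    mbps_si = bits_per_sec / 1000 / 1000      # 24.0 using decimal (1000-based) prefixes
    mbps_binary = bits_per_sec / 1024 / 1024  # ~22.89 using binary (1024-based) prefixes
    print(mbps_si, mbps_binary)

My understanding is that NIC ratings like "1000 Mbps" use the decimal (1000-based) prefixes, so the 24 Mbps figure should be the right one to compare against, but corrections welcome.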
If my numbers are correct, that seems awfully low. I realize there are other factors such as hard drive speed and TCP/IP overhead, but I would think 70 Mbps would be reasonable even for a 100 Mbps LAN.
Any ideas/corrections?