
1 bit vs. 1 byte clarification

Status
Not open for further replies.
Aug 21, 2002
31
US
OK, the way I understand a bit versus a byte is that it takes 8 bits to make one byte. So a 100 megaBIT network connection is really 100/8 = 12.5 megaBYTES per second. Am I getting this correct? Is this why it takes 2 minutes to transfer a 200 megaBYTE file instead of 2 seconds? Thanks for the clarification.
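As a quick sanity check, the bit-to-byte arithmetic above can be sketched in Python (the helper name is mine, just for illustration):

```python
# Convert a link speed in megabits per second to megabytes per second.
# 8 bits make one byte, so divide the bit rate by 8.
def mbps_to_mbyte_per_sec(mbps):
    return mbps / 8

print(mbps_to_mbyte_per_sec(100))  # a 100 Mbit/s link moves at most 12.5 MByte/s
```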
 
Your bits-and-bytes assumption is correct, but you left off the overhead.
For instance, take serial:
1 start bit, 8 data bits, 1 parity bit, 1 stop bit.
In networking, the actual data is packaged with headers and trailers you never see, and collisions with other transmissions cause data to be re-transmitted. The operating system also has built-in delays, and sending data in packets adds to the overhead.
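The serial framing Ed describes makes the overhead easy to count: every 8 data bits cost 11 bits on the wire. A minimal sketch (the line rate of 115200 bps is just an example, not from the thread):

```python
# Serial framing from the post: 1 start + 8 data + 1 parity + 1 stop bits,
# so 11 bits travel on the wire for every 8 bits of actual data.
FRAME_BITS = 1 + 8 + 1 + 1

def effective_byte_rate(line_bps):
    """Data bytes per second once framing overhead is counted."""
    return line_bps / FRAME_BITS

# A 115200 bps serial line carries at most ~10472 data bytes per second,
# not the 115200 / 8 = 14400 you'd get by ignoring the framing.
print(effective_byte_rate(115200))
```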
Ed Fair
unixstuff@juno.com
Any advice I give is my best judgement based on my interpretation of the facts you supply. Help increase my knowledge by providing some feedback, good or bad, on any advice I have given.
 
You also have to remember that network speeds are a theoretical maximum. Many factors can cause degradation in speed (number of users at a particular time, electrical interference, cable quality, etc.). It is rare to reach the full rated speed of your connection.
 
Rentvillas,
Yes, a 100Mbps network would have a theoretical maximum of 12.5 MByte/sec ("theoretical" as OnlineIT mentioned). However, in your example of a 200MB file, it would be 16 seconds vs. 2 seconds, not 2 minutes vs. 2 seconds.

You'd be lucky, however, to get that file over in less than a minute, because the read/write path from hard drive to protocol and back creates a slight bottleneck. Plus, the fastest hard drives only average around 40 MByte/sec during sequential reads, and in a normal setup with average fragmentation a more realistic estimate would be around 10 MByte/sec.
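The transfer-time arithmetic in this post works out like so (a quick Python sketch; the numbers are the ones quoted above):

```python
# Time to move a file at a given sustained rate: size divided by rate.
def transfer_seconds(file_mbytes, rate_mbyte_per_sec):
    return file_mbytes / rate_mbyte_per_sec

print(transfer_seconds(200, 12.5))  # theoretical best on 100Mbps: 16.0 seconds
print(transfer_seconds(200, 10))    # realistic estimate at 10 MByte/sec: 20.0 seconds
```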
~cdogg

"The secret to creativity is knowing how to hide your sources."
- A. Einstein
 
On paper you have a theoretical speed, but the speed you reach in practice is another thing: signal loss over distance, signal-to-noise ratio, interference, network congestion, packet size, etc. Compressing a file before sending it might make the transfer faster. If you do not like my post, feel free to point out your opinion or my errors.
 
So what would a 56K modem transfer in Megabytes/Sec? ~Apex1x
Sattinger's Law:
It works better if you plug it in.
 
Whenever you're working in bits, you simply want to divide the number by 8 to get bytes.

So a 56kbps (kilobits per second) connection would max out at 7KBps (kilobytes per second).

However, when dealing with 56K connections, there is one important exception. The fastest US standard on phone lines is 53,300bps, or 53.3kbps; the remaining 2.7kbps is reserved for reasons I won't get into. So the max is actually closer to 6.7KBps.
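The same divide-by-8 rule covers Apex1x's modem question; a short Python sketch of the figures above:

```python
# Convert a modem speed in kilobits per second to kilobytes per second.
def kbps_to_kbyte_per_sec(kbps):
    return kbps / 8

print(kbps_to_kbyte_per_sec(56))    # nominal 56K: 7.0 KByte/s
print(kbps_to_kbyte_per_sec(53.3))  # US line-rate cap: roughly 6.66 KByte/s
```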

There are other specifics about how a 56K modem works that we won't get into. Just know that you're not connected at the max speed of 53.3kbps 100% of the time; the speed shifts up and down throughout the connection...
~cdogg

"The secret to creativity is knowing how to hide your sources."
- A. Einstein
 
