So, I'm a somewhat experienced Perl user, but I'm new to using sockets. I've been developing a TCP server and client where the client makes requests (small packets) and the server returns data (lots of it). I was developing both parts on a single machine, so I didn't run into network MTUs until later. (I was transferring strings with a max length of 999999. Ah, the vagaries of youth...)
So, I woke up today and decided I would need to chunk my data. I used substr to break my long strings into 1400-character pieces (so as to stay under the MTU) in an array, and then I send each element of the array until I'm done. Then I send an effective EOF (a separate packet). That seems to work pretty well.
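Roughly, the chunking looks like this (a sketch, not my exact code; `chunk_message` and `$EOF_MARK` are names I made up for illustration):

```perl
use strict;
use warnings;

my $CHUNK_SIZE = 1400;        # stay under a typical Ethernet MTU
my $EOF_MARK   = "__EOF__";   # sentinel record sent after the data

# Split a long string into pieces of at most $CHUNK_SIZE characters.
sub chunk_message {
    my ($data) = @_;
    my @chunks;
    for (my $pos = 0; $pos < length $data; $pos += $CHUNK_SIZE) {
        push @chunks, substr($data, $pos, $CHUNK_SIZE);
    }
    return @chunks;
}

# The sending side then does something like:
#   print $client "$_\n" for chunk_message($payload);
#   print $client "$EOF_MARK\n";
```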
On my client side, it opens a socket to the server when it has a request to make and then goes into a loop reading the results until it reaches the EOF packet. I have a packet counter on both the client and the server, and I log how many are sent and received.
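The receive loop is shaped like this (again a sketch; in my real code `$fh` is the IO::Socket::INET connection, and `$EOF_MARK` is my sentinel):

```perl
use strict;
use warnings;

my $EOF_MARK = "__EOF__";

# Read newline-delimited records until the EOF sentinel arrives.
sub read_until_eof {
    my ($fh) = @_;
    my @records;
    while (defined(my $line = <$fh>)) {
        chomp $line;
        last if $line eq $EOF_MARK;   # sentinel ends the response
        push @records, $line;
    }
    return @records;   # if the sentinel never arrives, <$fh> blocks forever
}
```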
One of the requests is on a repeating timer and is made every 7 seconds. It usually works; about 1 in 10 doesn't. The server sends all of the packets, but the client seems to receive all but the EOF packet. I think my logic is sound, since it works most of the time... but when the client never receives the EOF, it goes out to lunch and never comes back.
If I watch a netstat loop on my port, I seem to rack up a lot of TIME_WAIT and CLOSE_WAIT connections. I'm not sure that there's a direct impact, but I think it can't be good.
So: here are the questions... I'm not forking the server on each client connection. Would that help?
The server opens a new socket when first started and then just runs. I never close and reopen it. That seems pretty standard, right?
The client opens a new socket for each request and then sends as many chunks as I have and then closes the socket. At least I think I'm closing it. Does the 'close($socket)' go in the '$client=$socket->accept' routine?
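For reference, here is the pattern I think applies, assuming I've understood the docs right: the listening socket stays open for the life of the server, and `close` is called on the per-connection handle that `accept` returns (the port here is just an ephemeral placeholder):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# The long-lived listening socket, created once at startup.
my $server = IO::Socket::INET->new(
    Proto     => 'tcp',
    LocalAddr => '127.0.0.1',
    LocalPort => 0,        # ephemeral port, for illustration only
    Listen    => 5,
    ReuseAddr => 1,
) or die "listen failed: $!";

# Inside the serving loop, per connection:
#   my $client = $server->accept;
#   ... read the request, send the chunks and the EOF record ...
#   close $client;         # close the accepted handle...
# ...but never close($server) between requests.
```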
Do I need to wait/sleep somewhere to avoid slamming the server? A file I transferred was 3000-ish packets, so I can't sleep very long on each loop and still get anything done. Also, all of those packets seem to transfer just fine; it's the final EOF packet that's problematic, if anything.
Thanks,
xhonzi