
Perl: IO::Socket, MTUs, Chunking, Missing Packets, Etc...


xhonzi · Programmer · Jul 29, 2003
So, I'm a somewhat experienced Perl user, but I'm new to using sockets. I've been developing a TCP server and client where the client makes requests (small packets) and the server returns data (lots of it). I was developing both parts on a single machine, so I didn't run into network MTUs until later. (I was transferring strings with a max length of 999999. Ah, the vagaries of youth...)

So, I woke up today and decided I would need to chunk my data. I used substr to break my long strings into 1400-character elements (so as to stay under the MTU) in an array, and then I send each element of the array until I'm done. Then I send an effective EOF (a separate packet). That seems to work pretty well.

On the client side, it opens a socket to the server when it has a request to make and then goes into a loop, reading results until it reaches the EOF packet. I have a packet counter on both the client and the server, and I log how many are sent and received.

One of the requests repeats every 7 seconds. It usually works; about 1 in 10 doesn't. The server sends all the packets, but the client seems to receive everything except the EOF packet. I think my logic is sound, since it works most of the time... but when the client never receives the EOF, it goes out to lunch and never comes back.

If I watch a netstat loop on my port, I seem to rack up a lot of TIME_WAIT and CLOSE_WAIT connections. I'm not sure that there's a direct impact, but I think it can't be good.

So: here are the questions... I'm not forking the server on each client connection. Would that help?

The server opens a new socket when first started and then just runs. I never close and reopen it. That seems pretty standard, right?

The client opens a new socket for each request, handles as many chunks as there are, and then closes the socket. At least I think I'm closing it. Does the 'close($socket)' go inside the '$client = $socket->accept' loop?

Do I need to wait/sleep somewhere to avoid slamming the server? A file I transferred was 3000-ish packets, so I can't sleep very long on each loop and still get anything done. Also, all of those packets seem to transfer fine. It's the final EOF packet that's problematic, if anything.

Thanks,
xhonzi
 
I thought the underlying TCP/IP stack handles chunking the data you send for MTU reasons. I have sent a lot of data across sockets and never chunked it up manually.

You could post your code, though... that might help.
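
Roughly what I mean, as a bare-bones sketch (nothing here is your actual code -- the host, port and end-of-data marker are stand-ins): you hand the whole string to a single print/send and let the stack worry about the MTU; the only thing the receiving end has to do is keep reading until it has seen the whole message, because the read boundaries won't line up with the send boundaries anyway.
Code:
use strict;
use IO::Socket::INET;

# Bare-bones sketch; the host, port and the EOF_MARKER string are stand-ins.
my $sock = IO::Socket::INET->new(
    PeerAddr => 'server.example.com',
    PeerPort => 7098,
    Proto    => 'tcp',
) or die "connect failed: $!";

# One print is enough even for a huge string -- the TCP stack cuts it into
# MTU-sized segments on its own (recent IO::Socket versions turn autoflush
# on by default).
print $sock "here is my request\n";

# Reading is where the work is: each sysread returns whatever bytes have
# arrived (up to 8192 here), which may be more or less than one of the
# sender's "chunks", so keep appending until the end-of-data marker shows up.
my $answer = '';
while (1) {
    my $n = sysread($sock, my $buf, 8192);
    last unless defined $n && $n > 0;            # 0 bytes = sender closed the connection
    $answer .= $buf;
    last if index($answer, 'EOF_MARKER') >= 0;   # marker can arrive glued to the data
}
close $sock;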


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Travis - Those who say it cannot be done are usually interrupted by someone else doing it; Give the wrong symptoms, get the wrong solutions;
 
Unfortunately, the machine where I work doesn't connect to the internet, so I can't just copy and paste it. I suppose I could retype it... the relevant parts aren't too long...

I was hoping that IO::Socket would chunk it for me. The books I have mention the maxlength, but never mention setting it above the MTU. But as soon as I was passing the files between boxes (and a firewall and half a dozen switches and routers, so maybe that's the problem), my strings were showing up at around 1400 characters. I thought "MTU" and asked some coworkers, who said they thought I had to chunk it manually.

thanks for reading,

Xhonzi
 
I'm not sure now :(.. I will go read up on it and double check.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Travis - Those who say it cannot be done are usually interrupted by someone else doing it; Give the wrong symptoms, get the wrong solutions;
 
Server code:
Code:
use IO::Socket;

my $port = 7098;
my $msg_len=128;
my $too_long=1400;
my @answers;

my $socket = new IO::Socket::INET(
      LocalPort => $port,
      Type      => SOCK_STREAM,
      Reuse     => 1,
      Listen    => 5
   ) or &warn_msg("Error Creating Socket: $!");

while ($client = $socket->accept) {
  $client->recv($recv_msg, $msg_len);

  if (defined($recv_msg) && $recv_msg =~ "DMATT" ) {
    &process_msg($recv_msg);  # This calls the subroutine to generate an array called @answers with the chunked up data
    $chunks=1;
    foreach (@answers) {
      &LOG(5, "Sending Chunk: $chunks");
      &LOG(9, "$_\n");
      $chunks++;
      $client->send($_) or &warn_msg("send routine : $@");
    }
    $client->send("DMATT::END") && &LOG(5,"Sending End Message: (Chunk $chunks)") or &warn_msn("send routine : $@");
  }  
}

#close $socket;     #Not sure which I'm supposed to call close on..?
close $client;

sub process_msg {
<blah blah blah>
  $long_answer=<lots of text>;
  @answers=();
  while (length($long_answer) > $too_long) {
    push (@answers, substr($long_answer, 0, $too_long));
    substr($long_answer,0,$too_long)="";
  }
  push (@answers, substr($long_answer, 0));
}
 
Client code:
Code:
use IO::Socket;

my $port=7098;
my $msg_len=1400; #This must be >= $too_long in server code


sub sockItToMe { #This is called by various other subroutines omitted (because my hands hurt already)

  my ($recv_msg, $long_answer, $chunks);
  my $send_msg=$_[0];  #get the string to send from calling sub

  $socket = new IO::Socket::INET(
       PeerAddr  => 'XXX.XX.XX.XX',
       PeerPort  => $port,
       Proto     => 'tcp',
       Type      => SOCK_STREAM,
       Timeout   => 5
       ) or $mw->messageBox(blah);
  
  $socket->send($send_msg);
  warn "Sending Request ($send_msg) to XXX\n");
  $chunks=0;
  while ($recv_msg ne "DMATT::END") {
    $long_answer.=$recv_msg;
    $socket->recv($recv_msg, $msg_len);
    #warn "Received response $recv_msg\n";
    $chunks++;
    warn "Received output chunk: $chunks (", scalar localtime, ")\n";
  }
  close($socket) || warn "Couldn't close socket";
  return $long_answer;
}
 
I guess the server code should have the
Code:
close $client
actually in the loop where $client exists...
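
Something like this, I think -- a sketch of the same accept loop with the close moved inside it (error handling trimmed):
Code:
while ($client = $socket->accept) {
  $client->recv($recv_msg, $msg_len);
  if (defined($recv_msg) && $recv_msg =~ /DMATT/) {
    &process_msg($recv_msg);    # fills @answers with the chunked-up data
    foreach (@answers) {
      $client->send($_) or &warn_msg("send routine : $!");
    }
    $client->send("DMATT::END");
  }
  close $client;   # close this client's connection before accepting the next one
}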

xhonzi
 
Okay (and yes, I know I'm mostly talking to myself), I moved the close statement up, and it looks like my success ratio is a lot higher.

That is to say, I still have a lot of TIME_WAITs stacking up, but the "ESTABLISHED" connections that would never go away seem to have vanished. I am no longer seeing the problem where I don't get my last "DMATT::END" packet even though my server logs that it did send it.

BUT: now I have this odd little problem where, at approximately the same frequency... the server sends the same number of packets as normal (including the end), but then the client starts "receiving" like crazy. I added a little code to exit once $chunks reaches something "crazy" like 10,000. Any idea what would cause this? I've seen the same behaviour when I've CTRL-C'd the server in the middle of a send.

Any ideas why I get caught in that loop? By my understanding, I'm only in that loop when
1) the client and server are connected
2) The message isn't DMATT::END
so I'm not sure which situation is getting screwed up.
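
One theory (just a guess on my part): once the far end has closed or dropped the connection, recv() stops blocking and keeps handing back an empty string, and since '' ne "DMATT::END" is always true, the loop spins as fast as it can. Checking how many bytes actually came back would at least break out of it -- a rough, untested sketch against the same loop, using the same $socket and $msg_len as my client code above:
Code:
my ($recv_msg, $long_answer, $chunks) = ('', '', 0);
while ($recv_msg ne "DMATT::END") {
  defined $socket->recv($recv_msg, $msg_len) or last;   # recv() itself failed
  last if length($recv_msg) == 0;                       # 0 bytes: the connection is gone
  $long_answer .= $recv_msg unless $recv_msg eq "DMATT::END";
  $chunks++;
  warn "Received output chunk: $chunks (", scalar localtime, ")\n";
}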

Here's a generic question:
Should I be connecting and disconnecting my client upon each request/result interchange? It seems natural to, but I wasn't sure whether the intent is to make one connection from the client and then send all of my messages across that one connection until the GUI client is actually closed.

Thanks for reading,
xhonzi
 
Here's how I do it on the server side
Code:
use IO::Socket;
use strict;

#Set autoflush to true
$|=1;

#needed to stop zombie procs
$SIG{CHLD} = 'IGNORE';

### Socket
my $sock = new IO::Socket::INET (LocalHost => "testmachine",
                               LocalPort => 5000,
                               Listen    => 5,
                               Proto     => 'tcp',
                               Reuse     => 1,
                              );
die "Socket could not be created. Reason: $!\n" unless ($sock);

#Wait for new connections
while (my $new_sock = $sock->accept())
{
    $new_sock->autoflush(1);

    #Fork off connection so I can listen for more connections
    my $pid = fork();
    if ($pid == 0)
    {
        close($sock);
        my @sockarray;

        #Read in buffer
        while (my $buf = <$new_sock>)
        {
            #push info into an array
            push @sockarray, $buf;

            #if we get bye we're done
            if ($buf =~ /bye/)
            {
                close($new_sock);
            }
        }

        #Get rid of new lines
        chomp @sockarray;

        exit;
    }
    close($new_sock);
}

The client connects, sends the data, sends bye, then disconnects. This was written a very long time ago, when I was just learning Perl (7-8 years ago), but it has run flawlessly the whole time.
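
The client side is only a few lines -- roughly like this, from memory, so treat it as a sketch rather than the exact script (same made-up host name as the server above): connect, print the lines, print bye, disconnect.
Code:
use IO::Socket;
use strict;

my $sock = new IO::Socket::INET (PeerAddr => "testmachine",
                                 PeerPort => 5000,
                                 Proto    => 'tcp',
                                );
die "Socket could not be created. Reason: $!\n" unless ($sock);
$sock->autoflush(1);

#Send the lines, then "bye" so the server-side loop knows we're done
print $sock "first line of data\n";
print $sock "second line of data\n";
print $sock "bye\n";

close($sock);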

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Travis - Those who say it cannot be done are usually interrupted by someone else doing it; Give the wrong symptoms, get the wrong solutions;
 
So, the fundamental differences are

1) forking off each new client connection
2) autoflush
3) ?

Care to hazard a guess at which one it is?

xhonzi
 
Okay, so I took a closer look at this and noticed that the packets I'm sending aren't the packets I'm retrieving... I've been reading about packet filling, where basically the network waits until there's a full packet before sending, regardless of the chunking I did on my end. I think that's the source of my problem, but I've already set autoflush(1), so I'm not sure what else to do except change my ($msg ne "DMATT::END") to ($msg !~ /DMATT::END/) or whatever.

If I inserted a 500 millisecond sleep before transmitting the terminating message, would that keep the little packet from transmitting along with the big one?
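
For the record, the shape I'm leaning towards is to stop comparing individual recv() calls against the terminator at all: append everything into one buffer and scan the buffer for DMATT::END, so it doesn't matter whether the marker rides in on its own packet or glued to the last chunk (and then the sleep shouldn't be needed either). Untested sketch, same $socket and $msg_len as my client code above:
Code:
my ($recv_msg, $long_answer, $chunks) = ('', '', 0);
until ($long_answer =~ /DMATT::END/) {
  defined $socket->recv($recv_msg, $msg_len) or last;   # recv() error
  last if length($recv_msg) == 0;                       # connection closed
  $long_answer .= $recv_msg;
  $chunks++;
  warn "Received output chunk: $chunks (", scalar localtime, ")\n";
}
$long_answer =~ s/DMATT::END//;   # strip the marker before handing the answer back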

travs69, did you ever read up on whether Perl should chunk the data automatically or not?

thanks again,
xhonzi
 