Tek-Tips is the largest IT community on the Internet today!


How to find how much memory is used by a program

Status
Not open for further replies.

r2d22y

Technical User
Nov 30, 2004
46
SE
Hello

Is there a way to trace how much memory a Perl program is consuming while it's running?

I want to see what happens while the program is executing and how its memory use grows over time.

I have tried "top" (Linux), but with that I can't see where the memory-consuming part is.

Regards/D_S

 
I would call 'ps' in a loop, giving it the process ID of the perl script, something like this. (displaying my very rusty shell skills here....)

{
    while true
    do
        ps -o vsz=,rss= -p "$PID"   # just the one process; $PID holds its process ID
        sleep 1
    done
} > /tmp/ps_output

Mike

I am not inscrutable. [orientalbow]

Want great answers to your Tek-Tips questions? Have a look at faq219-2884

 
You'd need to press 'M' within [tt]top[/tt] to sort by memory usage (see man top for more details).
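If you'd rather not drive top interactively, a one-shot ps sorted by resident size gives similar information (a sketch assuming the procps flavour of ps found on most Linux systems; BSD ps spells these options differently):

```shell
# One-shot snapshot of the biggest memory users, sorted by resident size
ps -eo pid,vsz,rss,comm --sort=-rss | head -n 5
```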

Out of interest, why is it an issue? Perl is pretty good with memory management if you adopt sound programming practices. Are you worried about a choice of algorithm or approach?

Yours,

fish

["]As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.["]
--Maur
 
With the Tk module installed on Linux, you could add something like this to get the info from /proc/$pid/status:

Code:
#!/usr/bin/perl -w
use strict;
use Tk;

my ($pid, $used, %bigvar, $count);
if ($ARGV[0]) { $pid = $ARGV[0] }
else          { $pid = $$ }

update_used();   # read the initial VmSize

my $mw = tkinit;
my $memory = $mw->Entry(-textvariable => \$used)->pack;
$mw->repeat(1000, \&update_used);    # refresh the reading every second
$mw->repeat(100,  \&do_something);   # simulate a memory-hungry task
$count = 0;
MainLoop;

# Read the VmSize line from /proc/$pid/status into $used
sub update_used
  {
  open(FILE, "/proc/$pid/status") or die "Failed to open /proc/$pid/status: $!\n";
  while (<FILE>) { next unless /^VmSize:/; s/.*: +//; $used = $_; }
  close(FILE);
  }

sub do_something
  { for (my $c = 0; $c < 1000; $c++) { $bigvar{$count} = $count; $count++ } }

Of course, without something to delete elements of %bigvar, this will continue to use up memory until it runs out =)
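For a quick check without Tk, the same /proc reading can be done before and after a memory-hungry step (a minimal sketch, Linux-only since it relies on /proc; the deliberately wasteful hash is just there to make the number move):

```perl
#!/usr/bin/perl -w
# Non-GUI sketch (Linux-only): compare this process's VmSize
# before and after deliberately building a large hash.
use strict;

# Pull the VmSize value (in kB) out of /proc/$$/status
sub vmsize {
    open(my $fh, '<', "/proc/$$/status")
        or die "Cannot open /proc/$$/status: $!\n";
    while (<$fh>) {
        return $1 if /^VmSize:\s+(\d+)/;
    }
    die "No VmSize line found\n";
}

my $before = vmsize();
my %bigvar = map { $_ => $_ } (0 .. 500_000);   # waste memory on purpose
my $after  = vmsize();
printf "VmSize: %d kB before, %d kB after building %d keys\n",
    $before, $after, scalar keys %bigvar;
```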
 
Why I need to know: I'm writing a module for an existing application that will generate statistics. I'm storing selected values (from MySQL) in a couple of hash tables and making some calculations... BUT it consumes 150 MB of memory (!!!) if I'm summarizing a month between 00:00-24:00. This is really not acceptable. As far as I know, I don't do any unnecessary loops, so I would like to see where this peak (150 MB) is in my program.

Why I stored all the values in hash tables was because it seemed to take less time for all the calculations compared to making a lot of SELECT statements, but what I have won in time I seem to have lost in memory consumption.

What do you recommend me to do (algorithm/approach) when there is a lot of data to take care of? Is it a good idea to store data in hash tables, or is there a more efficient approach?

Thanks for your help...

/D_S


 
The experience that I can compare to your problem is mine of extracting data from web logs hourly for statistical purposes. Simply looping through each line (or record from a select), and counting whatever matches in various accumulators, worked. I was handling roughly 10 GB of compressed access logs per day, so processing time was paramount to make sure one run didn't overlap the next.

while (my $line = get_record_from_select())
{
    $count++;
    if ($line =~ /from: someone/) { $msg_from_someone++ }
    if ($line =~ /red/)           { $red++ }
    if ($line =~ /etc/)           { $etc++ }
}

Minimize the number of variables that stay around holding data when they don't have to. It does depend on what statistics you're keeping and how complex they are.
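The counting loop above can be fleshed out like this: fold each record into small per-hour totals as it arrives and then let it go, so memory stays proportional to the number of counters rather than the number of rows (the "hour,value" record format and the in-memory record list are made up for illustration, not the poster's real schema):

```perl
#!/usr/bin/perl -w
# Sketch: streaming aggregation - keep only small per-hour totals;
# each record is counted and then discarded, so memory stays
# proportional to the number of counters, not the number of rows.
use strict;

# Stand-in for rows coming back one at a time from a SELECT;
# the "hour,value" format is invented for this example.
my @records = ("00,5", "00,7", "13,2");

my %total_by_hour;
my $rows = 0;
foreach my $line (@records) {
    my ($hour, $value) = split /,/, $line;
    $total_by_hour{$hour} += $value;   # accumulate, then let the row go
    $rows++;
}
printf "%d rows, hour 00 total = %d\n", $rows, $total_by_hour{'00'};
```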
 
Kordaff is correct - there are usually techniques that allow you to process the data as it arrives rather than reading it all into a hash and processing the hash. We'd need to know more about what you're attempting to help further.

Yours,

fish

["]As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.["]
--Maur
 