
Perl script in crontab vs exporting env variables


Zteev

Programmer
Jul 28, 2003
Hi people,

I've written a script that pings a mission-critical machine and, if there is no response, sends me a mail. I have scheduled that script in crontab to run every minute.

However, if the machine is down for some hours, the script will still send me a mail every minute, which I don't want!

I thought of exporting a custom env variable (machineup=n) when the machine is found offline, after sending the first and only mail. On later executions the script would check that variable before sending: as long as machineup=n, it wouldn't send the mail again, until machineup=y.

I have been told that exporting system-wide env variables isn't possible from Perl; I can only set env variables for the current process and its children.

Is there any way I can do this? Any ideas?

Here is my code:
Code:
#!/usr/bin/perl

#
# basemon.pl
#
#   Monitoring program: ping the base host and send a mail alert if it is down

use strict;
use warnings;

use Mail::Sendmail;
use Net::Ping;

my $host = "172.17.3.5";
my $p    = Net::Ping->new();    # default protocol and timeout

# Ping with a 5-second timeout; if there is no response, mail the alert
if ( !$p->ping( $host, 5 ) ) {
    my %mail = (
        To      => 'me@here.com',
        From    => "$ENV{HOSTNAME}\@smurfit.com",
        Subject => 'ALERT : Base is down!',
        Message => " Base is down ...",
    );
    sendmail(%mail) or die $Mail::Sendmail::error;
}
$p->close();

Thanks in advance for your suggestions,

Steve
 
I generally handle state persistence through INI files. However, I believe there are modules geared specifically toward program state persistence if you think a plain text file is too simplistic.
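
To illustrate the idea, here is a minimal sketch of keeping the last-known state in a small file between cron runs; the path /tmp/basemon.state and the warn in place of the real Mail::Sendmail call are just assumptions for the example:

Code:
#!/usr/bin/perl
use strict;
use warnings;
use Net::Ping;

my $state_file = '/tmp/basemon.state';    # hypothetical state file
my $host       = '172.17.3.5';

# Read the last recorded state; assume 'up' if the file doesn't exist yet
my $last_state = 'up';
if ( open my $fh, '<', $state_file ) {
    my $line = <$fh>;
    close $fh;
    if ( defined $line ) {
        chomp $line;
        $last_state = $line;
    }
}

# Check the host now
my $p     = Net::Ping->new();
my $state = $p->ping( $host, 5 ) ? 'up' : 'down';
$p->close();

# Only alert on the transition from up to down, not on every failed check
if ( $state eq 'down' && $last_state eq 'up' ) {
    warn "Base is down - send the alert mail here\n";
}

# Persist the current state for the next cron run
open my $out, '>', $state_file or die "Cannot write $state_file: $!";
print {$out} "$state\n";
close $out;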
 
Steve,

I'd say the bosses are all in Punchestown this week?

Log your downtimes in a logfile, and compare them against escalation thresholds of, say, initial, 5 mins, 15 mins, 30 mins, 1 hour.

By querying the logfile for status you can also report the downtime in your e-mail. Log the first noted uptime as well, as you'll need it to see whether the machine came back up at all.

Code:
1083173014::172.17.3.5::Down
1083173074::172.17.3.5::Up
...

Read the logfile into an array and pop/push entries into another array until the status is Up. This'll give you the total downtime, which you can include in your e-mail, and also a record of outages for when the BS hits the extraction device.
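
As a rough sketch of that calculation (the filename /var/log/basemon.log is an assumption; the field layout follows the epoch::host::status sample above):

Code:
#!/usr/bin/perl
use strict;
use warnings;

my $logfile = '/var/log/basemon.log';    # hypothetical logfile in epoch::host::status format

open my $fh, '<', $logfile or die "Cannot read $logfile: $!";
my @entries = map { chomp; [ split /::/ ] } <$fh>;
close $fh;

# Walk backwards: remember the Up entry closest to the most recent Down
my ( $down_time, $up_time );
for my $entry ( reverse @entries ) {
    my ( $epoch, $host, $status ) = @$entry;
    next unless defined $status;
    if ( lc $status eq 'up' ) {
        $up_time = $epoch;               # overwritten until we hit the Down before it
    }
    elsif ( lc $status eq 'down' ) {
        $down_time = $epoch;
        last;
    }
}

if ( defined $down_time ) {
    my $end     = defined $up_time ? $up_time : time;    # still down? measure up to now
    my $minutes = int( ( $end - $down_time ) / 60 );
    print "Last outage lasted about $minutes minute(s)\n";
}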

HTH
--Paul
 
If it runs every minute, why not have it run forever with a sleep loop?
 
Things that run forever have a nasty tendency to eat resources and gobble your uptime.

That's my opinion, it's just one :)
 
If your daemon sleeps 99.999% of the time, it's hard to imagine it would
"eat resources and gobble your uptime" - even less likely if it simplifies your code.

In your opinion, does cron every minute consume more, less, or the same resources as sleeping for a minute?

How many run-forever processes (e.g., bdflush, syslogd, etc.) are running on your system right now?
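
For comparison, a bare-bones sketch of the run-forever approach being discussed; the one-minute interval mirrors the cron schedule, and the warn stands in for the real mail call:

Code:
#!/usr/bin/perl
use strict;
use warnings;
use Net::Ping;

my $host     = '172.17.3.5';
my $interval = 60;      # seconds between checks, same as the cron schedule
my $was_up   = 1;       # assume the host starts out reachable

while (1) {
    my $p  = Net::Ping->new();
    my $up = $p->ping( $host, 5 ) ? 1 : 0;
    $p->close();

    # Only alert on the transition from up to down, not on every failed check
    if ( !$up && $was_up ) {
        warn "Base is down - send the alert mail here\n";
    }
    $was_up = $up;

    sleep $interval;    # spend nearly all of the time sleeping
}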
 
Ok. I can try both. I could make my Perl script a daemon. But I need to know a couple of things first.

What is the best way to parse command line arguments?

Actually, the only argument the script accepts is the configuration file listing the hosts the user wants to monitor. I would also need start and stop arguments, to start and stop the process!

What is the best way to accept -x switches?

Example:

./monitor.pl -c /etc/monitor.conf

-c and /etc/monitor.conf are actually two arguments, but I don't know what the best way to handle this would be in case I have to add more command-line switches later...

Any suggestions?

Thanks,

Steve
 
If you use crontab, you don't have to worry about how to shut it down. (Or, for the truly paranoid, you don't need to run another monitor program to check if your monitor daemon is still running...)

I vote for PaulTEG's solution, as it gives you a log of uptimes as well. The script just has to compare the current ping response with the state recorded in the last log entry, and only send an email when it switches state.
 
Concerning command-line options, check Getopt::Long.

Example
Code:
use Getopt::Long;

my @optionlist;    # list of the option specifications accepted on the command line
my %options;       # hash that receives the options captured from the command line

# Note: the specs take no leading dash; on the command line the options are
# passed as --name VALUE and --amount VALUE (or -name / -amount)
@optionlist = qw(name=s amount=s);
GetOptions( \%options, @optionlist );
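
Applied to the monitor script's case, a possible sketch (the config|c alias and the default path are assumptions, not something Getopt::Long dictates):

Code:
use Getopt::Long;

my %options;
# 'config|c=s' accepts either --config /etc/monitor.conf or -c /etc/monitor.conf
GetOptions( \%options, 'config|c=s' )
    or die "Usage: $0 -c /path/to/monitor.conf\n";

my $conf = $options{config} || '/etc/monitor.conf';    # assumed default path
print "Using configuration file: $conf\n";

A start/stop pair could be handled the same way, for example as extra boolean switches, or as a plain first argument checked before GetOptions runs.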
 