
system(), exec(), fork() or sudo


1DMF (Programmer, joined Jan 18, 2005, GB)
I've been researching the possibility of running another script from within a script, and needing admin credentials for the second script/process.

I've found a few methods... system(), exec(), fork()... but am unsure of the user account these commands run as. I'm assuming it's the same user account as the parent calling script (IUSR).

I have also found a module on CPAN, 'Sudo', which enables you to execute another command as a different user.

Is this the way to go? Do any of the other methods enable 'execute as a different user', and what are the security issues?

Any advice on the best way to go is appreciated.

PaulTEG has suggested that the first script could create a file; I would then set up a scheduled task to run the second script at a given time, which can process the created file's data.

Would this be the best and safest approach?

Thanks, 1DMF

"In complete darkness we are all the same, only our knowledge and wisdom separates us, don't let your eyes deceive you."
 
IMO, yes, but I'm biased ;-)

Paul
------------------------------------
Spend an hour a week on CPAN, helps cure all known programming ailments ;-)
 
The Sudo module is only going to work on *nix systems - I don't know if that's a concern.

Paul's suggestion is perfectly reasonable (and far easier to write). The main thing is that you'll have to be certain that the first script has finished writing the file before the second processes it, or you could run into difficulties. For that reason, it's probably necessary to have some sort of time delay between the two scripts. If this is acceptable for what you're doing, then I'd say to go for that.
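One common way to guarantee the first script has finished writing (a sketch, not something from this thread) is write-then-rename: write to a temporary name, then rename. On the same filesystem the rename is effectively atomic, so the scheduled script never sees a half-written file. Directory and file names here are made up for illustration.

```perl
#!/usr/bin/perl
# Hypothetical write-then-rename sketch: the daemon only globs *.dat,
# so an in-progress *.tmp file is invisible to it.
use strict;
use warnings;

my $dir  = 'new_users';
my $name = time() . '_' . $$;        # unique-ish id: timestamp + pid

mkdir $dir unless -d $dir;

open(my $fh, '>', "$dir/$name.tmp") or die "can't write temp file: $!";
print $fh "userid|password|fullname|email\n";
close($fh) or die "close failed: $!";

# The file only appears under the name the daemon looks for once
# it is completely written.
rename("$dir/$name.tmp", "$dir/$name.dat") or die "rename failed: $!";
```

The scheduled script would then glob('new_users/*.dat') and never race against a half-written file.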
 
You could write to separate files in the one directory and lock them until complete; the daemon processes all files, and in the event of a locked file, it politely waits to proceed.

Paul
 
The Sudo module is only going to work on *nix systems - I don't know if that's a concern.
I could have sworn that the latest version was Windows-compatible, but I may have got confused with reading about fork() or system(). It was late last night, and I'm not the best at absorbing things on first read! I have enough difficulty understanding what they are saying, let alone remembering it :-)


I think the scheduled task script is the easiest way forward. I was already considering creating separate files in a directory and doing a 'glob then process'.

I take it that if I open a file handle with > (create) and issue flock, the other script will not process the file and will skip it. Is this correct?

 
I take it if I open a file handle with > (create) and issue flock , the other script will not process the file and skip it, is this correct?
That's a common misconception that's VERY, VERY wrong.

This code is frequently used incorrectly:
Code:
use Fcntl qw(:DEFAULT :flock);
open( FH, '>filename' ) or die "cannot open filename: $!";
flock(FH, LOCK_EX) or die "cannot lock filename: $!";
# . . . and now write to the file

The problem is that the call to "open" causes the file to be clobbered BEFORE you attempt to get the file lock. This is obviously not something that you want to happen.

This is the suggested version (source: Programming Perl, 3rd ed.):
Code:
use Fcntl qw(:DEFAULT :flock);
sysopen(FH, "filename", O_WRONLY | O_CREAT) or die "can't open filename: $!";
flock(FH, LOCK_EX) or die "can't lock filename: $!";
truncate(FH,0) or die "can't truncate filename: $!";
# now write to FH
The difference is that the file isn't emptied until the "truncate" function is called, which is now after the lock is achieved.
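For completeness, the usual reader-side counterpart (a sketch in the same style; "filename" is a placeholder) takes a shared lock, so readers wait while a writer holds LOCK_EX but don't block each other:

```perl
use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);

# O_CREAT only so this sketch runs even if the file doesn't exist yet
sysopen(my $fh, "filename", O_RDONLY | O_CREAT)
    or die "can't open filename: $!";
flock($fh, LOCK_SH) or die "can't lock filename: $!";

while (my $line = <$fh>) {
    # ... process each line ...
}

close($fh);    # closing the handle releases the lock
```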
 
don't empty the file, move it into a 'processed' directory, delete nothing, NOTHING I tells ya

Paul
 
Still confused, ishnid; here was my concept.

User fills in the sign-up form, the form is validated (JS), then submitted to the first script, where the data is validated again, just in case JS isn't switched on. Why? Who knows; it creates extra work, but hey, not like I have better things to do - yeah right!

After validating, the script creates a unique file ID of (time & IP address) and puts it in the 'new_users' folder with the following method...

Code:
#write to file
open(USERFILE, ">$file") or die "can't open $file: $!";
flock(USERFILE, 2);   # 2 = LOCK_EX
seek(USERFILE, 0, 0);
print USERFILE $mydata;
close(USERFILE);

exits....

The second script (scheduled) does the following:
globs the folder into an array of all existing files, loops over the array opening each file, checks the file has data in it, processes the data (creates the web user account with Win32::OLE - now I've got that sorted :-) ), deletes the file, and sends the user an account-creation notification email with login details.

The data is going to be a pipe-delimited, single-line string holding 'userid|password|fullname|email' - nothing else!

I expect Perl to be able to create a file of such minuscule size in milliseconds; the amount of traffic creating these files is negligible.

What is the real likelihood that there will be a conflict?

I can also add some error detection that fires me an email, and I can manually log in and create the account if need be.

Maybe I'm using a sledgehammer to crack a walnut, but I really can't be hassled with periodic user account creation. It's for a B2B website, so it's not like Joe Public will be signing up every 5 minutes!

Do you still see a flaw in my logic?


 
Sounds like you are creating different files for each user instead of appending lines to a single file? If so, you should use a single, separate semaphore file to lock everything until the first script finishes. That way all files can be processed without the latest file getting missed. The first script locks the semaphore file; the second script starts on schedule and has to wait until the first script is done and unlocks the semaphore file; then the second script locks the semaphore file, processes, and unlocks.

The semaphore file serves no other purpose than traffic cop (it holds no data), which saves you from checking locks on each and every file. You also don't have to worry about clobbering the semaphore file.
This is what I plan to do to a script I use that has no locking at all right now - has me worried!
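A minimal sketch of that semaphore idea (routine and file names are made up). Opening with '>>' means the semaphore is never clobbered, and holding the filehandle open is what holds the lock:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $semaphore = 'lock_file.txt';

# Returns an open, exclusively locked handle, or undef on failure.
# flock with LOCK_EX blocks until the other script releases the lock.
sub sem_lock {
    open(my $fh, '>>', $semaphore) or return;
    flock($fh, LOCK_EX) or return;
    return $fh;                 # keep the handle open to keep the lock
}

sub sem_unlock {
    my ($fh) = @_;
    close($fh);                 # closing the handle releases the lock
}

# Either script wraps its critical section like this:
my $lock = sem_lock() or die "can't lock semaphore: $!";
# ... create (or process) the per-user data files here ...
sem_unlock($lock);
```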
 
So you have a file called 'lock_file.txt' that is zero bytes; you issue an open and check, then flock and check, and if all is OK, get on with creating the real file.

The second scheduled script first tries to open lock_file.txt; if that fails, it gives up. If OK, it locks lock_file; if OK, it processes all other files in the folder and, when finished, unlocks lock_file.txt.

OK, sounds like a good plan. I could create a module that I call to lock and unlock, returning true/false, that both scripts use to help code re-use.

One thing I saw in your post was:
has to wait until first script is done and unlocks semaphore file
Did you mean you would do a
Code:
while (!&lock_sub) { sleep 1; }
rather than a
Code:
if(!&lock_sub){exit();}
seeing as the scheduled job will run every 10 minutes, the likelihood of it being blocked continuously is very, very slim indeed.

Though this method could cause problems with sign-up requests; I take it I don't lock the semaphore file when the processing script runs, as this would hinder account sign-ups, and I would definitely have to issue a sleep command, and people might get impatient waiting for their browser to respond!

What would you do?

 
Someone correct me if I'm wrong, but don't scripts normally wait until they get a lock unless you override that behaviour?
I'm going to look again right now.
I find the whole flock thing confusing too
 
Yes, I was right: flock stops your script's process until it can get a lock. So the other script waits till the locking script releases the lock, but it shouldn't die, just pause.

Yes, I would lock the semaphore file with both scripts, or else you could have corrupted data.

Does your processing script run really slowly?
You really only need to lock the semaphore file during the section of the processing script that accesses and deletes the files. Of course, that is probably the whole script, now that I think about it. Duh.
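The blocking behaviour is easy to demonstrate (a sketch assuming a *nix-style system; Perl's fork emulation on Windows may behave differently): the parent takes the lock before forking, the child's LOCK_NB attempt fails immediately, and its plain LOCK_EX simply waits until the parent releases the lock.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $file = "demo_lock.$$";
open(my $parent_fh, '>>', $file) or die "open: $!";
flock($parent_fh, LOCK_EX) or die "parent lock: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid) {                      # parent: hold the lock, then let go
    sleep 2;
    close($parent_fh);           # releases the lock
    waitpid($pid, 0);
    unlink $file;
} else {                         # child: non-blocking try, then block
    open(my $child_fh, '>>', $file) or die "child open: $!";
    if (!flock($child_fh, LOCK_EX | LOCK_NB)) {
        print "non-blocking attempt failed, waiting instead...\n";
    }
    flock($child_fh, LOCK_EX) or die "child lock: $!";  # waits ~2 seconds
    print "child got the lock\n";
    exit 0;
}
```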
 
If flock waits till it can achieve a lock, then that's sweet!

I doubt my processing script will run that slow, though the Win32::OLE bit is a tad slow at creating the Windows accounts.

With this mechanism I'm going to lock the lock_file, do a glob, then release the lock_file. Other files can be created after that; it won't matter, because they won't be in the @files array created by glob, so the processing script won't try to access files that may still be being created.

It really will be a millisecond process getting a glob listing of files to process; I think this mechanism should work well.
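That lock-glob-release pattern can be sketched like this (names follow the thread's examples; files created after the glob simply wait for the next scheduled run):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

open(my $sem, '>>', 'lock_file.txt') or die "semaphore open: $!";
flock($sem, LOCK_EX) or die "semaphore lock: $!";

my @files = glob('new_users/*');   # snapshot of the work to do
close($sem);                       # release immediately after the glob

foreach my $file (@files) {
    # ... open, process, and delete $file ...
}
```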

thanks for the input.

 
If you want, you can add something like this to ease the fears of impatient users (adapted from the Perl Cookbook, recipe 7.11):

Code:
unless (flock(FH, LOCK_EX|LOCK_NB)) {
    print "Please wait, processing your information";
    unless (flock(FH, LOCK_EX)) {
        print "Sorry, please hit your back button and re-submit. A problem has occurred";
        die "can't get write-lock on numfile: $!";
    }
}

The LOCK_NB flag lets your script continue if the lock fails, instead of waiting. The second attempt is to ensure you get a real lock.
 
I doubt I need something quite so heavy; the processing script will have the lock_file for milliseconds, which won't affect the user experience in the slightest.

 
You are deleting the files after processing right?
Make sure you don't delete any new ones created during processing time. It sounds like you have a low volume, so that bug could take a long time to actually delete someone's unprocessed file.
 
You are deleting the files after processing right?

Yes, but how would I delete new ones? I will be using the same @files array to delete as I used to process, and the only files in @files (glob) are old files, not new ones.

 
Oops found a bug in my script:
Code:
#write to file     
open(BRAIN, ">$left_ear");
flock(BRAIN, 2); #lock BRAIN
print BRAIN "$relevant_part_of_your_post";
flock (BRAIN, 8); #unlock BRAIN
close(BRAIN);

BRAIN is a special FILEHANDLE that should never be LOCKED ;)
 
Doesn't close(FH) unlock the file anyway?

I've finished my processing script, but am confused about the flock waiting bit. If you use
Code:
flock(FH, 2) || die "can't lock file";

and it can't lock the file, doesn't the die take effect? Otherwise you would have a script that could potentially hang.

If not, what makes flock wait, and what makes the || take effect?

Here is my code...
Code:
#!/usr/bin/perl

######################
# Set Error Trapping #
######################

use CGI::Carp qw(fatalsToBrowser); 
use warnings;
use strict;

# Set Module location
use lib qw(mypath);

##################
# Use Win32::OLE #
##################
use Win32::OLE 'in';
$Win32::OLE::Warn = 3;

########################
# Use File Lock Module #
########################
use file_lock;

############
# Set Vars #
############
use constant UF_PASSWORD_CANT_CHANGE => 0x40;
use constant UF_DONT_EXPIRE_PASSWD => 0x10000;

#################
# Set Lock File #
#################
if(!&lock_file($lockfile)){&error("Create User - Unable to create lock on [ $lockfile ]")};

######################
# Get new user files #
######################
my @users = glob($new_users . '/*');

####################
# Release lockfile #
####################
if(!&rel_file){
    &error("Create User - Unable to release lock on [ $lockfile ]");
}

####################################
# Loop and create Windows accounts #
####################################
    
foreach my $file (@users){

        # Open file
        open(USERFILE, "<$file") || &error("Unable to open user file [ $file ]"); 

        while (<USERFILE>) {
            chomp;    # strip the trailing newline so the last field (email) is clean
            my ($userid,$pword,$fullname,$email) = split(/\|/, $_);
            if(!$userid || !$pword || !$fullname || !$email){
                &error("User file [ $file ] corrupt!");
            }
            # Create parent object.
            my $objParent = Win32::OLE->GetObject("WinNT://$server")  or &error("Unable to create PARENT object"); 

            # Create user object.
            my $objUser = $objParent->Create("user", "$userid") or &error("Unable to create USER object"); 

            # Assign user details
            $objUser->SetInfo;
            $objUser->SetPassword("$pword");
            $objUser->{AccountDisabled} = "FALSE";
            $objUser->{Description} = "$description";
            $objUser->{FullName} = "$fullname";

            # **********************************************************************
            # Make sure the password never expires and the user can't change it
            # **********************************************************************
            my $userflags = UF_DONT_EXPIRE_PASSWD + UF_PASSWORD_CANT_CHANGE;
            $objUser->Put('userFlags', $userflags);
            $objUser->SetInfo;

            # Get group Object
            my $objGroup = Win32::OLE->GetObject("WinNT://$server/$group") or &error("Unable to get group object");

            # Add user to group
            $objGroup->Add("WinNT://$server/$userid");
        }                    
        close(USERFILE);

        # Delete file
        unlink($file) || &error("Delete of file [ $file ] failed.");
}

exit();
and the file_lock module...
Code:
######################
# Set Error Trapping #
######################

use CGI::Carp qw(fatalsToBrowser warningsToBrowser); 
use warnings;
use strict;

##########################
# Set Package Name Space #
##########################
package file_lock;

################
# Start Module #
################

BEGIN {

	# Invoke Exporter
	use Exporter;

	# Set Variables
	our (@ISA, @EXPORT);
	@ISA = qw(Exporter);

	# Define global vars and subs to be exported
	@EXPORT = qw( &lock_file &rel_file );

}


########################################################################
########################### GLOBAL SUBROUTINES #########################
########################################################################


#################################################
############## Lock File Routine ################
#################################################

sub lock_file {

#_0 = file including path

    open(LOCKFILE, "<$_[0]") || return 0;
    flock(LOCKFILE, 2) || return 0;
    
    1;

}

#################################################
############## Release File Routine #############
#################################################

sub rel_file {

    flock(LOCKFILE, 8) || return 0;
    close(LOCKFILE) || return 0;

    1;

}

###########################
# END OF MODULE RETURN 1; #
###########################
1;
Will this do the trick, or can anyone spot a flaw in my code? It runs OK in a test environment.

I've missed out some var defs in my post for obvious reasons, and the &error routine will simply email me the error passed to it.

 
doesn't close(FH) unlock the file anyway?

Yes.

if it can't lock the file doesn't the die take effect? otherwise you would have a script that potentially would hang.

if not what makes flock wait and what makes the || take effect?
Programming Perl notes that scripts can wait indefinitely for a lock. The || die only fires if flock returns false, which happens on a genuine error (e.g. a bad filehandle, or a filesystem that doesn't support locking), not simply because someone else currently holds the lock.


The script looks good to me, but you may want to test that it really stops the other script: lock the file, sleep for a long time, then run the script that adds a user and see if it hangs for the sleep period waiting for a lock and then finishes.

Better idea: write a script that uses the file_lock module and sleeps, and try both scripts against that.
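A sketch of that test-harness idea (assuming the lock file name used earlier in the thread): grab the semaphore, then sleep, so you can launch the real scripts in another window and watch them block until this one lets go. The hold time defaults to 5 seconds; pass a number of seconds on the command line to hold longer.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $lockfile = 'lock_file.txt';
my $hold     = @ARGV ? $ARGV[0] : 5;    # seconds to keep the lock

open(my $fh, '>>', $lockfile) or die "can't open $lockfile: $!";
flock($fh, LOCK_EX) or die "can't lock $lockfile: $!";

print "Holding lock on $lockfile for $hold seconds...\n";
sleep $hold;

close($fh);                             # lock released here (or on exit)
print "Released.\n";
```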
 