
Why is this coding blanking out my data file?

Status
Not open for further replies.

NeilFawcett

Programmer
Mar 19, 2004
30
GB
I was playing around with ways to speed up my file updates.

My original code was along these lines:-

open LOCK,'>lock.lock';
flock LOCK,2;          # 2 = LOCK_EX (exclusive lock)

open AC,"test.txt";
flock AC,1;            # 1 = LOCK_SH (shared lock for reading)
@rec=<AC>;
close AC;

# Process @rec in some way!

open AC,">test.txt";   # ">" truncates test.txt the moment it opens
flock AC,2;
print AC @rec;
close AC;

close LOCK;
unlink('lock.lock');

I tried replacing that block with this one, which runs almost twice as fast...

open AC,"+<test.txt" or open AC,"+>test.txt";
flock AC,2;            # 2 = LOCK_EX (exclusive lock)
seek AC,0,0;
@rec=<AC>;

# Process @rec in some way!

seek AC,0,0;
print AC @rec;
truncate(AC, tell(AC)); # trim any leftover bytes if @rec shrank
close AC;

Now, when I put this code into practice in a script that gets used quite heavily, the file would periodically get blanked out. Some debugging showed that the first open statement was failing (even though the file had previously been created), so the "or" fell through to the second open statement, which creates a new, empty file.

I then replaced the open line with:-

if(-e "test.txt"){open AC,"+<test.txt"}else{open AC,"+>test.txt";}

This seems far more stable. In fact, where the first version was nuking the file every 10-20 minutes, this one has now run for 20 hours without clearing the file.

Any ideas for this issue?

Also, should I put the "lock.lock" code back around this entire update? The idea is to optimise this logic, so if I can get rid of unnecessary steps, good. But I don't want to risk file corruption!

ps: This is on a Unix box!
 
Hello Neil,

Your new logic looks ok to me. The lock.lock approach shouldn't be needed.

Unless anyone else knows different of course *grin*

Mike

Want to get great answers to your Tek-Tips questions? Have a look at faq219-2884

It's like this; even samurai have teddy bears, and even teddy bears get drunk.
 
This is a common error:
Code:
open AC,">test.txt";
flock AC,2;

The open() function, when called with ">" to get write access to a file, clobbers that file immediately if it already exists. This means that even if the file is locked (with flock), the code above empties the file before it ever gets a chance to check for a lock.

See the file-locking section of perlopentut for the example that uses sysopen and truncate.
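For reference, the perlopentut pattern looks roughly like this (a sketch, with error checks added; sysopen with O_CREAT but without O_TRUNC never clobbers an existing file, and truncation only happens after the lock is held):

```perl
use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);   # O_RDWR, O_CREAT, LOCK_EX

# Open for read/write, creating the file if needed -- never truncating.
sysopen(my $fh, "test.txt", O_RDWR | O_CREAT)
    or die "can't open test.txt: $!";
flock($fh, LOCK_EX) or die "can't lock test.txt: $!";

# Safe to read now: we hold the exclusive lock.
my @rec = <$fh>;

# ... process @rec in some way ...

# Rewrite in place, then truncate away any leftover tail.
seek($fh, 0, 0)          or die "can't rewind: $!";
print $fh @rec;
truncate($fh, tell($fh)) or die "can't truncate: $!";
close($fh)               or die "can't close: $!";
```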
 
What I'm concerned about is if:-

open AC,"+<test.txt" or open AC,"+>test.txt";

can sometimes think that the file is not there when in fact it is, then why shouldn't my variant also misbehave?:-

if(-e "test.txt"){open DF,"+<test.txt"}else{open AC,"+>test.txt";}


My guess is that (on Unix), when the updated file is being written, for a split second the old file is deleted before the new one is created. If this "window" is hit then kaboom!

This must surely be what was happening with my first example. So why isn't it also the case with the second?


Adding the "lock.lock" code back would make it 100% watertight, but as the programs I write get hammered at times, efficiency is a big concern.
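As an aside, one way to convince yourself that flock on the data file alone serializes concurrent updaters (so the separate lock.lock wrapper can go) is a quick fork test. This is not from the thread, just a self-contained sketch; counter.txt is a scratch file:

```perl
use strict;
use warnings;
use Fcntl qw(:DEFAULT :flock);

my $file = "counter.txt";

# Start the counter from a known value.
open(my $init, ">", $file) or die "can't create $file: $!";
print $init "0\n";
close($init);

my $kids = 10;
for (1 .. $kids) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;                       # parent keeps forking

    # Child: locked read-modify-write on the shared file.
    sysopen(my $fh, $file, O_RDWR | O_CREAT) or die "open: $!";
    flock($fh, LOCK_EX) or die "flock: $!";
    my $n = <$fh>;
    seek($fh, 0, 0);
    print $fh ($n + 1), "\n";
    truncate($fh, tell($fh));
    close($fh);                         # releases the lock
    exit 0;
}
wait() for 1 .. $kids;

open(my $fh, "<", $file) or die "open: $!";
my $final = <$fh>;
print "final count: $final";            # expect 10 if no update was lost
```

If flock were not serializing the children, some increments would be lost and the final count would come out below 10.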
 
I don't know - something other than the file not existing is causing your open() to fail sometimes.

You can find out easily enough if you want to. Write a script that repeatedly opens and closes the file reporting any errors it gets.

while(1){
open(AC,"+<test.txt") or die "open() failed with\n$!\n";
close(AC);
sleep(1);
}

That should do it, and without putting your server under too much load.


Mike

 