
Removing specific lines from end of file

Status
Not open for further replies.

Windrider

Programmer
Jul 26, 2002
8
FI
Hi,

I need to remove a few lines from the end of a file and was wondering what would be the best way to do it. Basically the file may contain this at the end, for example:

...
</entry>
</events>

So I need to remove </entry> and </events> before proceeding; also, at the end of the file or between those lines there may be blank lines.

I'm planning to use the truncate() function to remove the last line (first getting the offset of the last line), and then to check whether the line begins with a line break or matches </entry> or </events>.

Does this sound like a good way to remove these lines from the file, or is there a more convenient way?

One way would of course be to read the file into memory, remove the lines and then write it back, but that may not be feasible, since it eats memory.
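For what it's worth, here is a minimal sketch of the truncate() idea, assuming the closing tags sit within the last few kilobytes of the file: read only the tail, find where the last </entry> starts with rindex, and truncate the file at that byte offset. The sample data and the 4096-byte tail size are just assumptions for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Build a small sample file to work on (assumed data, for illustration)
my ($fh, $file) = tempfile();
print {$fh} "<events>\n<entry>\ndata\n</entry>\n</events>\n";
close $fh;

# Read only the tail of the file instead of slurping the whole thing
my $tail = 4096;
open my $io, '+<', $file or die "open: $!";
my $size = -s $io;
my $off  = $size > $tail ? $size - $tail : 0;
seek $io, $off, 0 or die "seek: $!";
read $io, my $buf, $tail;

# rindex finds the last "</entry>"; truncate cuts the file right there
if ((my $pos = rindex($buf, "</entry>")) >= 0) {
    truncate $io, $off + $pos or die "truncate: $!";
}
close $io;
```

This only ever holds the tail buffer in memory, so the file size does not matter.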
 
If I understand you correctly, you want to delete from the last occurrence of "</entry>" to the end of the file.
Code:
# Queue the input file a second time so awk reads it twice
BEGIN{ ARGV[ARGC++]=ARGV[1] }
# Pass 1: remember the line number of the last "</entry>"
NR==FNR { if ($0 ~ /<\/entry>/) last=NR; next }
# Pass 2: skip from that line through end of file
FNR==last,0 { next }
# Always-true pattern with no action: print every remaining line
"Hi, Mom!"
Run with
[tt] awk -f del-last.awk infile >outfile[/tt]
 
Code:
[red]#!/usr/bin/perl[/red]

undef $/;          # slurp mode: read the whole file at once

$_ = <DATA>;

[b]s|</entry>\n</events>||;[/b]   # delete the trailing closing tags

print;

$/ = "\n";         # restore the default record separator

__DATA__
blah
blah blah
blah blah blah
</entry>
</events>


Kind Regards
Duncan
 
Eats memory? How big are these files?

I have a feeling from what you've posted that you have additional requirements you haven't specified. You will get better answers if you spend some time thinking about the questions.

--Paul



cigless ...
 
I totally agree with Paul - I've only ever come across one posting on this site which caused Perl to fall over - and that was a very large DNA sequence... what the heck are you dealing with???


Kind Regards
Duncan
 
Sorry about the unclear question - I shouldn't write late at night :)

So, basically I write an XML log file from an application for debugging and evaluation purposes, and I have made a class interface to do it. It's running with ActivePerl on Windows.

To the logfile I can dump Perl data structures with XML::Dumper, which can convert e.g. hashes to XML and back to Perl if needed. Most of the entries are event messages, e.g.
<event time="20050107">Operation completed successfully...</event> etc.

What I'm trying to do is keep the XML file well-formed at all times, so that the log can also be viewed at runtime with a web browser (I have an XSL sheet for that).

Example of logfile:

Code:
<?xml version="1.0" encoding="ISO-8859-1" standalone="yes"?> 
<?xml-stylesheet type="text/xsl" href="log_style.xsl"?>
<applog startTime="20050107 18:00.000">
  <function_call function="loadParameters()">
    <log time="18:01.000">Arguments: file="../config/startup_params.xml"</log>
    <log time="18:01.000">Start parsing and loading parameters</log>
    <log time="18:01.000">Parameters loaded successfully</log>
  </function_call>
</applog>

Inside my interface I collect the open tags on a stack. Since the log should be viewable at all times, the tags on the stack have to be removed from the end of the file before new writes - in this case </applog> and </function_call>.

Tags are popped from the stack by calling the close function for a given event; e.g. in this case a function call is opened by calling:

$object->startFunctionLog();

and closed

$object->endFunctionLog();

I was wondering about the best way to remove those </applog> and </function_call> tags (or whatever open tags are on the stack).

One way would be to read the file into memory into an array and write it back with the needed tags removed, but that may not be optimal: if the file gets big (many events coming in and the file growing), it will eat up memory.

Not that this would be a problem on modern computers, but for learning purposes I would like to know whether there is a better/more efficient way to do this.
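One efficient alternative avoids scanning the file at all: since the interface already knows which closing tags it wrote last time, it also knows their total byte length, so before each write it can seek back over them, truncate, append the new content, and re-append the closers. Here is a minimal sketch of that idea; the TagLogger package and its method names are hypothetical, not the OP's actual class:

```perl
#!/usr/bin/perl
use strict;
use warnings;

package TagLogger;    # hypothetical name, not the OP's real class

sub new {
    my ($class, $file) = @_;
    open my $fh, '>', $file or die "open: $!";
    return bless { fh => $fh, stack => [], closers_len => 0 }, $class;
}

# Seek back over the closing tags written last time and cut them off
sub _rewind_closers {
    my $self = shift;
    seek $self->{fh}, -$self->{closers_len}, 2 or die "seek: $!";
    truncate $self->{fh}, tell $self->{fh} or die "truncate: $!";
}

# Re-append "</tag>" for every tag still on the stack, deepest first,
# so the file is well-formed again after every single write
sub _write_closers {
    my $self = shift;
    my $closers = join '', map { "</$_>\n" } reverse @{ $self->{stack} };
    print { $self->{fh} } $closers;
    $self->{closers_len} = length $closers;
}

sub open_tag {
    my ($self, $tag) = @_;
    $self->_rewind_closers;
    print { $self->{fh} } "<$tag>\n";
    push @{ $self->{stack} }, $tag;
    $self->_write_closers;
}

sub log_line {
    my ($self, $text) = @_;
    $self->_rewind_closers;
    print { $self->{fh} } "$text\n";
    $self->_write_closers;
}

sub close_tag {
    my $self = shift;
    my $tag  = pop @{ $self->{stack} };
    $self->_rewind_closers;
    print { $self->{fh} } "</$tag>\n";   # this closer is now permanent
    $self->_write_closers;
}

sub finish { close shift->{fh} }

package main;
use File::Temp qw(tempfile);

my (undef, $file) = tempfile();
my $log = TagLogger->new($file);
$log->open_tag('applog');
$log->log_line('<log time="18:01.000">Parameters loaded</log>');
$log->close_tag;
$log->finish;
```

Each write is a constant-size seek/truncate/append regardless of file size, so a 10-20 MB log costs no more per event than a tiny one.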
 
...one clarification: the file may eventually grow to several megabytes, even up to 10 - 20 MB.
 