Hello there,
I am trying to write a Perl script that deletes duplicate records from a text file. The problem is that the records are not 'completely' duplicate; they only match in specific fields of the record...
Sample DATA of text file:
__DATA__
2131123 677778 152707011...
I have tried it with printf and sprintf, and it is working so far... I can select fields and read the input from a file.
Thanks rharsh!
open(FILE, "<", "my.data") or die("Unable to open file: $!");
my @data = <FILE>;
close(FILE);
foreach my $line (@data) {
    my @fields = split(/\|/, $line);   # split() in list context returns the fields; in scalar context it only returns a count
    my $firstfield = sprintf...
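To show the overall dedup-by-key idea, here is a minimal sketch: build a key from only the fields that matter and keep the first record seen for each key. The field indices (1 and 4) and the third sample row (a made-up duplicate of the first in those fields) are assumptions for illustration; adjust them to your real data.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample rows; the third is a hypothetical duplicate of the first
# in fields 1 and 4 only.
my @rows = (
    '| |0|-|0|151| | |0|7611920|2|300|0|0|0|6830|1|16|21302|0|',
    '| |2|-|1|0| | |0|7919120|21|400|0|0|0|6661|1|81|121|0|',
    '| |0|-|0|151| | |9|9999999|9|999|0|0|0|9999|9|99|99999|0|',
);

my %seen;
my @unique;
for my $line (@rows) {
    my @fields = split /\|/, $line, -1;   # -1 keeps trailing empty fields
    my $key = join '|', @fields[1, 4];    # key on the fields that define a duplicate
    push @unique, $line unless $seen{$key}++;
}
print "$_\n" for @unique;
```

With these rows the third line is dropped because its key matches the first line's key, so only two records are printed.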
Thanks for the response.
The input is from a file; the real rows are over five times longer than in the sample.
I need to be able to pick out certain fields and apply a specific rule to each field (add/remove spaces, leading/trailing zeros, etc.)
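One way to apply such per-field rules is to normalise each field before comparing records. This `normalize_field` helper is a hypothetical sketch, not part of the original code; the two rules it applies (trim surrounding spaces, strip leading zeros) are just the ones mentioned above.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: clean a field before using it in a comparison key,
# so values like ' 00151 ' and '151' count as the same.
sub normalize_field {
    my ($field) = @_;
    $field =~ s/^\s+|\s+$//g;   # drop leading/trailing spaces
    $field =~ s/^0+(?=\d)//;    # drop leading zeros, but keep a lone '0'
    return $field;
}

print normalize_field(' 00151 '), "\n";   # prints "151"
print normalize_field('000'), "\n";       # prints "0"
```

The lookahead `(?=\d)` keeps at least one digit, so an all-zero field like `000` collapses to `0` instead of an empty string.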
Simplified Example:
| |0|-|0|151| |...
I have a very big text file in the following format; each field is separated with a "|". Every row represents a new entry.
| |0|-|0|151| | |0|7611920|2|300|0|0|0|6830|1|16|21302|0|
| |2|-|1|0| | |0|7919120|21|400|0|0|0|6661|1|81|121|0|
..
I need to transform the file in the following...