You can use awk for this.
Create the following awk script, uniq.awk:
Code:
/^end:/ {
    if (!(Record in Records)) {
        Records[Record]          # a bare array reference creates the entry
        print RecordLabel ":"
        print Record
        print $0
    }
    Record = ""                  # reset even for duplicates, or stale text leaks into the next record
    next
}
$1 ~ /:/ {
    sub(/:.*/, "", $1)
    RecordLabel = $1
    next
}
{
    Record = (Record ? Record "\n" : "") $0
}
and execute it:
Code:
$ awk -f uniq.awk filename
record1:
this is testing
my id is 2001
end:
record2:
this is testing2
my id is 2002
end:
record3:
this is testing
my id is 2002
end:
$
Hi all,
I have a file containing multiple columns; the file is sorted by col2 and col3.
I want to remove the duplicated lines if col2 and col3 are the same as in another line.
Example:
fileA
AA BB CC DD
CC XX CC DD
BB CC ZZ FF
DD FF HH HH
the output is
AA BB CC DD
BB CC ZZ FF... (6 Replies)
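A common awk idiom for this kind of de-duplication (a sketch; it keys on the col2/col3 pair, which is one reading of the truncated requirement, and assumes whitespace-separated columns):

```shell
# Keep only the first line seen for each (col2, col3) pair.
# seen[...]++ is 0 (false) the first time a pair appears, so the
# negation makes awk's default print action fire once per pair.
awk '!seen[$2 SUBSEP $3]++' fileA
```

SUBSEP is awk's built-in subscript separator, which keeps "a b"+"c" from colliding with "a"+"b c" as a key.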
Hi guys,
I have a big file in the following format. It includes header (H), detail (D) and trailer (T) records. My problem is that I have to search for the characters "6h" at the 14th and 15th positions in all the records; if they are there, I have to write all those records into a... (1 Reply)
Hi,
I have an xml file which has multiple xml records.
I don't know how to read those records and pipe them to another shell command.
the file is like
<abc>z<def>y<ghi>x........</ghi></def></abc> (1st record)
<jkl>z<mno>y<pqr>x........</pqr></mno></jkl> (2nd record)
Each record end... (4 Replies)
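If each record really occupies one line, a plain shell read loop can feed the records to a downstream command one at a time (a sketch; the awk length call here is only a stand-in for whatever command you actually need, and records.xml is a placeholder name):

```shell
# Each line of records.xml holds one complete XML record.
# Pipe every record individually into a downstream command.
while IFS= read -r record; do
    printf '%s\n' "$record" | awk '{print "record length:", length($0)}'
done < records.xml
```

IFS= and read -r together preserve leading whitespace and backslashes in each record.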
I have an xml file:
<AutoData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Table1>
<Data1>10</Data1>
<Data2>20</Data2>
<Data3>40</Data3>
</Table1>
</AutoData>
and I have to remove the portion xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" only.
I tried using sed... (10 Replies)
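A sed sketch for stripping just that attribute (using | as the substitution delimiter so the URL's slashes need no escaping; input.xml is a placeholder name):

```shell
# Delete only the xmlns:xsi="..." attribute, leaving the rest
# of the <AutoData> tag and document untouched.
sed 's| xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"||' input.xml
```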
Find a pattern, then delete the line with the pattern plus 3 lines above and 8 lines below it. The pattern is "isup". The entire record, from the starting tag <record> to the ending tag </record>, containing the pattern is to be deleted and the rest retained.
<record>
... (4 Replies)
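One way to do this in awk, buffering each <record> block and printing it only when the pattern never appears inside it (a sketch; log.xml is a placeholder name):

```shell
# Buffer every <record>...</record> block; emit it only if the
# pattern "isup" did not occur anywhere inside the block.
awk '
    /<record>/ { buf = ""; inrec = 1 }
    inrec {
        buf = buf $0 "\n"
        if (/<\/record>/) {
            if (buf !~ /isup/) printf "%s", buf
            inrec = 0
        }
        next
    }
    { print }                      # lines outside any record pass through
' log.xml
```

Buffering by block avoids counting fixed line offsets, so records of any length are handled.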
Dear all,
How can I remove a duplicated column in a text file?
Input:
LG10_PM_map_19_LEnd 1000560 G AA AA AA AA AA GG
LG10_PM_map_19_LEnd 1005621 G GG GG GG AA AA GG
LG10_PM_map_19_LEnd 1011214 A AA AA AA AA GG GG
LG10_PM_map_19_LEnd 1011673 T TT TT TT TT CC CC... (1 Reply)
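Under one reading of the request, dropping any column whose entire contents duplicate an earlier column, an awk sketch (genotypes.txt is a placeholder name; the whole file is held in memory):

```shell
# Build one string per column, then keep only the first column
# with each distinct content and reprint the rows.
awk '
    { for (c = 1; c <= NF; c++) col[c] = col[c] SUBSEP $c
      rows[NR] = $0; nf = NF }
    END {
        for (c = 1; c <= nf; c++)
            if (!dup[col[c]]++) keep[c] = 1
        for (r = 1; r <= NR; r++) {
            split(rows[r], f)
            out = ""
            for (c = 1; c <= nf; c++)
                if (keep[c]) out = out (out ? OFS : "") f[c]
            print out
        }
    }
' genotypes.txt
```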
Hi Gurus,
I need to remove duplicate lines in a file and update the TRAILER (last line) record count. The file is comma-delimited; field 2 is the key that identifies duplicated records.
I can use the command below to remove the duplicates, but I don't know how to replace the 2nd field of the last line with the new count.
awk -F","... (11 Replies)
Discussion started by: green_k (11 Replies)