04-25-2013
Can we assume that CDC_FLAG will always be the 3rd field, SRC_PMTN_I will always be the 5th field, and CDC_PRCS_TS will always be the 1st field; or do we have to match the strings against the header line to determine which fields to use?
Do the output records need to be in the same order as they appeared in the input file or can the output be in random order except that the 1st output line must be the 1st input line (the headings)?
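If the positions cannot be assumed, a minimal awk sketch can look the three columns up by name in the header line instead. The file name `input.txt`, the tab delimiter, and the sample data are assumptions for illustration; adjust `-F` to the real delimiter.

```shell
# Hypothetical sample in the layout described (tab-delimited, CDC_PRCS_TS
# 1st, CDC_FLAG 3rd, SRC_PMTN_I 5th); the real layout may differ.
printf 'CDC_PRCS_TS\tA\tCDC_FLAG\tB\tSRC_PMTN_I\n20130425\tx\tI\ty\t42\n' > input.txt

# Map each header name to its column index, print the heading line first,
# then use the looked-up indices for every data line.
awk -F'\t' '
NR == 1 {
    for (i = 1; i <= NF; i++) pos[$i] = i   # header name -> column index
    ts   = pos["CDC_PRCS_TS"]
    flag = pos["CDC_FLAG"]
    pmtn = pos["SRC_PMTN_I"]
    print
    next
}
{ print $ts, $flag, $pmtn }
' input.txt
```

This way the script keeps working even if the column order changes, as long as the header names stay the same.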
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi All,
I need to fetch unique records based on a key column (i.e., column 1), keeping the record with the max value in column 2, with the output sorted; the duplicates have to be stored in another output file.
Input :
Input.txt
1234,0,x
1234,1,y
5678,10,z
9999,10,k... (7 Replies)
Discussion started by: kmsekhar
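One sketch for this kind of request, using the sample rows from the post: sort by the key column, then numerically descending on column 2, so awk can keep the first (max) row per key and divert the repeats to a duplicates file. The file names are assumptions.

```shell
# Sample data from the post.
cat > Input.txt <<'EOF'
1234,0,x
1234,1,y
5678,10,z
9999,10,k
EOF

# Sort by key ascending, then column 2 numerically descending; keep the
# first row per key (the max), send the rest to dups.txt.
sort -t, -k1,1 -k2,2nr Input.txt |
awk -F, '!seen[$1]++ { print; next } { print > "dups.txt" }'
```

Because the keys are sorted first, the kept records come out in sorted key order.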
2. Shell Programming and Scripting
Hi,
I am unable to find the duplicates in a file based on the 1st, 2nd, 4th, and 5th columns, and I also need to remove the duplicates from the same file.
Source filename: Filename.csv
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"... (2 Replies)
Discussion started by: onesuri
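A minimal sketch for deduplicating on columns 1, 2, 4, and 5, assuming no embedded commas inside the quoted fields (the third sample row below is an invented duplicate for illustration):

```shell
# Sample file; the last row repeats columns 1,2,4,5 of the first.
cat > Filename.csv <<'EOF'
"1","ccc","information","5000","temp","concept","new"
"1","ddd","information","6000","temp","concept","new"
"1","ccc","information","5000","temp","concept","old"
EOF

# Keep only the first row per combination of columns 1,2,4,5, then
# replace the original file.
awk -F, '!seen[$1 FS $2 FS $4 FS $5]++' Filename.csv > Filename.csv.tmp &&
mv Filename.csv.tmp Filename.csv
```

The temporary-file-then-rename step is what makes the "same file" update safe.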
3. Shell Programming and Scripting
Given a file such as this I need to remove the duplicates.
00060011 PAUL BOWSTEIN ad_waq3_921_20100826_010517.txt
00060011 PAUL BOWSTEIN ad_waq3_921_20100827_010528.txt
0624-01 RUT CORPORATION ad_sade3_10_20100827_010528.txt
0624-01 RUT CORPORATION ... (13 Replies)
Discussion started by: script_op2a
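Since the repeats in this sample share the leading ID field, one common sketch is to keep only the first line per ID (the file name is an assumption):

```shell
# Sample rows from the post.
cat > files.txt <<'EOF'
00060011 PAUL BOWSTEIN ad_waq3_921_20100826_010517.txt
00060011 PAUL BOWSTEIN ad_waq3_921_20100827_010528.txt
0624-01 RUT CORPORATION ad_sade3_10_20100827_010528.txt
EOF

# Print a line only the first time its leading ID field is seen.
awk '!seen[$1]++' files.txt
```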
4. Shell Programming and Scripting
Hi team,
I have a 20-column CSV file. I want to find the duplicates in that file based on column 1, column 10, column 4, column 6, column 8, and column 2: if those columns have the same values, the record should be treated as a duplicate.
Can anyone help me find the duplicates?
Thanks in advance.
... (2 Replies)
Discussion started by: baskivs
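A sketch that prints every row whose combination of columns 1, 10, 4, 6, 8, and 2 has already been seen, i.e. the duplicate records. The sample below uses 10 columns for brevity (the post's file has 20), and the file name is an assumption:

```shell
# 10-column sample; the third row duplicates the first on the key columns.
cat > file.csv <<'EOF'
a,b,c,d,e,f,g,h,i,j
a,X,c,Y,e,f,g,h,i,j
a,b,c,d,e,f,g,h,i,j
EOF

# seen[...]++ is 0 (false) on first sight, so only repeats are printed.
awk -F, 'seen[$1 FS $10 FS $4 FS $6 FS $8 FS $2]++' file.csv
```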
5. Shell Programming and Scripting
Hi All ,
I have a requirement to remove duplicates from a fixed-width file that has multiple key columns. I also need to capture the duplicate records in another file.
File has 8 columns.
Key columns are col1 and col2.
Col1 has a length of 8; col2 has a length of 3.
... (5 Replies)
Discussion started by: saj
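For a fixed-width file the key can be cut out with substr: col1 is 8 characters and col2 is 3, so the key is the first 11 characters. A sketch with invented sample data and file names:

```shell
# Hypothetical fixed-width sample: first 11 chars are the key (8 + 3).
cat > fixed.txt <<'EOF'
AAAAAAAA001 rest of record 1
AAAAAAAA001 rest of record 2
BBBBBBBB002 rest of record 3
EOF

# First occurrence of each key goes to clean.txt, repeats to dups_fixed.txt.
awk '{ key = substr($0, 1, 11) }
     !seen[key]++ { print > "clean.txt"; next }
     { print > "dups_fixed.txt" }' fixed.txt
```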
6. Shell Programming and Scripting
Hi,
I have file named file1.txt with below contents
cat file1.txt
1/29/2014 0:00,706886
1/30/2014 0:00,791265
1/31/2014 0:00,987087
2/1/2014 0:00,1098572
2/2/2014 0:00,572477
2/3/2014 0:00,701715
I want to display it as below:
1/29/2014,706886
1/30/2014,791265
1/31/2014,987087... (5 Replies)
Discussion started by: villain41
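Stripping the " 0:00" time from the first field can be sketched with sed: delete everything from the first space up to (but not including) the comma.

```shell
# First two sample lines from the post.
cat > file1.txt <<'EOF'
1/29/2014 0:00,706886
1/30/2014 0:00,791265
EOF

# Replace " <anything-but-comma>," with just "," on each line.
sed 's/ [^,]*,/,/' file1.txt
```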
7. UNIX for Dummies Questions & Answers
I have a requirement to print the latest record from a file based on a combination of multiple columns.
EWAPE EW1SLE0000 EW1SOMU01 ABORTED 03/16/2015 100004 03/16/2015 100005 001
EWAPE EW1SLE0000 EW1SOMU01 ABORTED 03/18/2015 140003 03/18/2015 140004 001
EWAPE EW1SLE0000 EW1SOMU01 ABORTED 03/18/2015 220006... (1 Reply)
Discussion started by: tmalik79
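One sketch, assuming the key is columns 1-3 and "latest" means the date in column 5 (MM/DD/YYYY) plus the time in column 6: rearrange the date into a sortable YYYYMMDD string and keep the line with the largest stamp per key. The file name is an assumption.

```shell
# Two sample lines from the post (same key, different timestamps).
cat > log.txt <<'EOF'
EWAPE EW1SLE0000 EW1SOMU01 ABORTED 03/16/2015 100004 03/16/2015 100005 001
EWAPE EW1SLE0000 EW1SOMU01 ABORTED 03/18/2015 140003 03/18/2015 140004 001
EOF

# Build a YYYYMMDD+time stamp from $5/$6 and keep the max line per key.
awk '{
    split($5, d, "/")                      # d[1]=MM d[2]=DD d[3]=YYYY
    stamp = d[3] d[1] d[2] $6
    key = $1 FS $2 FS $3
    if (stamp > best[key]) { best[key] = stamp; line[key] = $0 }
}
END { for (k in line) print line[k] }' log.txt
```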
8. UNIX for Beginners Questions & Answers
I have /tmp dir with filename as:
010020001_S-FOR-Sort-SYEXC_20160229_2212101.marker
010020001_S-FOR-Sort-SYEXC_20160229_2212102.marker
010020001-S-XOR-Sort-SYEXC_20160229_2212104.marker
010020001-S-XOR-Sort-SYEXC_20160229_2212105.marker
010020001_S-ZOR-Sort-SYEXC_20160229_2212106.marker... (4 Replies)
Discussion started by: gnnsprapa
9. Shell Programming and Scripting
Hi Experts,
Please bear with me; I need help.
I am learning awk and am stuck on one issue.
First point: I want to sum up the values of columns 7, 9, 11, 13, and 15 where the rows are duplicates in column 5. No action should be taken for rows where the value in column 5 is unique.
Second point : For... (1 Reply)
Discussion started by: as7951
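The first point can be sketched as a two-pass awk: the first pass counts rows per column-5 value, the second pass prints unique rows unchanged and emits one summed line per duplicated value (columns 7, 9, 11, 13, 15). The sample data and output format are assumptions; the post does not show either.

```shell
# Hypothetical sample: K1 appears twice, K2 once.
cat > data.txt <<'EOF'
a b c d K1 f 1 h 2 j 3 l 4 n 5
a b c d K1 f 10 h 20 j 30 l 40 n 50
a b c d K2 f 7 h 7 j 7 l 7 n 7
EOF

awk '
NR == FNR { count[$5]++; next }            # pass 1: count rows per $5
count[$5] == 1 { print; next }             # unique $5: print unchanged
{
    s7[$5] += $7; s9[$5] += $9; s11[$5] += $11
    s13[$5] += $13; s15[$5] += $15
    if (++seen[$5] == count[$5])           # last row of this group
        print $5, s7[$5], s9[$5], s11[$5], s13[$5], s15[$5]
}' data.txt data.txt
```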
10. UNIX for Beginners Questions & Answers
Hello all,
I need to filter a dataframe composed of several columns of data to remove the duplicates according to one of the columns; I did it with pandas. At the same time, I need the last column, which contains all the different (non-redundant) data, to be preserved in the output, like this:
A ... (5 Replies)
Discussion started by: pedro88