Hi All,
I have multiple files and I need to segregate the unique and duplicate records from each into separate files.
E.g. source path: /source/
abc_12092016.csv
abc_11092016.csv
abc_12092016.csv
abc_11092016.csv
The source folder may contain 2 files today, 3 files tomorrow, etc.
From each file, the unique and duplicate records have to be segregated and loaded.
result--
abc_12092016.csv
abc_11092016.csv
abc_12092016_dup.csv
abc_11092016_dup.csv
A script will do fine....
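A minimal sketch of one approach, assuming duplicates are judged on the whole line and that the /source and /result paths and the _dup.csv suffix follow the example above:

```shell
#!/bin/sh
# For every CSV under $src, write first occurrences to $out/<name>.csv
# and every repeated line to $out/<name>_dup.csv.
src=${1:-/source}
out=${2:-/result}
mkdir -p "$out"
for f in "$src"/*.csv; do
    base=$(basename "$f" .csv)
    awk -v uniq="$out/$base.csv" -v dup="$out/${base}_dup.csv" '
        !seen[$0]++ { print > uniq; next }   # first time this line is seen
                    { print > dup }          # a repeat
    ' "$f"
done
```

Because the loop globs the source directory, tomorrow's extra files are picked up automatically with no changes to the script.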
I have a few questions to pose in response first:-
Is this homework/assignment? There are specific forums for these.
What have you tried so far?
What output/errors do you get?
What OS and version are you using?
What are your preferred tools? (C, shell, perl, awk, etc.)
What logical process have you considered? (to help steer us to follow what you are trying to achieve)
Most importantly, what have you tried so far?
There are probably many ways to achieve most tasks, so giving us an idea of your style and thoughts will help us guide you to an answer most suitable to you so you can adjust it to suit your needs in future.
We're all here to learn and getting the relevant information will help us all.
Additionally, please wrap code, files, input & output/errors in CODE tags, like this:-
Quote:
[CODE]This is my code[/CODE]
to produce the following (fixed character width, space respected):-
Not only does it make posts far easier to read, but CODE and ICODE sections respect multiple spaces and use fixed-width characters, which is important for easily seeing input/output requirements. I have added some to your post; I hope I have guessed correctly.
Hello Team,
I need your help on the following:
My input file a.txt is as below:
3330690|373846|108471
3330690|373846|108471
0640829|459725|100001
0640829|459725|100001
3330690|373847|108471
Here rows 1 and 2 are identical in column 1, but the corresponding column 2 values are... (4 Replies)
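The post is cut off, but if the goal is to separate fully duplicated rows of a.txt from the rest, a short awk sketch (key seen[] on $1 with -F'|' instead, if only column 1 should decide duplicates):

```shell
awk '!seen[$0]++' a.txt > a_unique.txt   # each distinct line, once
awk ' seen[$0]++' a.txt > a_dups.txt     # only the repeated copies
```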
I would like to print unique lines without sort or uniq. Unfortunately, the server I am working on has neither sort nor uniq. I have not been able to contact the server's administrator for several weeks to ask him to add them. (7 Replies)
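If awk is available even though sort and uniq are not (an assumption), this classic one-liner prints each line only the first time it appears, preserving input order:

```shell
awk '!seen[$0]++' infile
```

The expression increments a counter per distinct line and is true (so the line is printed) only when the counter was still zero.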
I have 84 files with the following names: splitseqs.1, splitseqs.2, etc.,
and I want to change the .number to a unique filename.
E.g.
change splitseqs.1 into splitseqs.7114_1#24
and
change splitseqs.2 into splitseqs.7067_2#4
All the current file names are unique, as are the new file names.... (1 Reply)
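Since the old-to-new pairs look arbitrary, one hedged approach is to list them in a two-column mapping file (here called names.map, a hypothetical name) and loop over it:

```shell
# names.map (hypothetical), one "old new" pair per line, e.g.:
#   splitseqs.1 splitseqs.7114_1#24
#   splitseqs.2 splitseqs.7067_2#4
while read -r old new; do
    [ -e "$old" ] && mv -- "$old" "$new"
done < names.map
```

The [ -e ] guard skips pairs whose source file is missing, so the script can be re-run safely.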
Hi guys,
I am trying to identify the number of duplicate entries in a string entered by the user. Here is the command I use:
$ user_input="M T T"
$ echo "${user_input}" | awk '{for (i = 1; i <= NF; i++) print $i}' | sort | uniq -d
The above works fine for a string with multiple letters. The problem is... (2 Replies)
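One way to both detect and count the repeats in a single awk pass, without sort or uniq (a sketch; note the field loop starts at 1, since $0 is the entire line):

```shell
user_input="M T T"
echo "$user_input" | awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
    END { for (w in count) if (count[w] > 1) print w, count[w] }'
# prints: T 2
```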
I have values in the variable, so the for loop just fetches them one by one:
params=$'$HEW_SRC_DATABASE_LIB\nprmAttunityUser\nprmAttunityPwd\nprmODBCDataSource\nprmLoadInd\nprmSrc_Lib_ATM\nprmODBCDataSource_ATM'
and I have a grep command like this:
ret=`grep \$y $pf`
... (0 Replies)
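The post is cut off, but as written, the backslash in `grep \$y` makes grep search for the literal two characters "$y" rather than the value of y. If the intent is a literal match of each parameter name, quoting the variable and using grep -F (fixed strings, so the $ in $HEW_SRC_DATABASE_LIB is not treated as a regex anchor) is the usual fix. A sketch with hypothetical file contents:

```shell
pf=params.txt                        # hypothetical parameter file
printf 'prmAttunityUser=x\nprmLoadInd=y\n' > "$pf"
for y in prmAttunityUser prmLoadInd; do
    ret=$(grep -F -- "$y" "$pf")     # -F: literal match; "$y" expands the loop variable
    echo "$y -> $ret"
done
```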
Hi,
I have a file in the below format:
test test (10)
to to (25)
see see (45)
and I need the output in the format of:
test 10
to 25
see 45
Can someone help me? (6 Replies)
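A short awk sketch: print fields 1 and 3, stripping the parentheses from field 3:

```shell
awk '{ gsub(/[()]/, "", $3); print $1, $3 }' infile
```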
Hi,
How do I eliminate duplicate values in UNIX? I have an Excel file which contains duplicate values.
I need to use this in a script.
Thanks in advance. (3 Replies)
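Assuming the spreadsheet is first saved as plain text (e.g. CSV, an assumption since UNIX tools work line by line), two common sketches:

```shell
sort -u input.csv > deduped_sorted.csv      # drops duplicate lines, reorders
awk '!seen[$0]++' input.csv > deduped.csv   # drops duplicates, keeps original order
```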
I have an archive file that holds a batch of statements. I would like to be able to extract a certain statement based on the unique customer # (e.g. 123456). The end of each statement is marked by "ENDSTM".
I can find the line number for the beginning of the statement section with sed.
... (5 Replies)
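sed can print the whole range directly, assuming the customer number appears on the statement's first line (123456 taken from the example; the archive name is hypothetical):

```shell
sed -n '/123456/,/ENDSTM/p' archive.txt
```

The -n suppresses default output, and the address range prints from the first line matching the customer number through the next ENDSTM line.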
I have input file like below.
I00789524 0213 5212
D00789524 0213 5212
I00778787 2154 5412
The first two records are the same (duplicates) except for the I and D in the first character. I want the non-duplicates (i.e. the 3rd line) as output. How can we get this? Can you help? Is there any single awk or sed... (3 Replies)
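One awk sketch: key each record on everything after the first character, then print only the records whose key occurs exactly once:

```shell
awk '{ key = substr($0, 2); count[key]++; line[key] = $0 }
     END { for (k in count) if (count[k] == 1) print line[k] }' infile
```

With several surviving records, the END loop emits them in awk's internal array order rather than input order; a two-pass version over the same file would preserve order if that matters.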