It's the same file. I want to remove the duplicates from the file and keep the result in the same file, 10_FMS_CRXtoFMS.csv.
Is there something wrong, or how can I use it then?
OK. Let's forget about the ssh complications and go back to basics. Essentially, you have the command:
This reads the file 10_FMS_CRXtoFMS.csv and adds the lines in that file that had different values in the 3rd field to the end of the file. It does not throw away the original contents of the file.
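The exact command isn't quoted in the thread, but based on the description it was presumably something like the following sketch (the `!seen[$3]++` de-duplication idiom and the comma field separator are assumptions, not taken from the thread):

```shell
# Sample data (assumed layout): the 3rd comma-separated field is the
# one being de-duplicated.
printf 'a,b,1\nc,d,1\ne,f,2\n' > 10_FMS_CRXtoFMS.csv

# APPENDS each line whose 3rd field has not been seen before to the
# end of the same file; the original lines are NOT removed, so the
# file only grows.
awk -F',' '!seen[$3]++' 10_FMS_CRXtoFMS.csv >> 10_FMS_CRXtoFMS.csv
```

Afterwards the file contains its original three lines plus the two unique-3rd-field lines appended at the end, which is the opposite of removing duplicates in place.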
If you change the command to:
you will empty the file named 10_FMS_CRXtoFMS.csv and then add any lines with unique 3rd column values to it (but since the shell emptied the file before awk ever ran, there are no lines to read and you end up with an empty file).
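That is, with a `>` redirection the shell truncates the target file before awk is even started. A sketch of the failure (the `!seen[$3]++` filter and comma separator are assumptions standing in for whatever script was actually used):

```shell
printf 'a,b,1\nc,d,1\ne,f,2\n' > 10_FMS_CRXtoFMS.csv

# DANGEROUS: the shell truncates 10_FMS_CRXtoFMS.csv to zero length
# BEFORE awk runs, so awk reads an empty file and produces no output.
awk -F',' '!seen[$3]++' 10_FMS_CRXtoFMS.csv > 10_FMS_CRXtoFMS.csv
# The original data is now gone and the file is empty.
```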
Even if it did do what you thought it was doing, you still wouldn't want to do that. If your awk script fails for some reason, you will destroy your input file and have no backup. The safer way to handle something like this is:
This writes the results to a temporary file and then moves the temporary file back to your original file's name if and only if awk completed successfully. (If awk fails, you will have the diagnostic messages awk printed, your unchanged input file, and the partial results awk produced in the temp file, so you can debug and fix the problem without losing any data. Using $$ in the temp file name lets you run the script concurrently on other files without the runs interfering with each other. In POSIX-conforming shells, $$ expands to the process ID of the shell creating the file.)
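A sketch of that safer pattern (again, the de-dup filter itself is an assumption; substitute your actual awk script):

```shell
printf 'a,b,1\nc,d,1\ne,f,2\n' > 10_FMS_CRXtoFMS.csv

# Write awk's output to a temp file named after this shell's PID ($$),
# and only replace the original if awk exits with success.
awk -F',' '!seen[$3]++' 10_FMS_CRXtoFMS.csv > 10_FMS_CRXtoFMS.csv.$$ &&
    mv -- 10_FMS_CRXtoFMS.csv.$$ 10_FMS_CRXtoFMS.csv
```

On success the file holds one line per distinct 3rd field; if awk fails, the `&&` prevents the `mv`, so the original file is untouched and the partial output is left in the `.$$` temp file for inspection.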
There are other issues to consider (and other ways to do this safely) if your input file has multiple hard links, but I'm assuming that isn't an issue for now.
I have an interactive script which works terrific at processing a folder of unsorted files into new directories.
I am wondering how I could modify my script so that (upon execution) it provides an additional labelled summary file on my desktop that lists all of the files in each directory that... (4 Replies)
Hi ,
My requirement is to scan a directory for file names with LTR.PDF*
and send those files via ftp to another server one by one.
Now the problem is that the file names are like LTR.PDF, LTR.PDF1, LTR.PDF2, ... LTR.PDF10, up to 99,
and these need to be sent in sorted order.
Is there a way to get... (10 Replies)
Hi
I have a requirement like below
I need to sort the files based on the timestamp in the file name, run them in sorted order, and then archive all the files that are one day old to a temp directory.
My files looks like this
PGABOLTXML1D_201108121235.xml... (1 Reply)
Hi All,
How can I print the sorted results of the following expression in Perl?
print "$i\t$h{$i}\n";
I tried
print (sort ("$i\t$h{$i}")"\n"); and other variations of the same but failed.
Can someone suggest how to solve this problem, as I'm trying to print the sorted results of my script, which... (11 Replies)
Say I have 2 files in the given format:
file1
1 2 3 4
1 2 3 4
1 2 3 4
file2
1 2 3 4
1 2 3 4
1 2 3 4
I have some Perl code (loaned by one of you - I forgot who - thanks!) that extracts the 2nd column from each file and appends it horizontally to a new file:
perl -ane 'push @{$L->}, $F; close... (1 Reply)
Hi all,
please give me the commands with which I can compare 2 sorted files and get the differences in a third file, indicating whether each difference is from file1 or file2.
as:
File1 (Original file)
GARRY
JOHN
JULIE
SAM
---------------
File2
DEV
GARRY
JOHN
JOHNIEE (7 Replies)
Hi,
I am trying to write a script that creates a list of all active (alive) processes sorted by size and then prints this list on screen.
Could anyone help me?
Thanks a lot (7 Replies)