When I run those scripts individually as Don Cragun wrote them, they work well. But when I try to put them into my whole script, like this:
Assume that before the code above, I read the input files into $first and $second and define a filename for $result.
Do you have another way to solve it?
How about removing duplicate lines? I have tried using this code, but I think something's missing.
Thanks in advance.
Regards,
Intan
In addition to what Scrutinizer and RudiC have already pointed out...
There is a HUGE difference between $result.csv and result.csv unless somewhere in your script you also had a shell assignment statement like: result=result
If you want our help debugging your script, you need to show us your script! (Not just bits and pieces that work fine when you run them separately, but don't work in your whole script.) Most of us aren't very good at guessing what:
and:
actually expand to in your script, but it obviously makes a huge difference in what your script will do.
Are you trying to remove all but the 1st line in your file for each unique field-one value, hoping that sort -u will do that for you? Or are you trying to remove all but the 1st line in your file for each unique field-one value and also need the output sorted?
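For what it's worth, the two interpretations really do produce different results. A small sketch with hypothetical data (field 1 repeats with different field-two values):

```shell
# Hypothetical sample: field 1 repeats with differing field 2 values.
printf '%s\n' 'a 1' 'a 2' 'b 3' 'a 1' > demo.txt

# sort -u removes duplicate *whole lines* (and sorts the output);
# "a 1" and "a 2" both survive because the full lines differ.
sort -u demo.txt          # -> a 1 / a 2 / b 3

# Keeping only the 1st line per unique field-one value instead:
awk '!seen[$1]++' demo.txt   # -> a 1 / b 3

rm -f demo.txt
```

So sort -u only helps if "duplicate" means the entire line, not just field 1.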
What operating system (including release numbers) and shell are you using? Do you need to be able to run this script only on that operating system, or are you trying to write portable code that will work on any UNIX or Linux system?
If you have a problem to solve, stop and think about what the problem is. Describe the entire problem. Describe your inputs. Describe your desired outputs.
Piecewise refinement is great when you've got a big problem to solve, but if you don't know your end target when you start, a lot of those pieces may be wasted since they won't lead to your final goal.
Please help us help you. Tell us in detail, about your inputs, your outputs, and the code you've tried to get to your goal.
I am writing a KSH script to remove duplicate lines in a file. Say the file has the format below.
FileA
1253-6856
3101-4011
1827-1356
1822-1157
1822-1157
1000-1410
1000-1410
1822-1231
1822-1231
3101-4011
1822-1157
1822-1231
and I want to simplify it with no duplicate lines as file... (5 Replies)
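A common one-liner for this, assuming the goal is to keep the first occurrence of each line and preserve the original order:

```shell
# Recreate the sample file from the post.
cat > FileA <<'EOF'
1253-6856
3101-4011
1827-1356
1822-1157
1822-1157
1000-1410
1000-1410
1822-1231
1822-1231
3101-4011
1822-1157
1822-1231
EOF

# Print each line only the first time it is seen; input order is kept.
awk '!seen[$0]++' FileA
```

If sorted output is acceptable, `sort -u FileA` does the same job without holding the distinct lines in awk's memory.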
I have a log file "logreport" that contains several lines as seen below:
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but... (18 Replies)
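Since these lines differ only in the leading timestamp, one sketch (assuming the goal is one line per distinct message, keeping the earliest timestamp) is to strip the first field before using it as the dedup key:

```shell
# Two of the complete lines from the post as sample data.
cat > logreport <<'EOF'
04:20:00 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: Agent snmpd appeared dead but responded to ping
EOF

# Copy the line, remove the leading timestamp field from the copy, and
# print the full line only the first time that message text is seen.
awk '{ k = $0; sub(/^[^ ]+ +/, "", k) } !seen[k]++' logreport
```

Only the 04:20:00 line survives; every later line carrying the same message after its timestamp is dropped.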
Hi, please help me remove duplicate lines in a file.
I have a file with a huge number of lines.
I want to remove selected lines from it.
Also, where duplicate lines exist, I want to delete the rest and keep just one of them.
Please help me with any Unix commands or even Fortran... (7 Replies)
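One sketch in shell, assuming "selected lines" means known line numbers (lines 2 and 5 here are purely hypothetical): delete them with sed, then collapse any remaining duplicates to their first occurrence with awk.

```shell
# Hypothetical six-line sample.
printf '%s\n' one two three three five one > infile

# Delete selected line numbers, then keep the first copy of each
# remaining line (original order preserved).
sed -e '2d' -e '5d' infile | awk '!seen[$0]++' > outfile
cat outfile   # -> one / three

rm -f infile outfile
```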
How do we sort and remove duplicates on columns 1 and 2, retaining the record with the maximum date (in field 3), for a file with the following format?
aaa|1234|2010-12-31
aaa|1234|2010-11-10
bbb|345|2011-01-01
ccc|346|2011-02-01
bbb|345|2011-03-10
aaa|1234|2010-01-01
Required Output
... (5 Replies)
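One way to sketch this: sort by the key fields ascending and the date descending (ISO YYYY-MM-DD dates sort correctly as plain text), then keep the first line seen for each field1|field2 pair.

```shell
# The sample data from the post.
cat > data <<'EOF'
aaa|1234|2010-12-31
aaa|1234|2010-11-10
bbb|345|2011-01-01
ccc|346|2011-02-01
bbb|345|2011-03-10
aaa|1234|2010-01-01
EOF

# -k3,3r puts the newest date first within each key group, so the first
# line awk sees per field1|field2 key is the one with the maximum date.
sort -t'|' -k1,1 -k2,2 -k3,3r data | awk -F'|' '!seen[$1 FS $2]++'
```

This prints aaa|1234|2010-12-31, bbb|345|2011-03-10, and ccc|346|2011-02-01, which looks like the required output.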
greetings,
I'm hoping there is a way to cat a file, remove duplicate lines, and send that output to a new file. The file will always vary, but will be something similar to this:
please keep in mind that the above could be eight occurrences of each hostname or it might simply have another four of an... (2 Replies)
Hey guys, I need some help fixing this script. I am trying to remove all the duplicate lines in this file.
I wrote the following script, but it does not work. What is the problem?
The output file should only contain five lines:
Later! (5 Replies)
Hi,
Please help me write a command to delete duplicate lines from a file. The size of the file is 50 MB. How do I remove duplicate lines from such a big file? (6 Replies)
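At 50 MB, sort -u is a reasonable fit: sort(1) spills to temporary files rather than holding everything in memory, though it loses the original line order. A sketch with a tiny stand-in file (the name "bigfile" is just illustrative):

```shell
# Stand-in for the real 50 MB file.
printf '%s\n' b a b a > bigfile

# Duplicate-free, sorted output; scales to files larger than memory.
sort -u bigfile > bigfile.dedup      # -> a / b

# If the original order must be kept: awk holds one copy of each
# distinct line in memory, which is fine at this file size.
awk '!seen[$0]++' bigfile > bigfile.ordered   # -> b / a

rm -f bigfile bigfile.dedup bigfile.ordered
```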
I have a text file which has blank lines. I want them removed before uploading it to the DB using SQL*Loader. Below is the command line I use to remove blank lines.
sed '/^ *$/d' /loc/test.txt
If I use the below command to replace the file after removing the blank lines, it replaces the... (6 Replies)
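The likely trap here: redirecting sed's output straight back onto the same file truncates the file before sed reads it, leaving it empty. The usual fix is to write to a temporary file and move it into place (shown with a local test.txt standing in for /loc/test.txt):

```shell
# Sample with blank and whitespace-only lines, standing in for /loc/test.txt.
printf 'one\n\n  \ntwo\n' > test.txt

# Never do: sed '/^ *$/d' test.txt > test.txt  (truncates the input).
# Instead, write to a temp file and replace the original on success.
sed '/^ *$/d' test.txt > test.txt.tmp && mv test.txt.tmp test.txt

cat test.txt   # -> one / two
rm -f test.txt
```

Note that `/^ *$/` only matches spaces; `/^[[:space:]]*$/` also catches tab-only lines.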
I have a csv file from which I would like to remove duplicate lines based on field 1, then sort. I don't care about any of the other fields, but I still want to keep their data intact. I was thinking I could do something like this, but I have no idea how to print the full line with this. Please show any method... (8 Replies)
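One sketch (the sample rows are hypothetical, since the post's data isn't shown): awk keyed on field 1 prints the *whole* line by default, so the other fields stay intact, and a sort on field 1 finishes the job.

```shell
# Hypothetical csv where field 1 repeats on one row.
cat > data.csv <<'EOF'
b,2,keep
a,1,first
a,9,dup
c,3,keep
EOF

# !seen[$1]++ keeps the first full line per field-1 value; the default
# awk action prints $0, so nothing but the duplicate rows is lost.
awk -F, '!seen[$1]++' data.csv | sort -t, -k1,1
```

This prints a,1,first then b,2,keep then c,3,keep.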
Hi,
I have a csv file which contains some millions of lines in it.
The first line (the header) repeats at every 50000th line. I want to remove all the duplicate headers from the second occurrence onward (it should not remove the first line).
I don't want to use any pattern from the Header as I have some... (7 Replies)
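Since no hand-written header pattern is wanted, one approach is to capture the first line itself and drop every later line that matches it exactly. A sketch with a small hypothetical file (the real file repeats its header every 50000 lines, but the logic is the same):

```shell
# Hypothetical csv with the header repeated mid-file.
cat > report.csv <<'EOF'
ID,NAME
1,foo
ID,NAME
2,bar
EOF

# Remember line 1 (the header) and print it; afterwards, print only
# lines that are not an exact repeat of that header.
awk 'NR == 1 { hdr = $0; print; next } $0 != hdr' report.csv
```

Output keeps the first ID,NAME line and both data rows, dropping only the repeated headers; it streams, so millions of lines are no problem.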