Hi everybody,
Could anybody tell me how I can delete repeated rows from a file? That is, for example, I have a file like this:
0.490 958.73 281.85 6.67985 0.002481
0.490 954.833 283.991 8.73019 0.002471
0.590 950.504 286.241 6.61451 0.002461
0.690 939.323 286.112 6.16451 0.00246
0.790... (8 Replies)
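A common one-liner for this, sketched on data shaped like the question's sample (I'm assuming "repeated" means rows sharing the same value in column 1, since the first two rows above both start with 0.490; use `$0` instead of `$1` if whole lines must match):

```shell
# sample rows from the question (the first two share the value in column 1)
printf '0.490 958.73 281.85\n0.490 954.833 283.991\n0.590 950.504 286.241\n' > datafile

# keep only the first row seen for each value of column 1
awk '!seen[$1]++' datafile
```

The `seen` array counts how often each key has appeared; the `!` makes the expression true (and the line print) only the first time.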
Hi,
I have a file with 1000 rows. Now I would like to remove 10 rows from it. Please give me the script.
Eg:
input file like
4 1 4500.0 1
5 1 1.0 30
6 1 1.0 4500
7 1 4.0 730
7 2 500000.0 730
8 1 785460.0 45
8 7 94255.0 30
9 1 31800.0 30
9 4 36000.0 30
10 1 15000.0 30... (5 Replies)
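If the 10 rows form a contiguous range, sed's line-address syntax does it directly (a sketch on a stand-in file; `input_file` and the range 1-10 are placeholders to adjust):

```shell
# make a small stand-in for the 1000-row file
seq 1 20 > input_file

# delete rows 1 through 10; for scattered rows use e.g. sed '3d;7d;12d'
sed '1,10d' input_file > output_file
```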
I have a file content like below.
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""... (5 Replies)
Hello to all members,
I am very new to Unix (shell scripting), but I want to learn a lot. I am an ex-Windows user, but now I am absolutely a Linux super user... :D
So I am trying to make a function to do this:
I have two CSV files containing only numbers; in the first one I have:
1
2
3
4
5... (6 Replies)
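The question is cut off, but a common task with two such files is finding the numbers in the first file that are absent from the second; `comm` handles that for sorted input (a sketch under that assumption, with toy file names):

```shell
# two toy CSV files of numbers, stand-ins for the files in the question
printf '1\n2\n3\n4\n5\n' > file1.csv
printf '2\n4\n' > file2.csv

# comm needs sorted input; -23 suppresses lines unique to file2 and lines common to both,
# leaving only the lines unique to file1
sort file1.csv > f1.sorted
sort file2.csv > f2.sorted
comm -23 f1.sorted f2.sorted
```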
I need to delete rows based on the number of lines in a different file. I have a working piece of code, but when I merge it with my C application, it doesn't work.
sed '1,'\"`wc -l < /tmp/fileyyyy`\"'d' /tmp/fileA > /tmp/filexxxx
Can anyone give me an alternative solution for the above? (2 Replies)
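One alternative that avoids the nested quoting (which is likely what breaks when the command is embedded in a C string) is to compute the count first and pass it to awk; a sketch with the /tmp paths shortened to local stand-ins:

```shell
# stand-ins for /tmp/fileyyyy and /tmp/fileA from the question
seq 1 3 > fileyyyy
seq 1 10 > fileA

# skip as many leading lines of fileA as fileyyyy contains
n=$(wc -l < fileyyyy)
awk -v n="$n" 'NR > n' fileA > filexxxx
```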
I have a Unix file with 200,000 records, and need to remove all records from the file that have the character 'I' in position 68 (68 bytes from the left). I have searched for similar problems and it appears that it would be possible with sed, awk or perl, but I do not know enough about any of these... (7 Replies)
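awk's `substr` makes the position test readable (a sketch; `records` stands in for the real file, and the test data below just pads to 68 characters):

```shell
# build two 68-character test records: one with 'I' in position 68, one without
{ printf '%067sI\n' ''; printf '%067sX\n' ''; } > records

# keep only the records whose 68th character is not 'I'
awk 'substr($0, 68, 1) != "I"' records > kept
```

This assumes single-byte characters, so byte 68 and character 68 coincide; with multi-byte data the byte offset would need different handling.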
Hi all,
I have a big file (about 6 million rows) and I have to delete the occurrences stored in a small file (about 9000 rows). I have tried this:
while read line
do
grep -v $line big_file > ok_file.tmp
mv ok_file.tmp big_file
done < small_file
It works, but is very slow.
How... (2 Replies)
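The loop is slow because it rescans the 6-million-row file once per pattern; `grep -f` reads all the patterns at once and makes a single pass. A sketch on toy stand-ins for the two files:

```shell
# stand-ins for the question's files
printf 'aaa\nbbb\nccc\nddd\n' > big_file
printf 'bbb\nddd\n' > small_file

# one pass instead of one grep per line; -F treats the patterns as fixed strings
# (faster, no regex surprises), -x requires whole-line matches -- drop -x if the
# small file holds substrings rather than complete lines
grep -vxFf small_file big_file > ok_file.tmp && mv ok_file.tmp big_file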
Hi All,
I am new to Unix. Please help me write code to delete all records from the file where every column after column 5 is either 0, #MI or NULL.
Initial 5 columns are string
e.g.
"alsod" "1FEV2" "wjwroe" " wsse" "hd3" 1 2 34 #Mi
"malasl" "wses" "trwwwe" " wsse" "hd3" 1 2 0... (4 Replies)
LEARN ABOUT PHP
sybase_affected_rows
SYBASE_AFFECTED_ROWS(3)
sybase_affected_rows - Gets number of affected rows in last query
SYNOPSIS
int sybase_affected_rows ([resource $link_identifier])
DESCRIPTION
sybase_affected_rows(3) returns the number of rows affected by the last INSERT, UPDATE or DELETE query on the server associated with the
specified link identifier.
This command is not effective for SELECT statements, only on statements which modify records. To retrieve the number of rows returned from
a SELECT, use sybase_num_rows(3).
PARAMETERS
o $link_identifier
- If the link identifier isn't specified, the last opened link is assumed.
RETURN VALUES
Returns the number of affected rows, as an integer.
EXAMPLES
Example #1
Delete-Query
<?php
/* connect to database */
sybase_connect('SYBASE', '', '') or
die("Could not connect");
sybase_select_db("db");
sybase_query("DELETE FROM sometable WHERE id < 10");
printf("Records deleted: %d
", sybase_affected_rows());
?>
The above example will output:
Records deleted: 10
SEE ALSO sybase_num_rows(3).
PHP Documentation Group SYBASE_AFFECTED_ROWS(3)