This is my first post; I am hoping to get help from the forum.
In a directory I have 5000 files, each containing around 4000 rows of 10 columns, with a unique string 'AT' located in the 4th column.
(Step-1) Rows at the bottom of each file (only those containing the string AT) need to be removed ONLY if the 9th column is greater than 0.10. The rows that are kept shall then be saved into a new file. A loop is required to do this over the series of 5000 files.
(Step-2) Next, a program 'calc' will be executed on these new files one by one. Again, if the 9th column is greater than 0.10 (only for rows containing the string AT), the corresponding row shall be removed from the file. The kept rows shall be saved under a new file name.
I have written a short bash script below to execute the program 'calc' on the series of files in the directory; even this small piece of Linux code took me an entire day to figure out, because I have no experience writing code.
-------
-------
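Something along these lines might work for Steps 1-2. This is only a sketch: the `*.dat` pattern, the `_step1`/`_step2` naming, and the way 'calc' is invoked are all assumptions to be adapted to the real setup.

```shell
# Step 1: for every file, drop rows whose 4th column is AT
# and whose 9th column is greater than 0.10; keep the rest.
for f in *.dat; do
    awk '!($4 == "AT" && $9 > 0.10)' "$f" > "${f%.dat}_step1.dat"
done

# Step 2: run 'calc' on each new file, then apply the same
# filter again and save the kept rows under another new name.
for f in *_step1.dat; do
    calc "$f"
    awk '!($4 == "AT" && $9 > 0.10)' "$f" > "${f%_step1.dat}_step2.dat"
done
```

The awk condition is negated with `!` so that all non-AT rows pass through untouched and only the AT rows above the 0.10 threshold are dropped.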
(Step-3) Finally, all files that contain the same number of lines (i.e. 3098, 3095, 3097, etc.) shall be concatenated into a single file per line count. In this case, from the original 5000 files, the output could for example be divided into:
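Step 3 could be sketched like this; `combined_<count>.txt` is a made-up naming scheme, and `*_step2.dat` stands for whatever the Step-2 output files are actually called.

```shell
# Append every file onto a combined file named after its line count,
# so all files with e.g. 3098 lines end up in combined_3098.txt.
for f in *_step2.dat; do
    n=$(wc -l < "$f")
    cat "$f" >> "combined_${n}.txt"
done
```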
Thank you so much for your time and attention.
-A
To tackle the problem step by step, I first need to remove matching lines by string and value.
In GNU/Linux x86_64:
The code above prints rows whose 4th column matches the string AT into newfile. BUT I need to tell the script to do this ONLY if the 9th column has a value between 0.00 and 0.10. How can I do that in the bash shell?
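One way to express that condition in awk, assuming `file.txt` and `newfile` stand in for the real names:

```shell
# Print only rows whose 4th column is AT AND whose 9th column
# lies in the inclusive range 0.00-0.10.
awk '$4 == "AT" && $9 >= 0.00 && $9 <= 0.10' file.txt > newfile
```

Because `$9` is compared against numbers, awk performs a numeric comparison, so values like `0.05` and `.2` are handled correctly.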
Please help.
-A
Hi
I have been struggling with a script for removing duplicate messages from a shared mailbox.
I would like to search for duplicate messages based on the “Message-ID” string within the message files.
I have managed to find the duplicate “Message-ID” strings and (if I wanted to) delete... (1 Reply)
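A hedged sketch of the duplicate search: the `Maildir/cur` layout is an assumption, and the loop only *reports* duplicates; swap the `echo` for `rm` once the list looks right.

```shell
# Record each Message-ID the first time it is seen; any later file
# carrying an already-seen ID is reported as a duplicate.
seen=$(mktemp)
for f in Maildir/cur/*; do
    id=$(grep -m1 -i '^Message-ID:' "$f")
    [ -z "$id" ] && continue              # skip files without the header
    if grep -Fqx "$id" "$seen"; then
        echo "duplicate: $f"
    else
        printf '%s\n' "$id" >> "$seen"    # first occurrence of this ID
    fi
done
rm -f "$seen"
```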
I have data like:
Blue Apple 6
Red Apple 7
Yellow Apple 8
Green Banana 2
Purple Banana 8
Orange Pear 11
What I want to do is: if $2 in a row is the same as $2 in the previous row, remove that row. An identical $2 may occur more than once.
So the out file would look like:
Blue... (4 Replies)
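Since only *adjacent* rows are compared, a one-liner that remembers the previous row's second field is enough (`infile` is a placeholder name):

```shell
# Print a row only when its $2 differs from the previous row's $2.
awk '$2 != prev { print } { prev = $2 }' infile
```

On the sample above this keeps `Blue Apple 6`, `Green Banana 2` and `Orange Pear 11`.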
Hi, I have a file with data as shown below. Here I need to remove duplicate rows in such a way that only columns 2, 3, 4 and 5 are checked for duplicates. When deleting duplicates, the largest row, i.e. the one with the most columns holding values, should be kept. Then it must remove duplicates such that by... (11 Replies)
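One possible approach, assuming "largest row" means the row with the most fields: prefix each line with its field count, sort so the fullest rows come first, then keep the first line seen for each (2,3,4,5)-column key. `infile`/`outfile` are placeholder names.

```shell
# 1) tag lines with NF, 2) sort numerically descending so the row with
# the most fields leads its group, 3) strip the tag, 4) keep the first
# row per key built from columns 2-5.
awk '{ print NF "\t" $0 }' infile |
    sort -rn |
    cut -f2- |
    awk '!seen[$2 FS $3 FS $4 FS $5]++' > outfile
```

Note that the output order follows the sort, not the original file; a final renumbering or re-sort may be wanted.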
Hello Folks..
I need your help ..
here is an example of my problem. I know it's easy, but I don't know all the commands in Unix to do this, especially sed. Here is my string:
dwc2_dfg_ajja_dfhhj_vw_dec2_dfgh_dwq
the desired output is:
dwc2_dfg_ajja_dfhhj
it's a simple task with tail... (5 Replies)
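If the rule is always "keep the first four underscore-separated fields" (an assumption, since only one sample is shown), `cut` does it without sed:

```shell
# Keep fields 1-4 of the underscore-delimited string.
echo 'dwc2_dfg_ajja_dfhhj_vw_dec2_dfgh_dwq' | cut -d_ -f1-4
```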
Dear masters,
I am stuck again in a very tricky situation, so I need your valuable input. I have a file with rows as below:
_Db
_Database 1023 1 1 1 17.0B 0.2 1.0
_Field
_Field-Name 3 2 2 11 56.2K 64.1 ... (5 Replies)
Hi Folks,
I am new to ksh. I have an Informatica parameter file that I need to update every day with a shell script. I need your help updating this file with new parameters.
sample data
$$TABLE1_DATE=04-27-2011
$$TABLE2_DATE=04-23-2011
$$TABLE3_DATE=03-19-2011
.......Highligned... (4 Replies)
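A hedged sketch for rewriting one of those date lines in place: `parmfile.txt` and the new date are placeholders, and GNU `sed -i` overwrites the file, so keep a backup the first time.

```shell
# Replace whatever follows $$TABLE1_DATE= with the new date.
# \$\$ makes the two dollar signs literal inside the sed pattern.
new=04-28-2011
sed -i 's/^\(\$\$TABLE1_DATE=\).*/\1'"$new"'/' parmfile.txt
```

The same pattern works for the other `$$TABLE*_DATE` lines by changing the parameter name, or by looping over a list of name/date pairs.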
Gurus,
I am relatively new to Unix scripting and am stuck with a problem in my script. I have a positional input file which has a FLAG indicator at position 11 in every record of the file.
If the flag has the value Y, then the record from the input needs to be written to a new file. However, if... (3 Replies)
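For a fixed-width file, `substr` can test the single character at position 11 directly; `input.dat` and `flagged_y.dat` are made-up names:

```shell
# Write out only the records whose 11th character is Y.
awk 'substr($0, 11, 1) == "Y"' input.dat > flagged_y.dat
```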
I am populating an array of strings and printing it,
but it goes into an infinite loop and causes a segfault.
#include <stdio.h>

char *Name[] = {        /* was: char Name = { ... } -- must be an array of char pointers */
    "yahoo",
    "rediff",
    "facebook",
    NULL
};

int main(int argc, char *argv[])   /* was: main(int argc, char* argv) */
{
    int j = 0;
    while (Name[j] != NULL)        /* stop at the NULL sentinel */
        printf("%s\n", Name[j++]);
    return 0;
} (7 Replies)
I am trying to see if I can use awk to remove duplicate lines from a file. This is the file:
-==> Listvol <==
deleting /vol/eng_rmd_0941
deleting /vol/eng_rmd_0943
deleting /vol/eng_rmd_0943
deleting /vol/eng_rmd_1006
deleting /vol/eng_rmd_1012
rearrange /vol/eng_rmd_0943
... (6 Replies)
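The classic awk idiom keeps the first occurrence of every line and drops later repeats, without needing the file to be sorted (`listvol.txt` is a placeholder name):

```shell
# seen[$0]++ is 0 (false) the first time a line appears, so the
# implicit print fires once per distinct line.
awk '!seen[$0]++' listvol.txt
```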
Hi ALL,
We have a requirement: in a file, I have multiple rows.
Example below:
Input file rows
01,1,102319,0,0,70,26,U,1,331,000000113200000011920000001212
01,1,102319,0,1,80,20,U,1,241,00000059420000006021
I need my output file to be as mentioned below. The last field should split for... (4 Replies)
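The requirement above is truncated, so this is only a guess: both sample values divide evenly into 10-character groups (30 and 20 characters), so the sketch breaks the last comma-separated field into 10-character fields.

```shell
# Replace the last field with its first 10 characters, then append
# each remaining 10-character group as an extra comma-separated field.
awk -F, 'BEGIN { OFS = "," } {
    last = $NF
    $NF = substr(last, 1, 10)
    for (i = 11; i <= length(last); i += 10)
        $0 = $0 OFS substr(last, i, 10)
    print
}' inputfile
```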