This will show the sum of hits on all lines in the input file:
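A minimal sketch, with "host" standing in for whatever string is being counted; gsub() returns the number of matches it replaced on the line (on systems where awk is the old awk, use nawk):

  awk '{ total += gsub(/host/, "&") } END { print total + 0 }' inputfile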
This would show it per line:
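And per line, printing one count for each input line (same placeholder pattern):

  awk '{ print gsub(/host/, "&") }' inputfile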
Hi,
Thanks for your reply.
Testing it out gives the results below:
Using nawk instead of awk gives 1 instead of 0. Shouldn't tolower work as well with awk?
Also, I am testing it with the Oracle tnsping command.
- Then I tried to grep for Host, expecting to get 2, but got 0 (zero) instead. I tried using nawk and it gives 0 (zero) as well. Any ideas?
- BTW, FYI, I am kind of hoping to be able to parse the tnsping output and reference each value as a shell variable, so that, for example, echo $load_balance gives off, but that's for another post I guess. For the moment, I just want to be able to count occurrences of a string.
Right, I forgot you needed it to be case insensitive.
I know, no big help but it worked for me:
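A case-insensitive sketch along those lines, lower-casing a copy of each line first (tolower() is missing from some old awk implementations, e.g. Solaris /usr/bin/awk, which would explain plain awk printing 0 while nawk works):

  awk '{ line = tolower($0); total += gsub(/host/, "&", line) }
       END { print total + 0 }' inputfile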
For parsing the other output with "Host", you should use something like this, because the word doesn't stand alone as a field with awk's default field separator:
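Something like this counts the substring anywhere in the line instead of relying on field splitting ("mydb" and the upper-case HOST are only assumptions about the tnsping alias and output):

  tnsping mydb | awk '{ line = toupper($0); hits += gsub(/HOST/, "&", line) }
                      END { print hits + 0 }'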
I have some text files in a folder f1 with 10 columns. The first five columns of a file are shown below.
aab abb 263-455 263 455
aab abb 263-455 263 455
aab abb 263-455 263 455
bbb abb 26-455 26 455
bbb abb 26-455 26 455
bbb aka 264-266 264 266
bga bga 230-232 230 ... (10 Replies)
I have 500 text files in a folder. The data of the text files are shown below.
USA Germany 23-12
USA Germany 23-12
USA Germany 23-12
France Germany 15-12
France Germany 15-12
France Italy 25-50
China China 30-32
China China 30-32
I would... (1 Reply)
Based on the forums, I have tried the grep command but I am unable to get the required output.
Search for this value: /*------
If that is found, then search for temp_vul and print it,
and also search until /*------- and print new_vul (a sketch follows below the sample).
Input file contains:
... (5 Replies)
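Since the sample input is cut off above, this is only a guess at the layout: an awk sketch that treats the /*------ lines as block markers and prints the temp_vul / new_vul lines between them:

  awk '/\/\*------/ { inblock = !inblock; next }
       inblock && /temp_vul|new_vul/' inputfile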
Hi,
I have a text file as shown below. I would like to count the number of unique connections of each person in the first and second columns. The third column is the ID numbers of the first-column persons and the fourth column is the ID numbers of the second-column persons.
susan ali 156 294... (7 Replies)
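A hedged awk sketch for that, counting each distinct name pair only once and then tallying partners for both columns (assuming those pairs are what "connections" means here; the file name is a placeholder):

  awk '!seen[$1, $2]++ { conn[$1]++; conn[$2]++ }
       END { for (p in conn) print p, conn[p] }' inputfile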
Hi
I'm a very inexperienced bioinformatician.
I have a large DNA file with about 10000 lines of sequence and need to count the occurrences of TA on each line,
for example in the file
TACGCGCGATA
TATATATA
GGCGCGTATA
I would like to get an output like:
2
4
2
I have tried... (3 Replies)
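A minimal sketch for the per-line TA count, using gsub()'s return value ("dna.txt" is just a placeholder file name):

  awk '{ print gsub(/TA/, "&") }' dna.txt

On the three example lines above this prints 2, 4 and 2.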
Can anyone help me count the number of occurrences of strings based on column position? Say I have 300 files with a 1000-character record length, from which I need to count the occurrences of the string that sits in columns 213 to 219. Some may be unique and some may be repeated. (8 Replies)
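A sketch under the assumption that "213 to 219" means character positions in fixed-width records ("*.txt" stands in for the 300 files):

  awk '{ count[substr($0, 213, 7)]++ }
       END { for (v in count) print v, count[v] }' *.txt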
Hi,
I have the following text in a file:
ISA*00* *00* *ZZ*ENS_EDI *ZZ*GATE0215 *110106*2244*U*00401*006224402*1*P*>~
GS*HP*ENS_EDI*GATE0215*20110106*2244*6224402*X*004010X091A1~
ST*835*00006~... (2 Replies)
I have a sorted file like:
Apple 3
Apple 5
Apple 8
Banana 2
Banana 3
Grape 31
Orange 7
Orange 13
I'd like to search $1, and if $1 is not the same as $1 in the previous row, print that row along with the number of times $1 was found.
so the output would look like:
Apple 8 3
Banana... (2 Replies)
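A sketch of one way to do that in awk on the sorted file (the file name is a placeholder):

  awk 'NR > 1 && $1 != prev { print prevline, cnt; cnt = 0 }
       { prev = $1; prevline = $0; cnt++ }
       END { print prevline, cnt }' sortedfile

On the sample above this prints "Apple 8 3", "Banana 3 2", "Grape 31 1", "Orange 13 2".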
I use grep -c often, but cannot for the life of me count the number of occurrences of a string on the same line (or within a file):
$ cat myfile
hello457903485897hello
34329048hellojsdfkljlaskdjgh182390
$ grep -c hello myfile
2
$
How do I count the number of occurrences of "hello" in myfile (i.e. 3)?... (6 Replies)
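One hedged way is to let gsub() count every match on each line instead of grep -c's per-line count:

  awk '{ n += gsub(/hello/, "&") } END { print n + 0 }' myfile

With GNU grep (and some others), grep -o hello myfile | wc -l gives the same answer, 3 here.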