Posted by jacobs.smith on 01-29-2013
awk - Division with condition

Hi Friends,

I have an input file like this

Code:
cat input
chr1 100 200 1 2
chr1 120 130 na 1
chr1 140 160 1 na
chr1 170 180 na na
chr1 190 220 0 0
chr1 220 230 nd 1
chr2 330 400 1 nd
chr2 410 450 nd nd
chr3 500 700 1 1

I want to divide the 4th column by the 5th column and append the result as a new 6th column. But if either of the two columns contains

Code:
na or nd or 0

I want the appended value to be nf.

So, my output would be

Code:
cat output
chr1 100 200 1 2 0.5
chr1 120 130 na 1 nf
chr1 140 160 1 na nf
chr1 170 180 na na nf
chr1 190 220 0 0 nf
chr1 220 230 nd 1 nf
chr2 330 400 1 nd nf
chr2 410 450 nd nd nf
chr3 500 700 1 1 1


I already tried a plain division, but it throws an error: fatal: division by zero attempted.
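One way to avoid the error is to test both columns before dividing, printing nf whenever either of them is na, nd, or 0. Below is a minimal sketch of that guard (assuming the whitespace-separated five-column layout shown above):

Code:
awk '{
    if ($4 == "na" || $4 == "nd" || $4 == 0 ||
        $5 == "na" || $5 == "nd" || $5 == 0)
        print $0, "nf"       # row cannot be divided, flag it
    else
        print $0, $4 / $5    # $5 is a nonzero number here, so the division is safe
}' input

Since awk coerces non-numeric strings to 0 in arithmetic context, the same test can be shortened to ($4+0 == 0 || $5+0 == 0), which catches na, nd, and 0 in a single expression.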

Thanks in advance.
 
