You didn't say what you wanted when a record was missing; this version puts *NONE*.
Sorry, friend. When I work with files like these I run into similar situations, so I was wondering how to handle them. Thanks for the update.
If I have more than 4 columns in each file, and the first three columns still need to be matched while the remaining columns are printed, how do I do it?
1.txt
Quote:
chr1 1 2 3 4 5 6
chr2 a b c d e f
chr3 f g h l o p
2.txt
Quote:
chr1 1 2 3 7 8 9
chr2 a b c u j i
chr3 f g h i j k
3.txt
Quote:
chr1 1 2 3 10 11 12
chr2 a b c 3 4 5
chr3 f g h 6 7 8
4.txt
Quote:
chr1 1 2 3 9 8 7
chr2 a b c 0 2 6
chr3 f g h 3 1 2
Output.txt
Quote:
chr1 1 2 3 4 5 6 7 8 9 10 11 12 9 8 7
chr2 a b c d e f u j i 3 4 5 0 2 6
chr3 f g h l o p i j k 6 7 8 3 1 2
Please note that my files are unsorted and not all files have common values. If file one has 100 records, another might have only 50, but some records will be common.
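One way to sketch this with awk: read all four files, use the shared leading fields as a hash key (so the files don't need to be sorted), and append each file's trailing columns to that key's row. Note the sample output actually shares four leading fields (the ID plus three values), so the sketch keys on $1-$4; adjust the loop bounds if your real key is only three columns wide. Keys missing from some files simply end up with fewer appended columns; you could append *NONE* placeholders instead if you need fixed-width rows.

```shell
# Recreate the sample files, then merge on the common leading fields.
cat > 1.txt <<'EOF'
chr1 1 2 3 4 5 6
chr2 a b c d e f
chr3 f g h l o p
EOF
cat > 2.txt <<'EOF'
chr1 1 2 3 7 8 9
chr2 a b c u j i
chr3 f g h i j k
EOF
cat > 3.txt <<'EOF'
chr1 1 2 3 10 11 12
chr2 a b c 3 4 5
chr3 f g h 6 7 8
EOF
cat > 4.txt <<'EOF'
chr1 1 2 3 9 8 7
chr2 a b c 0 2 6
chr3 f g h 3 1 2
EOF

awk '{
  key = $1 FS $2 FS $3 FS $4          # the shared leading fields form the key
  if (!(key in row)) { order[++n] = key; row[key] = key }
  for (i = 5; i <= NF; i++) row[key] = row[key] FS $i   # append trailing columns
}
END { for (i = 1; i <= n; i++) print row[order[i]] }' 1.txt 2.txt 3.txt 4.txt > Output.txt

cat Output.txt
```

The `order` array preserves first-seen order, so the output rows come out in the order the keys first appear, regardless of how the later files are sorted.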
How would you change the 5th column in the data file to the value in the second column of the error_correction.txt file? You also have to match an extra variable: column 3 of the error_correction file against column 6 of data.txt.
data.txt:
vgr,bugatti veron,,3.5,Maybe,6,.......,ax2,....... (0 Replies)
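The data row above is truncated, so the sample files below are invented; the only thing taken from the description is the rule itself: where error_correction.txt column 3 equals data.txt column 6, replace data.txt column 5 with error_correction.txt column 2. The layout of error_correction.txt (replacement in $2, join key in $3) is an assumption.

```shell
# Hypothetical error_correction.txt: $2 is the replacement value, $3 the join key.
cat > error_correction.txt <<'EOF'
Maybe,Yes,6
EOF
# Hypothetical data.txt row (fields after $6 are placeholders).
cat > data.txt <<'EOF'
vgr,bugatti veron,,3.5,Maybe,6,foo,ax2,bar
EOF

awk -F, 'BEGIN { OFS = FS }
NR == FNR { fix[$3] = $2; next }    # first file: remember replacement keyed by $3
$6 in fix { $5 = fix[$6] }          # second file: rewrite $5 when $6 has a correction
{ print }' error_correction.txt data.txt
```

Setting `OFS = FS` matters: assigning to `$5` makes awk rebuild the record, and without it the commas would be replaced by spaces.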
Hi ,
I have a file with multiple rows of data. I want to match a pattern in two columns, and if both conditions are satisfied, increment a counter by 1 and finally print the count. How do I proceed...
I tried in this way...
awk -F, 'BEGIN {cnt = 0} {if $6 == "VLY278" &&... (6 Replies)
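Two things worth noting about the attempt above: in awk an `if` needs parentheses (`if ($6 == ...)`), and for a simple count the condition can sit in the pattern position with no `if` at all. A minimal sketch, assuming the second test is on $7 and that the sample rows and the value "ACTIVE" are invented here:

```shell
cat > rows.csv <<'EOF'
a,b,c,d,e,VLY278,ACTIVE
a,b,c,d,e,VLY278,CLOSED
a,b,c,d,e,OTHER,ACTIVE
EOF

# Count lines where both column tests hold; cnt+0 prints 0 when nothing matched.
awk -F, '$6 == "VLY278" && $7 == "ACTIVE" { cnt++ } END { print cnt + 0 }' rows.csv
```

No `BEGIN {cnt = 0}` is needed: an uninitialized awk variable is 0 in numeric context, and `cnt + 0` covers the case where the counter was never touched.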
I have one single file, shown below, and I need to break each ST|850 & SE block into a separate file using a Unix script. The example below should create 3 files. We can use ST & SE to filter, as these field names will remain the same.
Please advise on the Unix code.
ST|850
BEG|PO|1234
LIN|1|23
SE|4
ST|850... (3 Replies)
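A sketch of one approach: let each `ST` segment open a fresh output file and each `SE` segment close it. The input here is padded out to three transactions since the original is truncated, and the `part%03d.txt` naming is an assumption.

```shell
cat > edi.txt <<'EOF'
ST|850
BEG|PO|1234
LIN|1|23
SE|4
ST|850
BEG|PO|5678
SE|2
ST|850
BEG|PO|9012
SE|2
EOF

awk -F'|' '
$1 == "ST" { out = sprintf("part%03d.txt", ++n) }  # each ST segment starts a new file
out != "" { print > out }                          # write only while inside a block
$1 == "SE" { close(out); out = "" }                # SE ends the current transaction
' edi.txt
```

The `close(out)` call matters when there are many transactions, since awk has a limit on simultaneously open files.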
Hello ,
I have a comma-delimited file with over 20 fields that I need to run some validations on. I have to check if certain fields are null, write any line containing a null field into a new file, and then delete that line from the current file.
Can someone tell me how i could go... (2 Replies)
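One way to sketch this: route each line to either a rejects file or a clean copy in a single awk pass, then replace the original with the clean copy. Which of the 20+ fields must be non-null is not stated, so checking fields 2 and 3 here is an assumption; extend the condition as needed.

```shell
cat > input.csv <<'EOF'
1,alice,x,y
2,,x,y
3,bob,,y
EOF

# Lines with an empty field 2 or 3 go to rejects.csv; the rest form the clean copy.
awk -F, '
$2 == "" || $3 == "" { print > "rejects.csv"; next }
{ print > "clean.csv" }' input.csv

mv clean.csv input.csv
```

This avoids editing the file in place: the original is only replaced once the pass has finished, so an interrupted run leaves it untouched.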
Hi,
I wasn't quite sure how to title this one! Here goes:
I have some already partially parsed log files which I now need to extract info from. Because of their original form, and the fact that they have already been partially processed, I can't make any assumptions about the number of... (8 Replies)
Hi All
I have an awk/sed requirement for the problem below.
I have tried multiple options in sed and awk, but the right output is not coming out.
Problem Description
###############################################################
I have a big file, say file, containing repeated... (4 Replies)
Hi,
I have 2 tab-delimited input files as follows.
file1.tab:
green A apple
red B apple
file2.tab:
apple - A;Z
Objective:
Return $1 of file1 if:
- $1 of file2 matches $3 of file1, and
- any single element (separated by ";") in $3 of file2 is present in $2 of file1
In order to... (3 Replies)
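The objective above translates fairly directly into a two-pass awk sketch: first index every ";"-separated element of file2's $3 under its $1, then scan file1 for rows whose ($3, $2) pair was indexed. On the sample data, only "green" qualifies (apple's list is A;Z, and A appears in file1's $2).

```shell
printf 'green\tA\tapple\nred\tB\tapple\n' > file1.tab
printf 'apple\t-\tA;Z\n' > file2.tab

awk -F'\t' '
NR == FNR {                              # file2: index every ";"-separated element
  n = split($3, a, ";")
  for (i = 1; i <= n; i++) ok[$1 SUBSEP a[i]]
  next
}
($3, $2) in ok { print $1 }              # file1: $3 matched file2 $1, $2 in its list
' file2.tab file1.tab
```

`($3, $2) in ok` uses awk's built-in SUBSEP pairing, which is why the index keys are built with `$1 SUBSEP a[i]`.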
In the awk below I am trying to add a penalty to a score for each matching $1 in file2, based on the sum of $3+$4 (variable TL) from file1. Then the $4 value in file1 is divided by TL and multiplied by 100 (this value is variable S). Finally, $2 in file2 minus S gives the updated $2 result in file2.... (2 Replies)
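The thread is truncated, so the rows below are invented; the sketch just encodes the arithmetic as described: TL = $3 + $4 from file1, S = ($4 / TL) * 100, and each matching $1 in file2 gets $2 replaced by $2 - S.

```shell
# Hypothetical inputs: f1 holds the TL components, f2 the scores to penalize.
printf 'geneA x 60 40\n' > f1
printf 'geneA 90 keep\ngeneB 50 keep\n' > f2

awk '
NR == FNR { tl = $3 + $4; pen[$1] = ($4 / tl) * 100; next }  # S = ($4 / TL) * 100
$1 in pen { $2 = $2 - pen[$1] }                              # subtract penalty from $2
{ print }' f1 f2
```

For geneA: TL = 100, S = 40, so 90 becomes 50; geneB has no entry in f1 and passes through unchanged.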
In the awk below I am trying to copy and paste each matching line in f2 to $3 in f1 if $2 of f1 is in the line in f2 somewhere. There will always be a match (usually more than 1), and my actual data is much larger (several hundred lines) in both f1 and f2. When the line in f2 is pasted to $3 in... (4 Replies)
Discussion started by: cmccabe
4 Replies
LEARN ABOUT DEBIAN
tabix
tabix(1)                          Bioinformatics tools                          tabix(1)
NAME
bgzip - Block compression/decompression utility
tabix - Generic indexer for TAB-delimited genome position files
SYNOPSIS
bgzip [-cdhB] [-b virtualOffset] [-s size] [file]
tabix [-0lf] [-p gff|bed|sam|vcf] [-s seqCol] [-b begCol] [-e endCol] [-S lineSkip] [-c metaChar] in.tab.bgz [region1 [region2 [...]]]
DESCRIPTION
Tabix indexes a TAB-delimited genome position file in.tab.bgz and creates an index file in.tab.bgz.tbi when region is absent from the command-line. The input data file must be position sorted and compressed by bgzip which has a gzip(1) like interface. After indexing, tabix is able to quickly retrieve data lines overlapping regions specified in the format "chr:beginPos-endPos". Fast data retrieval also works over network if URI is given as a file name and in this case the index file will be downloaded if it is not present locally.
OPTIONS OF TABIX
-p STR Input format for indexing. Valid values are: gff, bed, sam, vcf and psltab. This option should not be applied together with any of -s, -b, -e, -c and -0; it is not used for data retrieval because this setting is stored in the index file. [gff]
-s INT Column of sequence name. Option -s, -b, -e, -S, -c and -0 are all stored in the index file and thus not used in data retrieval. [1]
-b INT Column of start chromosomal position. [4]
-e INT Column of end chromosomal position. The end column can be the same as the start column. [5]
-S INT Skip first INT lines in the data file. [0]
-c CHAR Skip lines started with character CHAR. [#]
-0 Specify that the position in the data file is 0-based (e.g. UCSC files) rather than 1-based.
-h Print the header/meta lines.
-B The second argument is a BED file. When this option is in use, the input file may not be sorted or indexed. The entire input will be read sequentially. Nonetheless, with this option, the format of the input must be specified correctly on the command line.
-f Force to overwrite the index file if it is present.
-l List the sequence names stored in the index file.
EXAMPLE
(grep ^"#" in.gff; grep -v ^"#" in.gff | sort -k1,1 -k4,4n) | bgzip > sorted.gff.gz;
tabix -p gff sorted.gff.gz;
tabix sorted.gff.gz chr1:10,000,000-20,000,000;
NOTES
It is straightforward to achieve overlap queries using the standard B-tree index (with or without binning) implemented in all SQL databases, or the R-tree index in PostgreSQL and Oracle. But there are still many reasons to use tabix. Firstly, tabix directly works with a lot of widely used TAB-delimited formats such as GFF/GTF and BED. We do not need to design database schema or specialized binary formats. Data do not need to be duplicated in different formats, either. Secondly, tabix works on compressed data files while most SQL databases do not. The GenCode annotation GTF can be compressed down to 4%. Thirdly, tabix is fast. The same indexing algorithm is known to work efficiently for an alignment with a few billion short reads. SQL databases probably cannot easily handle data at this scale. Last but not the least, tabix supports remote data retrieval. One can put the data file and the index at an FTP or HTTP server, and other users or even web services will be able to get a slice without downloading the entire file.
AUTHOR
Tabix was written by Heng Li. The BGZF library was originally implemented by Bob Handsaker and modified by Heng Li for remote file access
and in-memory caching.
SEE ALSO
samtools(1)

tabix-0.2.0                         11 May 2010                         tabix(1)