Filtering based on column values - Post 302998250 by RudiC on Sunday 28th of May 2017 04:02:24 AM
You MUST be kidding! Incredible! That file does NOT contain any SF value other than 0 or 1, as this dump of the relevant fields shows:


Code:
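# split each line on "," ";" or "="; per the output below, $5 is the SF tag, $6 its value, $7 the next token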
awk '{print $5, $6, $7}' FS="[,;=]" file
  
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 0 VRT
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 1 VRT
SF 0 1
SF 0 VRT
SF 1 VRT
SF 0 1
SF 1 VRT
SF 0 1
SF 0 1
SF 0 1
SF 1 VRT
SF 1 VRT
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 1 VRT
SF 0 1
SF 1 VRT
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 1 VRT
SF 0 1
SF 0 1
SF 0 1
SF 1 VRT
SF 0 1
SF 0 VRT
SF 0 1
SF 0 VRT
SF 0 VRT
SF 0 1
SF 0 1
SF 0 1
SF 0 1
SF 1 VRT

No surprise that every single proposal gave you empty output! Unbelievable!
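If in doubt, a quick tally of the distinct SF values settles it (a minimal sketch, assuming the SF value still lands in field 6 when the lines are split on the same separators as above):

Code:
awk -F'[,;=]' '{cnt[$6]++} END {for (v in cnt) print v, cnt[v]}' file

If that prints only 0 and 1, then any filter testing $6 against some other expected value, e.g. awk -F'[,;=]' '$6 == 2' file, is bound to select nothing, which matches the empty results you got.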
 
