Top Forums > Shell Programming and Scripting > Getting rid of multiple rows based on column1
Post 302381322 by ichigo on Thursday 17th of December 2009 09:48:48 PM
Quote:
Originally Posted by scottn
-w is a non-standard option of the uniq command.
Code:
wget -O- -q https://www.unix.com/misc.php?do=cfrules | gawk 'BEGIN{IGNORECASE=1}!/non-standard/{print "ok";exit}'
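Since uniq -w is non-standard, the portable way to drop duplicate rows based on column 1 (the topic of this thread) is a short awk filter. This is only a minimal sketch, assuming whitespace-separated input and that the first row seen for each key should be kept; file is a placeholder name.
Code:
# keep only the first row seen for each value of column 1
awk '!seen[$1]++' file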

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

extract multiple columns from multiple files; skip rows and include filenames; awk

Hello, I am trying to write a bash shell script that does the following: 1. Finds all *.txt files within my directory of interest 2. Reads each of the files (25 files) one by one (tab-delimited format, all with the same data format) 3. Skips the first 10 rows of the file 4. Extracts and... (4 Replies)
Discussion started by: manishabh
4 Replies
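A minimal sketch for the task described above, assuming tab-delimited *.txt files in the current directory and that, for example, columns 2 and 5 are the ones wanted (the excerpt is cut off before it names the columns); extracted.txt is a placeholder output name.
Code:
# skip the first 10 rows of every *.txt file, then print the source
# filename plus (for example) columns 2 and 5, tab-separated
awk -F'\t' -v OFS='\t' 'FNR > 10 { print FILENAME, $2, $5 }' ./*.txt > extracted.txt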

2. UNIX for Dummies Questions & Answers

Converting rows into multiple-rows

Hi everyone; I have a file with 22 rows and 13 columns which include floating numbers. I want to parse the file so that every five columns in a row become a new record (row). For example, the first line in the old file should be converted into three lines, with the first two lines containing 5... (6 Replies)
Discussion started by: PHL
6 Replies
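A minimal sketch for the conversion described above, assuming whitespace-separated columns; with 13 columns per row it emits records of 5, 5 and 3 fields. infile is a placeholder name.
Code:
# start a new output record after every 5th field; leftover fields form the last record
awk '{ for (i = 1; i <= NF; i++) printf "%s%s", $i, ((i % 5 == 0 || i == NF) ? "\n" : " ") }' infile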

3. Shell Programming and Scripting

How to fetch rows based on line numbers or based on the beginning of a word?

I have a file which will have rows like shown below, ST*820*316054716 RMR*IV*11333331009*PO*40.31 REF*IV*22234441009*xsss471-2762 DTM*003*091016 ENT*000006 RMR*IV*2222234444*PO*239.91 REF*IV*1234445451009*LJhjlkhkj471-2762 DTM*003*091016 RMR*IV*2223344441009*PO*40.31... (18 Replies)
Discussion started by: Muthuraj K
18 Replies
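Two hedged one-liners for the selection described above; edifile is a placeholder name and the row range and RMR segment name are just examples taken from the excerpt.
Code:
# fetch rows 2 through 4, selected purely by line number
sed -n '2,4p' edifile

# fetch rows that begin with a given word (here the RMR* segments)
grep '^RMR\*' edifile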

4. Shell Programming and Scripting

Split single rows to multiple rows ..

Hi, please help me sort out this problem: rm PAB113_011.out rm: PAB113_011.out: override protection 644 (yes/no)? n If I give y it removes the file. But I added the rm command as part of a ksh file and tried to remove the file. It's not removing, and the file is prompting as... (7 Replies)
Discussion started by: sri_aue
7 Replies
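The "override protection 644" prompt appears because the file is not writable by the invoking user; inside a non-interactive ksh script the usual fix is to force the removal. A minimal sketch:
Code:
# remove the file without the interactive override-protection prompt
rm -f PAB113_011.out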

5. Shell Programming and Scripting

Combining multiple rows in single row based on certain condition using awk or sed

Hi, I'm using AIX(ksh shell). > cat temp.txt "a","b",0 "c",bc",0 "a1","b1",0 "cc","cb",1 "cc","b2",1 "bb","bc",2 I want the output as: "a","b","c","bc","a1","b1" "cc","cb","cc","b2" "bb","bc" I want to combine multiple lines into single line where third column is same. Is... (1 Reply)
Discussion started by: samuelray
1 Replies
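A minimal sketch for the merge described above, assuming (as in the sample) that rows sharing the same third comma-separated field are adjacent in the file; temp.txt matches the name used in the excerpt.
Code:
# combine consecutive rows whose third field matches, printing each
# group's first two fields joined on a single line
awk -F',' '
    $3 != prev { if (NR > 1) print out; out = $1 "," $2; prev = $3; next }
               { out = out "," $1 "," $2 }
    END        { if (NR) print out }
' temp.txt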

6. UNIX for Dummies Questions & Answers

merging rows into new file based on rows and first column

I have 2 files, file01 = 7 columns, rows unknown (but few), file02 = 7 columns, rows unknown (but many). Now I want to create an output with the first field that is shared in both of them and then subtract the results from the rest of the fields and print them, e.g. file01 James|0|50|25|10|50|30... (1 Reply)
Discussion started by: A-V
1 Replies
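A minimal sketch for the two-file merge described above, assuming pipe-delimited input, that the first field is the join key, and that file02's fields are subtracted from file01's (the excerpt is cut off before it settles the direction).
Code:
# load file01 into memory keyed on column 1, then for every file02 row whose
# key also appears in file01, print the key and the field-by-field difference
awk -F'|' -v OFS='|' '
    NR == FNR { for (i = 2; i <= NF; i++) a[$1, i] = $i; seen[$1] = 1; next }
    seen[$1]  { out = $1
                for (i = 2; i <= NF; i++) out = out OFS (a[$1, i] - $i)
                print out }
' file01 file02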

7. Shell Programming and Scripting

Create Multiple UNIX Files for Multiple SQL Rows output

Dear All, I am trying to write a Unix script which fires a SQL query. The output of the SQL query gives multiple rows. Each row should be saved in a separate Unix file. The number of rows of SQL output can be variable. I am able to save all the rows in one file but not in separate files. Any... (14 Replies)
Discussion started by: Rahul_Bhasin
14 Replies
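A minimal sketch for splitting query output into one file per row; the sqlplus invocation and the row_%04d.txt naming scheme are placeholders, not details taken from the discussion.
Code:
# write each non-blank output row to its own numbered file
sqlplus -s user/pass @query.sql |
awk 'NF { f = sprintf("row_%04d.txt", ++n); print $0 > f; close(f) }'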

8. Shell Programming and Scripting

awk to ignore multiple rows based on a condition

All, I have a text file (Inputfile.csv) with millions of rows and 100 columns. Check the sample for 2 columns below. Key,Check A,1 A,2 A, A,4 B,0 B,1 B,2 B,3 B,4 .... million rows. My requirement is to delete all the rows corresponding to any key that ever has at least one... (4 Replies)
Discussion started by: ks_reddy
4 Replies
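A minimal two-pass sketch for the filter described above, assuming (from the sample) that the disqualifying condition is an empty second column; the file is read twice, which avoids holding millions of rows in memory.
Code:
# pass 1: remember every key that has at least one empty Check value
# pass 2: print only rows whose key was never flagged
awk -F',' 'NR == FNR { if ($2 == "") bad[$1] = 1; next } !bad[$1]' Inputfile.csv Inputfile.csv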

9. Shell Programming and Scripting

Combine multiple rows based on selected column keys

Hello I want to collapse a file with multiple rows into consolidated lines of entries based on selected columns as the 'key'. Example: 1 2 3 Abc def ghi 1 2 3 jkl mno p qrts 6 9 0 mno def Abc 7 8 4 Abc mno mno abc 7 8 9 mno mno abc 7 8 9 mno j k So if columns 1, 2 and 3 are... (6 Replies)
Discussion started by: linuxlearner123
6 Replies
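A minimal sketch for the collapse described above, assuming whitespace-separated columns, columns 1-3 as the key, and that the remaining fields of every matching row are appended in input order; infile is a placeholder name.
Code:
# group rows by columns 1-3 and append the trailing fields of each match;
# keys are printed in first-seen order
awk '{
    key = $1 FS $2 FS $3
    if (!(key in rest)) { order[++n] = key; rest[key] = "" }
    for (i = 4; i <= NF; i++) rest[key] = rest[key] " " $i
}
END { for (j = 1; j <= n; j++) print order[j] rest[order[j]] }' infile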

10. Shell Programming and Scripting

Splitting single row into multiple rows based on every 10 digits of last field of the row

Hi all, we have a requirement: in a file, I have multiple rows. Example below: Input file rows 01,1,102319,0,0,70,26,U,1,331,000000113200000011920000001212 01,1,102319,0,1,80,20,U,1,241,00000059420000006021 My output file should be as mentioned below. The last field should split for... (4 Replies)
Discussion started by: kotra
4 Replies
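A minimal sketch for the split described above, assuming (since the excerpt is truncated) that the leading fields are repeated once for every 10-digit slice of the last field; inputfile is a placeholder name.
Code:
# repeat fields 1..NF-1 once per 10-digit slice of the last field
awk -F',' -v OFS=',' '{
    prefix = $1
    for (i = 2; i < NF; i++) prefix = prefix OFS $i
    for (p = 1; p <= length($NF); p += 10) print prefix, substr($NF, p, 10)
}' inputfile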
bup-margin(1)                         General Commands Manual                         bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest
       number of prefix bits shared between any two entries. This number, n, identifies the longest
       subset of SHA-1 you could use and still encounter a collision between your object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB), and
       bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all collisions
       among that set of objects; each object in that repository could be uniquely identified by
       its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the number
       of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin. Of course,
       because SHA-1 hashes are essentially random, it's theoretically possible to use many more
       bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository
       by running bup margin occasionally to see if you're getting dangerously close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and
              report the maximum deviation of the correct answer from the guess. This is
              potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used
              with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible
       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we
       would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.
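The "1 or 2 bits per doubling" observation matches the birthday bound: collisions among N random hashes are expected once the hash width drops to about 2*log2(N) bits, so doubling N costs roughly 2 more bits. A back-of-the-envelope check for the 11-million-object example (not part of the man page):
Code:
# birthday-bound estimate of the prefix bits needed for ~11 million objects
awk 'BEGIN { n = 11000000; printf "%.1f bits\n", 2 * log(n) / log(2) }'
This prints about 46.8, in line with the 45-46 bits that bup margin reported for that repository.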