Splitting file using awk

I have a file with the following content:
Code:
FG1620000|20000
FG1623000|23000
FG1625000|25000
FG1643894|43894
FG1643895|43895
FG1643896|43896
FG1643897|43897
FG1643898|43898

My aim is to split the above file into two files based on the value in the second field: if the value in the second field is greater than 30000, the line should be written to "high.txt"; if it is less than or equal to 30000, it should be written to "low.txt".

"high.txt"
Code:
FG1643894|43894
FG1643895|43895
FG1643896|43896
FG1643897|43897
FG1643898|43898

"low.txt"
Code:
FG1620000|20000
FG1623000|23000
FG1625000|25000

My code is
Code:
cat file| awk -F'|' '{ if ($1 <= "20000") {print $1 >> "high.txt"} else {print $1 >> "low.txt"} }'

When I run it, however, the entire content of the file ends up in "low.txt".

Where am I going wrong?
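The problem appears to be twofold: the comparison uses $1 (the first field, e.g. "FG1620000") where it should use $2, and quoting "20000" forces awk to compare strings rather than numbers. Since "F" sorts after "2" lexically, "FG1620000" <= "20000" is always false, so every line falls into the else branch and lands in "low.txt". The threshold should also be 30000, not 20000, per the description above. A minimal corrected sketch, printing the whole line as in the expected output:

Code:
awk -F'|' '
    $2 + 0 <= 30000 { print > "low.txt"; next }  # second field at or below threshold
                    { print > "high.txt" }       # everything else
' file

The "+ 0" nudges awk to treat the field numerically. Inside awk, ">" opens each output file once and keeps appending for the rest of the run, so ">>" (which would also preserve output from earlier runs) is unnecessary, and the leading "cat" pipeline can be dropped since awk reads the file directly.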
 
