Shell Programming and Scripting: Cutting fields from lines with multiple spaces
Post #302553145 by 915086731, Tuesday 6 September 2011, 11:55 PM
However, awk cannot print the content that appears between $2 and $3: with the default field splitting, the run of spaces separating the fields is consumed by the split and is not kept in any field.
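A minimal sketch of one way around that (assuming the goal is to recover the original run of spaces sitting between the 2nd and 3rd whitespace-separated fields; "file" is a placeholder):

    awk '{
        rest = $0
        # strip the first two fields and the spacing in front of them
        sub(/^[[:space:]]*[^[:space:]]+[[:space:]]+[^[:space:]]+/, "", rest)
        # "rest" now begins with the separator that preceded $3
        match(rest, /^[[:space:]]+/)
        sep = substr(rest, RSTART, RLENGTH)
        printf "between $2 and $3: \"%s\" (%d characters)\n", sep, length(sep)
    }' file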
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Cutting the top two lines, and also characters below.

Hey all. I have a file that I am trying to cut information out of. We have a script that shows us all of our Radio Scanners that are being used and I'm writing a script that clears all of the context off of the scanners. The script that runs shows us this information below... |emp_id ... (5 Replies)
Discussion started by: jalge2
5 Replies
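A hedged sketch for this one (assuming the aim is simply to drop the first two header lines of the script's output; "scanner_report" and "scanners.clean" are placeholder names):

    scanner_report | tail -n +3 > scanners.clean    # keep everything from line 3 on
    # or, equivalently:
    scanner_report | sed '1,2d' > scanners.clean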

2. UNIX for Dummies Questions & Answers

Need help cutting a couple of fields from log file.

Data begins as such; Mar 16 03:27:05 afzpimdn01 named: denied update from .3983 for "nnn.nnn.in-addr.arpa" IN Mar 16 03:27:37 afzpimdn01 named: denied update from .4844 for "nnn.nnn.in-addr.arpa" IN Mar 16 03:27:56 afzpimdn01 named: denied update from .2650 for "nnn.nnn.in-addr.arpa" IN ... (4 Replies)
Discussion started by: altamaha
4 Replies
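A possible starting point (assuming the fields wanted are the update source after "from" and the zone after "for"; "named.log" is a placeholder):

    awk '/denied update/ {
        for (i = 1; i <= NF; i++) {
            if ($i == "from") src = $(i + 1)
            if ($i == "for")  zone = $(i + 1)
        }
        print src, zone
    }' named.log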

3. Shell Programming and Scripting

cutting fields and doing trim in awk

hi, I have this code: #!/usr/bin/ksh #SERVICES gzcat *SERVICES* | awk ' { SUBSCRIBERNUMBER=substr($0,1,20) SUBSCRIBERNUMBER=trim(SUBSCRIBERNUMBER) SERVICECODE=substr($0,22,61) SERVICECODE=trim(SERVICECODE) STARTDD=substr($0,63,72) STARTDD=trim(STARTDD) STARTDT=substr($0,74,81)... (1 Reply)
Discussion started by: naoseionome
1 Replies
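awk has no built-in trim(), which is probably why that snippet fails. A minimal helper, with the field widths shown only as examples (note that substr()'s third argument is a length, not an end column):

    gzcat *SERVICES* | awk '
    function trim(s) { gsub(/^[[:space:]]+|[[:space:]]+$/, "", s); return s }
    {
        SUBSCRIBERNUMBER = trim(substr($0, 1, 20))
        SERVICECODE      = trim(substr($0, 22, 40))
        print SUBSCRIBERNUMBER, SERVICECODE
    }'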

4. Shell Programming and Scripting

join on a file with multiple lines, fields

I've looked at the join command which is able to perform what I need on two rows with a common field, however if I have more than two rows I need to join all of them. Thus I have one file with multiple rows to be joined on an index number: 1 randomtext1 2 rtext2 2 rtext3 3 rtext4 3 rtext5... (5 Replies)
Discussion started by: crimper
5 Replies
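A sketch of one way to do that in awk instead of join (assuming each line starts with the index number and the goal is one output line per index; "file" is a placeholder):

    awk '{
        key = $1
        $1 = ""                      # drop the key from the stored text
        joined[key] = joined[key] $0
    }
    END { for (k in joined) print k joined[k] }' file | sort -n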

5. Shell Programming and Scripting

cutting lines

Dear All, is there a way to cut the lines that have already been "head"-ed? Here is what I'm trying to do, please advise: there is a file named dummy.txt, and I am trying to head this file 4 times in a loop, each time with different values, e.g. in the first instance it will... (7 Replies)
Discussion started by: jojo123
7 Replies
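A guess at the intent (take N lines off the top each pass, then remove them from dummy.txt so the next pass starts where this one stopped; "batch.txt" is a placeholder):

    N=4
    head -n "$N" dummy.txt > batch.txt                          # the lines just consumed
    tail -n +"$((N + 1))" dummy.txt > tmp && mv tmp dummy.txt   # drop them from the file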

6. AIX

missing blank spaces while cutting the file to individual files

$ indicates blank space file1.txt: 001_AHaris$$$$$020$$$$$$$$$ 001_ATony$$$$$$030$$$$$$$$$ 002_AChris$$$$$090$$$$$$$$$ 002_ASmit$$$$$$060$$$$$$$$$ 003_AJhon$$$$$$001$$$$$$$$$ $ indicates blank space code while read "LINE"; do echo "$LINE" | cut -c6- >> $(echo "$LINE" | cut... (1 Reply)
Discussion started by: techmoris
1 Replies
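The missing blanks are most likely stripped by read itself: without IFS= it trims leading and trailing whitespace, and without -r it mangles backslashes. A sketch of the loop with those pitfalls avoided (the output redirection is left as a placeholder because the original command is cut off above):

    while IFS= read -r LINE; do
        printf '%s\n' "$LINE" | cut -c6- >> outfile    # "outfile" is a placeholder
    done < file1.txt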

7. Shell Programming and Scripting

bash script too many fields wraps to multiple lines

Hello. I'm trying to write a script to take a 5 field file, do some math, and extend it to 9 fields. Problem is, the script keeps wrapping it to two lines, even though 9 fields, tab separated (even comma separated) doesn't fill the screen. Even if it did, I'm eventually copying it to an excel ... (2 Replies)
Discussion started by: JoeNess
2 Replies
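If the nine fields really end up in one record, the "wrap" is usually just the terminal folding a long line for display; cat -A (or od -c) shows where the real newlines are. A sketch of emitting one tab-separated record per input line (the arithmetic is only a placeholder for whatever the script computes; "infile"/"outfile" are placeholders):

    awk 'BEGIN { OFS = "\t" }
         { print $1, $2, $3, $4, $5, $3 + $4, $3 * $5, $4 + $5, $1 + $2 }' infile > outfile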

8. Shell Programming and Scripting

replacing a quote in some lines with multiple quote fields

I want to replace mistaken quotes in lines starting with tag 300 and relocate the quote to the correct position. The input is: 223;25 224;20100428064823;1;0;0;0;0;0;0;0;8;1;3;9697;18744;;;;;;;;;;;; 300;X;Event:... (3 Replies)
Discussion started by: wradwan
3 Replies
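Without the untruncated line it is hard to say exactly where the quote should move, but the general shape is a sed substitution restricted to records whose first field is 300 (the pattern and replacement below are placeholders):

    sed '/^300;/ s/OLD_PATTERN/NEW_TEXT/' infile > outfile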

9. UNIX for Dummies Questions & Answers

cutting multiple columns into multiple files

Hypothetically, suppose that file1 id v1 v2 v3 v4 v5 v6 v7..........v100 1 1 1 1 1 1 2 2 .....50 2 1 1 1 1 1 2 2 .....50 3 1 1 1 1 1 2 2 .....50 4 1 1 1 1 1 2 2 .....50 5 1 1 1 1 1 2 2 .....50 I want to write a loop such that I take the id# and the first 5 columns (v1-v5) into the... (3 Replies)
Discussion started by: johnkim0806
3 Replies
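A sketch for the loop (assuming single-space-separated columns, with the id in column 1 and v1-v100 in columns 2-101, and one output file per block of five value columns; output names are placeholders):

    for start in $(seq 2 5 97); do
        end=$((start + 4))
        cut -d' ' -f1,"$start-$end" file1 > "file1_cols_${start}-${end}"
    done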

10. Shell Programming and Scripting

Awk: Combine multiple lines based on number of fields

If a file has the following kind of data, comma delimited: 1,2,3,4 1 1 1,2,3,4 1,2 2 2,3,4 My required output must have only 4 comma-delimited columns: 1,2,3,4 111,2,3,4 1,222,3,4 I have tried many awk commands using ORS="" but couldn't progress. (10 Replies)
Discussion started by: mdkm
10 Replies
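One way to approach that in awk is to keep appending lines to a buffer until it holds four comma-separated fields, then print and reset (a sketch against the sample above; "file" is a placeholder):

    awk '{
        buf = buf $0
        if (split(buf, parts, ",") == 4) { print buf; buf = "" }
    }' file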
bup-margin(1)                      General Commands Manual                      bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if you're getting dangerously close to 160 bits.
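       A rough sanity check of those figures, assuming the hashes behave as uniformly random bit strings: the longest prefix shared by any two of n such values is about 2*log2(n) bits (the usual birthday-bound estimate).

           $ awk 'BEGIN { n = 11000000; printf "%.1f\n", 2 * log(n) / log(2) }'
           46.8

       That is close to the 45 bits reported for the 11-million-object example, and doubling n adds about 2 more bits, matching the observed 1-2 bits per doubling.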
OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer from the guess. This is potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                                    bup-margin(1)