Full Discussion: get 3rd column of nth line
Shell Programming and Scripting: Post 302481969 by gc_sw on Monday 20th of December 2010, 09:58:23 AM

OK, this is my whole data:
Code:
Log start: 101220-171509 - 192.168.1.1 - gc_sw.log

PORTAREA> list maxPorts

101220-17:15:19 10.14.3.49 7.1j  stopfile=/tmp/21632
=================================================================================================================
PORT                                                      Attribute         Value
=================================================================================================================
RbsLocalCell=S2C1                                       maxPortIP  4
RbsLocalCell=S3C1                                       maxPortIP  4
RbsLocalCell=S1C1                                       maxPortIP  4
=================================================================================================================
Total: 3 results

There is a === line before the 8th line, but neither nawk '/\===/{exit} NR>8{print $3}' file nor nawk 'NR>8{print $3} /\===/{exit}' file gave any result.
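
That is exactly why both attempts print nothing: the /===/{exit} rule fires on the first === line, which comes before line 8, so awk exits before NR>8 is ever true. Below is a sketch of a couple of alternatives, assuming the goal is the third field (the 4s) of the RbsLocalCell rows and that the input file is called file, as in the attempts above:

Code:
# Only exit on a === separator that appears after line 8, then print the 3rd field of the data rows
nawk 'NR>8 && /^=+/{exit} NR>8{print $3}' file

# Or key off the content instead of line numbers: print the 3rd field of every maxPortIP row
nawk '$2=="maxPortIP"{print $3}' file

# For the literal "3rd column of the nth line" in the thread title, e.g. line 9:
nawk 'NR==9{print $3}' file

The first one stops at the closing === separator, and the second simply ignores every line whose second field is not maxPortIP; both print 4 for each of the three RbsLocalCell rows.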
 
