Shell Programming and Scripting: how to find the third (nth) word in every line of a file
Post 302570000 by radoulov, 11-02-2011, 08:09 AM
Is this homework? Please read the rules!
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Can a shell script pull the first word (or nth word) off each line of a text file?

Greetings. I am struggling with a shell script to make my life simpler, and there are a number of practical ways it could be used. I want to take a standard text file and pull the nth word from each line, such as the first word of every line. I'm struggling to see how each line can be... (5 Replies)
Discussion started by: tricky
5 Replies
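A minimal sketch for this task (untested, assuming whitespace-separated words and a hypothetical file name file.txt):

  awk -v n=3 '{ print $n }' file.txt   # prints the 3rd whitespace-separated word of every line; set n to the position you want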

2. Shell Programming and Scripting

How to find and print the last word of each line from a text file

Can anyone help in finding the last word of each line of a text file and printing it? e.g.: 1st --> aaa bbbb cccc dddd eeee ffff ee 2nd --> aab ered er fdf ere ww ww f The o/p should be as below: ee f (1 Reply)
Discussion started by: naveen_sangam
1 Replies
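A one-line sketch for this one, assuming whitespace-separated words (file.txt is a placeholder name):

  awk '{ print $NF }' file.txt   # NF is the number of fields on the line, so $NF is its last word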

3. Shell Programming and Scripting

Read last word of nth line

Hi people; I want to read the last word of the 14th line of my file1.txt. Here is the EXACT 14th line of the file: 250 SectorPortnum=3,AuxPortInUngo=2,PortDeviceGroup=1,PortDeviceSet=1,PorDevice=1 20 >>> Set. I have to get the word Set. How can I call it, and also how... (3 Replies)
Discussion started by: gc_sw
3 Replies
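A minimal sketch, assuming the 14th line looks as quoted and that the last whitespace-separated field is the word wanted:

  awk 'NR == 14 { print $NF; exit }' file1.txt   # print the last field of line 14, then stop reading the file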

4. Shell Programming and Scripting

Find the nth occurrence of a string in a file and print the line number

Hi, I have a requirement to find the nth occurrence in a file and capture data from within lines (between lines). Data in file: <QUOTE> <SESSION> <ATTRIBUTE NAME='Parameter Filename' VALUE='file1.parm'/> <ATTRIBUTE NAME='Service Name' VALUE='None'/> </SESSION> <SESSION> <ATTRIBUTE... (6 Replies)
Discussion started by: tmalik79
6 Replies
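A sketch of the line-number part only (the pattern <SESSION>, the count n=2, and the file name file.xml are assumptions for illustration):

  awk -v n=2 '/<SESSION>/ { if (++c == n) { print NR; exit } }' file.xml   # line number of the nth match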

5. Shell Programming and Scripting

Using AWK to find top Nth values in Nth column

I have an awk script to find the maximum value in the 2nd column of a 2-column datafile, but I need to find the top 5 values of the 2nd column. Here is the script that works for the maximum value: awk 'BEGIN { subjectmax=$1 ; max=0} $2 >= max {subjectmax=$1 ; max=$2} END {print... (3 Replies)
Discussion started by: ncwxpanther
3 Replies
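One way to get the top 5 rows by the 2nd column without awk, assuming whitespace-separated numeric data (the name datafile is a placeholder):

  sort -k2,2 -rn datafile | head -5   # sort numerically on column 2, descending, keep the 5 largest rows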

6. Shell Programming and Scripting

How to read the nth word from a delimited text file?

Hi, I am new to scripting. How can I get 2 elements from the first line of a delimited txt file in a shell script? AA~101010~0~AB~8000~ABC0~ BB~101011~0~BC~8000~ABC~ CC~101012~0~CD~8000~ABC0~ DD~101013~0~AB~8000~ABC~ AA~101014~0~BC~8000~ABC0~ CC~101015~0~CD~8000~ABC~ Can anyone please help?... (3 Replies)
Discussion started by: sushine11
3 Replies
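A sketch using the ~ delimiter shown above; which two elements are wanted is not stated, so fields 1 and 4 below are placeholders (as is the file name):

  awk -F'~' 'NR == 1 { print $1, $4; exit }' file.txt   # fields 1 and 4 of the first line only; adjust the field numbers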

7. Shell Programming and Scripting

Get the nth word of mth line in a file

Hi.. Maybe a simple question, but I just began writing Unix scripts a week ago for sorting a huge amount of experiment data, so I have little sense of Unix scripting and really need your help... The situation is, I want to read the nth word of the mth line in a file, and then store it... (3 Replies)
Discussion started by: freezelty
3 Replies
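A minimal sketch that reads the nth word of the mth line into a shell variable (m=5, n=3, and file.txt are example values, not from the post):

  word=$(awk -v m=5 -v n=3 'NR == m { print $n; exit }' file.txt)   # 3rd word of the 5th line
  echo "$word"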

8. Shell Programming and Scripting

Find lines with non-null values at the nth place in a pipe-delimited file

Hi, I am trying to find the lines in a pipe-delimited file where the 11th column has non-null values. Any help is appreciated. Need help ASAP please. Thanks in advance. (3 Replies)
Discussion started by: manikms
3 Replies
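A sketch for this, assuming "null" means an empty 11th field (file.txt is a placeholder):

  awk -F'|' '$11 != ""' file.txt   # print only the lines whose 11th pipe-delimited field is non-empty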

9. Shell Programming and Scripting

How to grep nth word in line?

My input file content is like this: GEFITINIB 403 14 -4.786873 -4.786873 -1.990111 0.000000 0.000000 -1.146266 -39.955912 483 VANDETANIB 404 21 -4.754243 -4.754243 -2.554131 -0.090303 0.000000 -0.244210 -41.615502 193 VANDETANIB 405 21 -4.737541 -4.737541 -2.670195 -0.006006 0.000000 -0.285579... (4 Replies)
Discussion started by: chandu87
4 Replies
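grep itself has no notion of word position, so awk is the usual tool here; a sketch with assumed column numbers and an assumed input.txt name:

  awk '{ print $5 }' input.txt        # print the 5th whitespace-separated word of each line (5 is only an example)
  awk '$1 == "VANDETANIB"' input.txt  # or: keep only the lines whose 1st word matches a given value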

10. Shell Programming and Scripting

Find a word in a file and output the line(s) in which the word occurs / the number of times it occurred

I have a file, file.txt, which contains the following data: This is a file, my name is Karl, what is this process, karl is karl junior, file is a test file, file's name is file.txt My name is not Karl, my name is Karl Joey What is your name? Do you know your name and... (3 Replies)
Discussion started by: anuragpgtgerman
3 Replies
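A grep-based sketch using "Karl" from the sample text as the example word (add -i to ignore case, since the sample mixes Karl and karl):

  grep -n 'Karl' file.txt           # print each matching line prefixed with its line number
  grep -c 'Karl' file.txt           # count how many lines contain the word
  grep -o 'Karl' file.txt | wc -l   # count every occurrence, even repeats within one line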
bup-margin(1)                      General Commands Manual                      bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest
       number of prefix bits shared between any two entries. This number, n, identifies the
       longest subset of SHA-1 you could use and still encounter a collision between your
       object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB),
       and bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all
       collisions among that set of objects; each object in that repository could be uniquely
       identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the
       number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin.
       Of course, because SHA-1 hashes are essentially random, it's theoretically possible to
       use many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your
       repository by running bup margin occasionally to see if you're getting dangerously
       close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear,
              and report the maximum deviation of the correct answer from the guess. This is
              potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when
              used with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository,
       and we would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                                    bup-margin(1)