UNIX for Dummies Questions & Answers: How to generate multiple lines in a text file?
Post 302480807 by vic005 on Thursday 16th of December 2010, 01:51:01 AM
Thanks for the code, but the variable A I need has to go above 1000, and typing 1000 numbers into the script is not practical.
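
A minimal sketch of one way around that, assuming the goal is simply to write the numbers 1 through A into a file, one per line (the variable name A and the file name numbers.txt are just examples):

# A can be 1000, 10000, or anything else; nothing is typed by hand.
A=1000
seq 1 "$A" > numbers.txt

# Portable alternative without seq, using awk:
awk -v n="$A" 'BEGIN { for (i = 1; i <= n; i++) print i }' > numbers.txt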
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

removing multiple lines of text in a file

Hi, I'm trying to remove multiple lines of text based on a series of different words and output the result to a new file. The document contains a ton of data, but I want to delete any line that contains either of the following: mx1.rr.biz.com or ns2.ri.biz.com. I tried using grep -v filename "mx1.rr.biz.com" >... (3 Replies)
Discussion started by: spartan22
3 Replies
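
For a question like this one, a rough sketch (file names are placeholders): pass grep several -e patterns, or keep the patterns in a file, and write to a new file rather than back onto the input:

# Drop any line containing either hostname (-F treats them as fixed strings):
grep -v -F -e 'mx1.rr.biz.com' -e 'ns2.ri.biz.com' input.txt > cleaned.txt

# Or keep one pattern per line in patterns.txt:
grep -v -F -f patterns.txt input.txt > cleaned.txt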

2. Shell Programming and Scripting

[bash help]Adding multiple lines of text into a specific spot into a text file

I am attempting to insert multiple lines of text into a specific place in a text file, based on the lines above or below it. For example, here is a portion of a zone file. IN NS ns1.domain.tld. IN NS ns2.domain.tld. IN ... (2 Replies)
Discussion started by: cdn_humbucker
2 Replies
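
One hedged sketch for that kind of insertion, assuming the new records (ns3/ns4 below are placeholders) should go right after an existing "IN NS" line, using awk so the inserted text keeps its leading whitespace:

# Print every line; after the line matching ns2, also print the new records:
awk '{ print }
     /IN NS ns2\.domain\.tld\./ {
         print "        IN NS ns3.domain.tld."
         print "        IN NS ns4.domain.tld."
     }' zone.db > zone.db.new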

3. UNIX for Dummies Questions & Answers

Help please, extract multiple lines from a text file

Hi all, I need to extract the lines between the lines 'RD' and 'QA' from a text file (following). There is more than one such block in the file and I need to extract all of them. However, the number of lines between them varies throughout the file, so I cannot just use the 'grep -A' command.... (6 Replies)
Discussion started by: johnshembb
6 Replies
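
A sketch of the usual range-pattern approach, assuming 'RD' and 'QA' sit at the start of their marker lines; it handles any number of blocks of any length:

# Print every block from a line matching RD through the next line matching QA:
awk '/^RD/,/^QA/' data.txt

# Same idea with sed:
sed -n '/^RD/,/^QA/p' data.txt

# The RD/QA marker lines themselves are included; filter them out afterwards if they are not wanted.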

4. UNIX for Dummies Questions & Answers

JOINING MULTIPLE LINES IN A TEXT FILE USING GAWK

Sir... I have a customer master data file containing some important fields as a set, one line after another. What I want is to have one set of these fields (rows) joined one after another on a single line, then the second set, and so on, until the last set is completed. I WANT THE DATA... (0 Replies)
Discussion started by: KANNI786
0 Replies
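
Without seeing the file this is a guess, but assuming each customer record is a fixed number of consecutive lines (say 4), a gawk sketch that glues each set onto one line:

# Join every 4 input lines into one output line, separated by commas:
gawk -v n=4 '{ printf "%s%s", $0, (NR % n == 0 ? "\n" : ",") }
             END { if (NR % n != 0) print "" }' customer.dat > joined.dat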

5. UNIX for Dummies Questions & Answers

print multiple lines from text file based on pattern list

I have a text file with a list of items/patterns: ConsensusfromCGX_alldays_trimmedcollapsedfilteredreadscontiglist(229095contigs)contig12238 ConsensusfromCGX_alldays_trimmedcollapsedfilteredreadscontiglist(229095contigs)contig34624... (1 Reply)
Discussion started by: Oyster
1 Replies

6. Shell Programming and Scripting

Extracting Multiple Lines from a Text File

Hello. I am sorry if this is a common question but through all my searching, I haven't found an answer which matches what I want to do. I am looking for a sed command that will parse through a large text file and extract lines that start with specific words (which are repeated throughout the... (4 Replies)
Discussion started by: MrDumbQuestion
4 Replies
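
A sketch of what that sed command might look like, with WORD1 and WORD2 standing in for the actual keywords:

# Print only lines that begin with WORD1 or WORD2:
sed -n -e '/^WORD1/p' -e '/^WORD2/p' bigfile.txt > extracted.txt

# Equivalent with grep:
grep -E '^(WORD1|WORD2)' bigfile.txt > extracted.txt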

7. Shell Programming and Scripting

Process multiple lines in a text file

Hi All, I have a text file like this: a=21ej c=3tiu32 e=hydkehw f=hgdiuw g=jhdkj a=klkjhvl b=dlkjhyfd a=yo c=8732 Is there any way I can process the data from the first a to just before the second a, and then from the second a to just before the third one? Just fetching records like that will help, I mean... (3 Replies)
Discussion started by: sandipjee
3 Replies
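
One way to fetch those records, sketched with awk and assuming every record begins at a line starting with "a=":

# Print a separator before each new record so the groups are easy to see or post-process:
awk '/^a=/ && NR > 1 { print "-----" } { print }' data.txt

# Or write each record to its own file (rec_1, rec_2, ...), assuming the first line starts with "a=":
awk '/^a=/ { n++ } { print > ("rec_" n) }' data.txt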

8. UNIX for Dummies Questions & Answers

How to grep multiple lines from a text file using another text file?

I would like to use grep to select multiple lines from a text file using a single-column text file. Basically, I only want to select lines from the first text file where the second column of the first text file matches an entry in the second text file. How do I go about doing that? Thanks! (5 Replies)
Discussion started by: evelibertine
5 Replies
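
grep by itself is awkward for column-specific matching; a sketch with awk instead, assuming whitespace-separated columns and one key per line in the second file (file names are placeholders):

# Load the keys from file2, then print lines of file1 whose 2nd column is one of them:
awk 'NR == FNR { keys[$1]; next } $2 in keys' file2 file1 > matched.txt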

9. Shell Programming and Scripting

Remove multiple lines from a text file

Hi I have a text file named main.txt with 10,000 lines. I have another file with a list of line numbers (around 1000) of the lines to be deleted from main.txt file. I tried with sed but it removes only a range of line numbers. Thanks for any help!! (1 Reply)
Discussion started by: prvnrk
1 Replies

10. Shell Programming and Scripting

Join multiple lines from text file

Hi Guys, Could you please advise how to join multiple detail lines into a single row, with HEADER 1 as the record separator and a comma (,) as the field separator. Input: HEADER 1, HEADER 2, HEADER 3, 11,22,33, COLUMN1,COLUMN2,COLUMN3, AA1, BB1, CC1, END: ABC HEADER 1, HEADER 2,... (3 Replies)
Discussion started by: budz26
3 Replies
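
Without the expected output this is a guess, but a sketch that starts a new row at each "HEADER 1" line and strings the following lines together (the lines already end in commas, so plain concatenation keeps the comma as the field separator):

# Start a fresh output row at each record header; otherwise append to the current row:
awk '/^HEADER 1/ { if (row != "") print row; row = $0; next }
     { row = row $0 }
     END { if (row != "") print row }' input.txt > joined.txt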
bup-margin(1)						      General Commands Manual						     bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if you're getting dangerously close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer from the guess. This is potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                                                       bup-margin(1)