Shell Programming and Scripting: Append specific lines to a previous line based on sequential search criteria. Post 302346066 by summer_cherry on Friday 21st of August 2009, 01:49:39 AM
Code:
sed -n '/^[0-9]\{9\}/{
        1{h;}
        1!{
                ${x;s/\n//g;p;x;p;}
                $!{x;s/\n//g;p;}
        }
}
/^[0-9]\{9\}/!{
        H
}'
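
No explanation came with the code, so briefly: the requirement, as I read it, is that a line beginning with a nine-digit number starts a record, and every following line that does not begin with nine digits should be appended to it. The sed collects those continuation lines in the hold space and flushes the joined record when the next nine-digit line (or the last line) arrives; note that if the input ends with a continuation line rather than a nine-digit line, the final record is never printed. For comparison, a minimal awk sketch of the same idea (assuming an awk that supports the {9} interval, with infile as a placeholder name); unlike the sed, its END block also flushes a record that runs to end-of-file:

Code:
awk '/^[0-9]{9}/ { if (rec != "") print rec; rec = $0; next }
                 { rec = rec $0 }
     END         { if (rec != "") print rec }' infile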

 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Select records based on search criteria on first column

Hi All, I need to select only those records having a non-zero value in the first column of a comma-delimited file. Suppose my input file has data like: "0","01/08/2005 07:11:15",1,1,"Created",,"01/08/2005" "0","01/08/2005 07:12:40",1,1,"Created",,"01/08/2005"... (2 Replies)
Discussion started by: shashi_kiran_v
2 Replies
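
For the record-selection question above, a minimal awk sketch, assuming the first field is always quoted exactly as in the sample (so a zero row literally begins with "0"; input.csv is a placeholder name):

Code:
# keep only rows whose first comma-separated field is not "0"
awk -F, '$1 != "\"0\""' input.csv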

2. Shell Programming and Scripting

How to use sed to search for string and print previous two lines and current line

Hello, Can anybody help me correct my sed syntax to find the string and print the previous two lines, the current line and the next line? I am using the string "testing": netstat -v | sed -n -e '/test/{x;2!p;g;$!N;p;D;}' -e h I am able to get the previous line, current line and next line but... (1 Reply)
Discussion started by: nmadhuhb
1 Replies
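
Where GNU (or BSD) grep is available, its context options are a much simpler route to two lines before, the match, and one line after; a sketch using the string from the post:

Code:
netstat -v | grep -B2 -A1 'testing'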

3. Shell Programming and Scripting

Delete new lines based on search criteria

Hi all! A bit of background: I am trying to create a script that formats SQL statements. I have gotten as far as adding new lines based on certain match criteria like commas, keywords etc. In the process, I end up adding newlines where I don't want them. For example: substr(colName, 1, 10)... (3 Replies)
Discussion started by: jayarkay
3 Replies
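
One way to avoid splitting inside function calls like substr(...) is to track parenthesis depth and break only after commas at depth zero. A rough awk sketch of that idea (not a full SQL formatter; query.sql is a placeholder):

Code:
awk '{
    depth = 0
    out = ""
    for (i = 1; i <= length($0); i++) {
        c = substr($0, i, 1)
        if (c == "(") depth++
        else if (c == ")") depth--
        out = out c
        # break only after commas that sit outside any parentheses
        if (c == "," && depth == 0) out = out "\n"
    }
    print out
}' query.sql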

4. Shell Programming and Scripting

Extract data based on specific search criteria

I have a huge file (about 2 million records) containing data separated by "," (comma). As part of the requirement, I can't change the format. The objective is to remove some of the records based on the following condition: if the 23rd field on a line starts with 302, I need to remove that from the... (4 Replies)
Discussion started by: jaygamini
4 Replies
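
A minimal sketch for the condition described above, assuming plain comma-separated fields with no embedded commas; if the values are quoted, the pattern would need to allow for the leading quote:

Code:
# keep every line whose 23rd field does not start with 302
awk -F, '$23 !~ /^302/' bigfile > filtered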

5. Shell Programming and Scripting

Merging Lines based on criteria

Hello, I need help with the following scenario. A file contains the following text: {beginning of file} New: This is a new record and it is not on same line. Since I have lost touch with script take this challenge and bring all this in one line. New: Hello losttouch. You seem to be struggling... (4 Replies)
Discussion started by: losttouch
4 Replies
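
This is the same record-joining problem as the answer at the top of the page; a printf-based awk sketch, assuming every record begins with "New:" (file is a placeholder name):

Code:
awk '{
    if (/^New:/ && NR > 1) printf("\n")   # a new record starts a new output line
    else if (NR > 1)       printf(" ")    # continuation: glue onto the current line
    printf("%s", $0)
}
END { print "" }' file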

6. Shell Programming and Scripting

Need To Delete Lines Based On Search Criteria

Hi All, I have the following input file. I wish to retain those lines which match multiple search criteria. The search criteria are stored in a variable, separated from each other by a comma (,). SEARCH_CRITERIA = "REJECT, DUPLICATE" Input File: ERROR,MYFILE_20130214_11387,9,37.75... (3 Replies)
Discussion started by: angshuman
3 Replies
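
One approach is to turn the comma-separated criteria into an alternation pattern for grep -E; a sketch (the variable follows the post, input_file is a placeholder):

Code:
SEARCH_CRITERIA="REJECT, DUPLICATE"
pattern=$(printf '%s' "$SEARCH_CRITERIA" | sed 's/, */|/g')   # becomes REJECT|DUPLICATE
grep -E "$pattern" input_file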

7. Shell Programming and Scripting

Append next line to previous lines when NF is less than 0

Hi All, This is very urgent. I have a data file with 1.7 million rows, the delimiter is a cedilla, and I need to format the data in such a way that if the NF in the next row is less than 1, it appends that value to the previous line. Any help will be appreciated. Thanks,... (17 Replies)
Discussion started by: cumeh1624
17 Replies
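
A sketch of the field-count approach, with two stated assumptions: the delimiter really is the cedilla character and a complete record has at least 5 fields (min is a placeholder for the real column count, datafile for the file name):

Code:
awk -F'¸' -v min=5 '
    NR == 1  { prev = $0; next }
    NF < min { prev = prev $0; next }    # too few fields: glue onto the previous row
             { print prev; prev = $0 }
    END      { print prev }
' datafile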

8. Shell Programming and Scripting

Copying section of file based on search criteria

Hi Gurus, I am new to unix scripting. I have a huge file with user details in it (file2) and I have another file with a list of users (file1). The script has to search for each user from file1 and get all the associated lines from file2. Example: file1: cn=abc cn=DEF cn=xyx File 2: dn:... (10 Replies)
Discussion started by: Samingla
10 Replies
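
If the entries in file2 are separated by blank lines (an assumption; the post only shows that they start with dn:), awk's paragraph mode keeps this short:

Code:
# print every blank-line-separated block of file2 that mentions a user listed in file1
while read -r user; do
    [ -n "$user" ] && awk -v u="$user" 'BEGIN { RS = "" } index($0, u) { print $0 "\n" }' file2
done < file1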

9. Shell Programming and Scripting

Returning multiple outputs of a single line based on previous repeated lines

Hello, I am trying to return a time multiple times from a file that has varying output just before the time instance, i.e. cat jumped cat jumped cat jumped time = 1.1 cat jumped cat jumped time = 1.2 cat jumped cat jumped time = 1.3 In this case I would like to output a time.txt... (6 Replies)
Discussion started by: ryddner
6 Replies
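
Reading the example as "print each time value once for every cat jumped line that precedes it", a minimal awk sketch (logfile is a placeholder name):

Code:
awk '/^cat jumped/ { n++; next }
     /^time =/     { for (i = 0; i < n; i++) print $NF; n = 0 }' logfile > time.txt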

10. Shell Programming and Scripting

awk to print specific line in file based on criteria

In the file below I am trying to extract a specific instance of "path" when the adjacent "plugin" field is "/rundb/api/v1/plugin/49/". Thank you :). file "path": "/results/analysis/output/Home/Auto_user_S5-00580-4-Medexome_65_028/plugin_out/FileExporter_out.52", "plugin": "/rundb/api/v1/plugin/49/",... (8 Replies)
Discussion started by: cmccabe
8 Replies
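
Assuming each "path" line is immediately followed by its "plugin" line, as in the snippet, one sketch is to remember the last path value and print it when the wanted plugin shows up:

Code:
awk -F'"' '
    /"path":/                                    { path = $4 }
    /"plugin": "\/rundb\/api\/v1\/plugin\/49\//  { print path }
' file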