Full Discussion: awk for replacing line feed
Post 302841625 by lokaish23 on Wednesday 7th of August 2013 04:25:15 PM
Actually no, as I only provided sample data.
The best regular expression to look for is:
Code:
|" \n "

The line feed comes right after the "My_first name" field, and there are more fields after "My_other".
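
For reference, here is a minimal sketch of one way to act on that pattern with GNU awk: slurp the whole file into a single record and delete only the line feed that sits inside the |" \n " sequence. The input/output file names and the gawk-only slurp idiom (RS = "^$") are illustrative assumptions, not details from this thread.
Code:
# gawk-only sketch: read the whole file as one record, then drop just the
# line feed that appears between  |"  and the following  "
gawk 'BEGIN { RS = "^$"; ORS = "" }
      { print gensub(/(\|" )\n( ")/, "\\1\\2", "g") }' input.txt > fixed.txt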
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

replace last form feed with line feed

Hi I have a file with lots of line feeds and form feeds (page break). Need to replace last occurrence of form feed (created by - echo "\f" ) in the file with line feed. Please advise how I can achieve this. TIA Prvn (5 Replies)
Discussion started by: prvnrk
5 Replies
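
A sketch of one way to approach that one (not necessarily the answer that thread settled on), assuming each form feed sits on a line of its own because it was written with echo "\f":
Code:
# two passes over the same file with gawk: the first pass remembers which
# line holds the last form feed, the second prints a bare line feed there
gawk 'NR == FNR { if ($0 ~ /\f/) last = FNR; next }
      FNR == last { print ""; next }
      { print }' file file > fixed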

2. Shell Programming and Scripting

Get the 1st 99 characters and add new line feed at the end of the line

I have a file with varying record length in it. I need to reformat this file so that each line will have a length of 100 characters (99 characters + the line feed). AU * A01 EXPENSE 6990370000 CWF SUBC TRAVEL & MISC MY * A02 RESALE 6990788000 Y... (3 Replies)
Discussion started by: udelalv
3 Replies
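
A sketch of the usual printf approach, assuming short lines should be right-padded with spaces (the excerpt does not say which pad character to use). Note that the %-99.99s format also truncates anything longer than 99 characters:
Code:
# pad (or truncate) every line to exactly 99 characters, then add the line feed
awk '{ printf "%-99.99s\n", $0 }' input.txt > fixed100.txt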

3. Shell Programming and Scripting

awk remove line feed

Hi, I've this file: 1, 2, 3, 4, 5, 6, I need to remove the line feed LF every 3 row. 1,2,3, 4,5,6, Thanks in advance, Alfredo (5 Replies)
Discussion started by: alfreale
5 Replies
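
A sketch of the classic idiom for that: print each line without its line feed and emit a newline only after every third input line.
Code:
# 1,<LF>2,<LF>3,<LF>4,... becomes 1,2,3,<LF>4,5,6,
# (if the line count is not a multiple of 3, the last output line ends without a newline)
awk '{ printf "%s", $0 } NR % 3 == 0 { print "" }' file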

4. Shell Programming and Scripting

Want to remove a line feed depending on number of tabs in a line

Hi! I have been struggling with a large file that has stray end of line characters. I am working on a Mac (Lion). I mention this only because I have been mucking around with fixing my problem using sed, and I have learned far more than I wanted to know about Unix and Mac eol characters. I... (1 Reply)
Discussion started by: user999991
1 Replies
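
A sketch of the general technique only; EXPECTED=12 is a made-up tab count for illustration, since the excerpt does not say how many tabs a complete record has. The idea is to keep gluing the next physical line on until the record contains enough tabs.
Code:
# gsub(/\t/, "\t", rec) leaves rec unchanged but returns the number of tabs in it
awk -v EXPECTED=12 '
{
    rec = $0
    while (gsub(/\t/, "\t", rec) < EXPECTED && (getline nxt) > 0)
        rec = rec nxt                 # drop the stray end-of-line character
    print rec
}' broken.txt > fixed.txt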

5. Shell Programming and Scripting

Replacing last line with awk and change the file name

Hi Guys, I am having a set of date format files where I am performing the below set of operations in the files. I need to replace the last line value with specific date which is a pipe delimited file. for egf1_20140101.txt aa|aus|20140101|yy bb|nz|20140101|yy . .... (19 Replies)
Discussion started by: rohit_shinez
19 Replies
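
The excerpt above is truncated, so the following is only a rough sketch under assumptions: the date sits in column 3 of the pipe-delimited file, only the last line needs the new date, and the output file name should carry that date. The value 20140201 is purely illustrative.
Code:
new=20140201                                   # hypothetical replacement date
f=egf1_20140101.txt
awk -F'|' -v OFS='|' -v new="$new" '
    NR == FNR { last = FNR; next }             # pass 1: find the last line number
    FNR == last { $3 = new }                   # pass 2: change the date on that line
    { print }' "$f" "$f" > "egf1_${new}.txt"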

6. Shell Programming and Scripting

Replacing multiple line patterns with awk

Hi forum, Can you please help me understand how to look for and replace the below pattern (containing line breaks) and return a new result? Rules: Must match the 3 line pattern and return a 1 line result. I have found solutions with sed, but it seems that sed installed in my system is... (5 Replies)
Discussion started by: demmel
5 Replies
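
A generic sketch of a sliding three-line window in awk; the real three-line pattern is not shown in the excerpt, so the /^A/, /^B/ and /^C/ tests and the one-line output format below are placeholders.
Code:
awk '
    { buf[++n] = $0 }
    n == 3 {
        if (buf[1] ~ /^A/ && buf[2] ~ /^B/ && buf[3] ~ /^C/) {
            print buf[1] " " buf[2] " " buf[3]    # collapse the match to one line
            n = 0
        } else {
            print buf[1]                          # no match: emit the oldest line
            buf[1] = buf[2]; buf[2] = buf[3]; n = 2
        }
    }
    END { for (i = 1; i <= n; i++) print buf[i] } # flush whatever is left
' input.txt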

7. UNIX for Dummies Questions & Answers

awk command not replacing in first line

As per requirement if column 2 is NULL then 'N' ELSE 'Y'. I have written below awk code. But it is not replacing values for first line. cat temp.txt 1|abc|3 1||4 1|11|c awk -F'|' '{if($2==""){$2="N"}else{$2="Y"} print $0 } {OFS="|"} ' < temp.txt 1 Y 3 ... (4 Replies)
Discussion started by: max_hammer
4 Replies
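
For reference, the usual fix for that symptom is to set FS and OFS in a BEGIN block, so even the very first record is split and rebuilt with the | separator. A sketch against the sample temp.txt quoted above:
Code:
# in the quoted attempt, OFS="|" only takes effect after the first record has
# already been rebuilt and printed; BEGIN runs before any input is read
awk 'BEGIN { FS = OFS = "|" } { $2 = ($2 == "") ? "N" : "Y" } 1' temp.txt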

8. Shell Programming and Scripting

awk issue splitting a fixed-width file containing line feed in data

Hi Forum. I have the following script that splits a large fixed-width file into smaller multiple fixed-width files based on input segment type. The main command in the script is: awk -v search_col_pos=$search_col_pos -v search_str_len=$search_str_len -v segment_type="$segment_type"... (8 Replies)
Discussion started by: pchang
8 Replies

9. UNIX for Beginners Questions & Answers

awk Command to add Carriage Return and Line Feed

Hello, Can someone please share a Simple AWK command to append Carriage Return & Line Feed to the end of the file, If the Carriage Return & Line Feed does not exist ! Thanks (16 Replies)
Discussion started by: rosebud123
16 Replies
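
A sketch of one way to do it in awk, assuming "the end of the file" means the final line should end in CR+LF and every other line should pass through untouched:
Code:
# print every line except the last unchanged; terminate the last line with
# \r\n unless it already ends in a carriage return
awk 'NR > 1 { print prev } { prev = $0 }
     END { if (prev ~ /\r$/) print prev; else printf "%s\r\n", prev }' in.txt > out.txt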

10. Shell Programming and Scripting

Getting an unexpected newline in my while loop line-by-line feed

Hi, I'm trying to get a line returned as is from the below input.csv file in Bash in Linux, and somehow I get an unexpected newline in the middle of my input. Here's a sample line in input.csv $> more input.csv TEST_SYSTEM,DUMMY@GMAIL.COM|JULIA H|BROWN And here's a very basic while loop... (7 Replies)
Discussion started by: ChicagoBlues
7 Replies
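
If the loop itself is the culprit, the usual safe line-reading pattern in bash is IFS= read -r, which stops the shell from word-splitting, trimming, or interpreting backslashes in the line. A sketch:
Code:
# read input.csv one physical line at a time, exactly as stored in the file
while IFS= read -r line; do
    printf '%s\n' "$line"       # echo can mangle backslashes; printf does not
done < input.csv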