Shell Programming and Scripting: Combining rows in a text file with a character limit (Post 302528731 by rdcwayx, Tuesday 7 June 2011, 10:18 PM)
Code:
awk '{X=(X=="")?$0:X OFS $0; if (length(X)>30) {print X;X=""}}
   END{if (X!="") print X}' OFS="|" infile
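In words: each input line is appended to a buffer X, joined with the OFS separator ("|"); as soon as the accumulated string grows past 30 characters it is printed and the buffer is reset, and the END block flushes whatever is left over. A purely illustrative run (the infile contents below are made up, not from the thread):

Code:
$ cat infile
aaaa
bbbb
cccc
dddd
eeee
ffff
gggg
$ awk '{X=(X=="")?$0:X OFS $0; if (length(X)>30) {print X;X=""}}
   END{if (X!="") print X}' OFS="|" infile
aaaa|bbbb|cccc|dddd|eeee|ffff|gggg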

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

how to strip rows from a text file?

Can an expert kindly write an efficient Linux ksh script that will strip rows with no numbers from a text file? Suppose the text file text.txt contains three rows: "field1","field2","field3",11,22,33,44 "field1","field2","field3",1,2,3,4 "field1","field2","field3",,,, The... (5 Replies)
Discussion started by: ihot
5 Replies
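A plain grep for digits will not help here, because the quoted field names ("field1" and so on) contain digits themselves. A minimal awk sketch, assuming the numeric data always starts at the fourth comma-separated field as in the sample:

Code:
# keep only rows whose fields from the 4th onward contain at least one digit
# (assumes the numeric data always starts at column 4, as in the sample)
awk -F, '{s=""; for (i = 4; i <= NF; i++) s = s $i; if (s ~ /[0-9]/) print}' text.txt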

2. UNIX for Dummies Questions & Answers

find and remove rows from file where multi occurrences of character found

I have a '~' delimited file of 6 - 7 million rows. Each row should contain 13 columns delimited by 12 ~'s. Where there are 13 tildes, the row needs to be removed. Each row contains alphanumeric data and occasionally a ~ ends up in a descriptive field and therefore acts as a delimiter, resulting in... (1 Reply)
Discussion started by: kpd
1 Replies
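A minimal sketch for this one (file names are placeholders): a row with 12 tildes splits into exactly 13 fields on '~', so keeping NF == 13 drops the rows with a stray delimiter:

Code:
# keep rows with exactly 13 '~'-delimited columns (12 tildes)
awk -F'~' 'NF == 13' bigfile > clean.txt
# optionally capture the rejected rows for inspection
awk -F'~' 'NF != 13' bigfile > rejected.txt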

3. Shell Programming and Scripting

read the text file and print the content character by character..

Hello all, I request a solution for the following problem: I want to read a text file and print its contents character by character. For example, if the text file contains "google", I want to print g go goo goog googl google, using UNIX shell scripting... without using... (1 Reply)
Discussion started by: samupnl
1 Replies
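One way to do this (a sketch, not necessarily the answer given in that thread) is to print every prefix of each line:

Code:
# for each line, print its prefixes of length 1, 2, ... length of the line
awk '{for (i = 1; i <= length($0); i++) print substr($0, 1, i)}' file.txt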

4. Linux

Splitting a Text File by Rows

Hello, Please help me. I have hundreds of text files composed of several rows of information and I need to separate each row into a new text file. I was trying to figure out how to split the text file into different text files, based on each row of text in the original text file. Here is an... (2 Replies)
Discussion started by: dvdrevilla
2 Replies
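A sketch under the assumption that "each row into a new text file" means one numbered output file per input line:

Code:
# write every line of input.txt to its own numbered file (row_1.txt, row_2.txt, ...)
# close() avoids running out of file descriptors on large inputs
awk '{f = "row_" NR ".txt"; print > f; close(f)}' input.txt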

5. Shell Programming and Scripting

Duplicate rows in a text file

Notes: I am using Cygwin and Notepad++ only for checking this, and my OS is XP. #!/bin/bash typeset -i totalvalue=$(wc -w < /cygdrive/c/cygwinfiles/database.txt) typeset -i totallines=$(wc -l < /cygdrive/c/cygwinfiles/database.txt) typeset -i columnlines=`expr $totalvalue / $totallines` awk -F' ' -v... (5 Replies)
Discussion started by: whitecross
5 Replies
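The quoted script is truncated, but if the underlying goal is simply to detect duplicate rows, a short sketch would be:

Code:
# print rows that occur more than once (first occurrence suppressed)
awk 'seen[$0]++' /cygdrive/c/cygwinfiles/database.txt
# or show each duplicated line once, with its count
sort /cygdrive/c/cygwinfiles/database.txt | uniq -dc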

6. Shell Programming and Scripting

combining cat output and cutting rows

I have a file that contains the following: -D HTTPD_ROOT="/usr/local/apache" -D SERVER_CONFIG_FILE="conf/httpd.conf" I want a shell script so that when I cat the file and apply the script I get the following output: /usr/local/apache/conf/httpd.conf i.e. cat filename |... (7 Replies)
Discussion started by: anilcliff
7 Replies
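A sketch that pulls the two quoted values and joins them with a slash (splitting on the double quote; assumes the HTTPD_ROOT line appears before the SERVER_CONFIG_FILE line):

Code:
# field 2 is the text between the first pair of double quotes on each line
awk -F'"' '/HTTPD_ROOT/ {root = $2} /SERVER_CONFIG_FILE/ {print root "/" $2}' filename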

7. Shell Programming and Scripting

Combining multiple rows in single row based on certain condition using awk or sed

Hi, I'm using AIX (ksh shell). > cat temp.txt "a","b",0 "c",bc",0 "a1","b1",0 "cc","cb",1 "cc","b2",1 "bb","bc",2 I want the output as: "a","b","c","bc","a1","b1" "cc","cb","cc","b2" "bb","bc" I want to combine multiple lines into a single line where the third column is the same. Is... (1 Reply)
Discussion started by: samuelray
1 Replies
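A sketch that produces the requested output for the sample, assuming rows sharing the same third column are contiguous (otherwise collect the pairs into an array keyed on $3):

Code:
# join the first two fields of consecutive rows that share the same third field
awk -F, '{k = $3; pair = $1 FS $2
          if (NR > 1 && k != prev) {print out; out = ""}
          out = (out == "") ? pair : out FS pair
          prev = k}
     END {if (out != "") print out}' temp.txt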

8. Shell Programming and Scripting

Combining rows into columns

Hi experts, I have a flat file with the contents below: Database1 Table1 column1 Database1 Table1 column2 Database1 Table1 column3 Database1 Table1 column4 Database1 Table2 Column1 Database1 Table2 Column2 Database2 Table1 Column1 Database2 Table1 Column2 Database2 Table1 Column3... (9 Replies)
Discussion started by: Selva_2507
9 Replies
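The snippet does not show the desired output, so the format below is an assumption: one line per database/table pair, with its columns collected in input order:

Code:
# group the third field by "database table" key, preserving first-seen order
awk '{k = $1 " " $2
      if (!(k in cols)) order[++n] = k
      cols[k] = (k in cols) ? cols[k] "," $3 : $3}
 END {for (i = 1; i <= n; i++) print order[i], cols[order[i]]}' flatfile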

9. Shell Programming and Scripting

Replace a character of specified column(s) of all rows in a file

Hi - I have a file "file1" in the format below. It is a comma-separated file; note that each string is enclosed in double quotes. "abc","-0.15","10,000.00","IJK" "xyz","1,000.01","1,000,000.50","OPR" I want the result as: "abc","-0.15","10000.00","IJK" "xyz","1,000.01","1000000.50","OPR" I... (8 Replies)
Discussion started by: njny
8 Replies
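A sketch that splits on the "," sequence between quoted fields, so embedded commas stay inside their field, and strips commas from the third field only (assumes no escaped quotes inside fields):

Code:
# FS/OFS of "," keeps the surrounding quotes intact while exposing each field
awk 'BEGIN {FS = OFS = "\",\""} {gsub(/,/, "", $3); print}' file1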

10. Shell Programming and Scripting

Character screening and paste into new file in columns instead of rows

QL10169_SAUJANA%SubNetwork=ONRM_ROOT_MO_R,SubNetwork=ERBS_KCRN11,MeContext=QL10169_SAUJANA_5 %External_Link_Failure %X2_link_problem_to_one_or_several_neighbouring_eNodeBs. QL10187_MATANG_JAYA_2%SubNetwork=ONRM_ROOT_MO_R,SubNetwork=ERBS_KUCHING,MeContext=QL10187_MATANG_JAYA_2_3... (2 Replies)
Discussion started by: Ankit Vyas
2 Replies