Split column into two on first occurrence of any number


 
# 8  
07-07-2015
s is the substitution command: whatever the pattern matches is replaced by the replacement text. In the replacement, & stands for the matched text, so it puts the match back in.
1.
Code:
sed 's/[0-9]/|&/' num.txt       # put a "|" in front of the first digit on each line
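
As a quick illustration (the num.txt contents below are made-up sample data, not from the thread), the first digit on each line is matched and & puts that digit back right after the new |:
Code:
$ cat num.txt
abc123def
item42
$ sed 's/[0-9]/|&/' num.txt
abc|123def
item|42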

2.
Code:
sed 's/^[^0-9]*/&|/' num.txt    # match the leading run of non-digits and append "|" after it
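
As before, this is just a sketch on made-up sample input. The second form matches from the start of the line up to the first digit, and & writes that prefix back followed by |. On lines that contain a digit it gives the same result as the first command; the two only differ on a line with no digits at all, where the first command leaves the line untouched and the second still appends a |:
Code:
$ printf 'abc123def\nnodigits\n' | sed 's/[0-9]/|&/'
abc|123def
nodigits
$ printf 'abc123def\nnodigits\n' | sed 's/^[^0-9]*/&|/'
abc|123def
nodigits|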


10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

awk split columns to rows after N number of columns

I want to split this with every 5 or 50 depending on how much data the file will have, and remove the comma at the end. Source file will have 001,0002,0003,004,005,0006,0007,007A,007B,007C,007E,007F,008A,008C Need output after every 5, tab-separated, and remove the comma from the end of each row ... (4 Replies)
Discussion started by: ranjancom2000
4 Replies

2. Shell Programming and Scripting

Print occurrence number

Hi folks, I have a file with lots of lines in a text file, I need to print the occurrence number after sorting based on the first column as shown below, thanks in advance. sam,dallas,20174 sam,houston,20175 sam,atlanta,20176 jack,raleigh,457865 jack,dc,7845 john,sacramento,4567 ... (4 Replies)
Discussion started by: tech_frk
4 Replies

3. Shell Programming and Scripting

Split column data if the table has n number of columns with some records

Split column data if the table has n number of columns with some records, then how to split n number of columns line by line with records Table --------- Col1 col2 col3 col4 ....................col20 1 2 3 4 .................... 20 a b c d .................... v ... (11 Replies)
Discussion started by: Priti2277
11 Replies

4. Shell Programming and Scripting

Split column data if the table has n number of columns

Please write a shell script. Table -------------------------- 1 2 3 a b c 3 4 5 c d e 7 8 9 f g h Output should be like this --------------- 1 2 3 3 4 5 7 8 9 a b c c d e f g h (1 Reply)
Discussion started by: Priti2277
1 Reply

5. UNIX for Dummies Questions & Answers

Add number of occurrences

Good morning, I need help to add the number of occurrences based on column 1 & column 5. File input 81161267334|1|100000|81329998077|20150902 81161267334|1|50000|82236060161|20150902 81161268637|1|25000|81329012229|20150911 81161269307|1|25000|81327019134|20150901... (3 Replies)
Discussion started by: radius
3 Replies

6. Shell Programming and Scripting

Split a fixed-length file based on last occurrence of string

Hi, I need to split a file based on the last occurrence of a string. PFB the explanation. I have a file in the following format aaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbb ccccccccccccccccccccccccccc ddddddddddddddddddddddddddd 3186rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr... (4 Replies)
Discussion started by: Neelkanth
4 Replies

7. Shell Programming and Scripting

awk code to ignore the first occurrence of an unknown number of rows in a data column

Hello experts, Shown below is the 2-column sample data (there are many data columns in the actual input file), Key, Data A, 1 A, 2 A, 2 A, 3 A, 1 A, 1 A, 1 I need the below output. Key, Data A, 2 A, 2 A, 3 A, 1 A, 1 A, 1 (2 Replies)
Discussion started by: ks_reddy
2 Replies

8. Shell Programming and Scripting

Number each occurrence using sed

Hi, I need to number each occurrence of a pattern within a file using sed. Given object 0000 object 111 object 222 I need the following 1.object 0000 2.object 111 3.object 222 (5 Replies)
Discussion started by: xerox
5 Replies

9. Shell Programming and Scripting

Split single file into multiple files based on the number in the column

Dear All, I would like to split a file of the following format into multiple files based on the number in the 6th column (numbers 1, 2, 3...): ATOM 1 N GLY A 1 -3.198 27.537 -5.958 1.00 0.00 N ATOM 2 CA GLY A 1 -2.199 28.399 -6.617 1.00 0.00 ... (3 Replies)
Discussion started by: tomasl
3 Replies

10. UNIX for Dummies Questions & Answers

grep the number of first occurrence

File1.txt ....... ....... OMC LA OMC LK OMC LS ........ ........ Above is the content of File1.txt, I want to get the Number of Occurrence in order, let's say if OMC LA = 1, OMC LS=3, and OMC LK=2.. omc_ident="OMC LA" or "OMC LK" or "OMC LS" omc_num=`grep '^OMC' File1.txt| grep... (4 Replies)
Discussion started by: neruppu
4 Replies