Duplicate and change a column - Post 302998935 by Don Cragun, 06-09-2017 05:58 PM
Let's be clear here... RudiC's code worked perfectly for the problem you presented.

Now you have presented a different problem. And that problem is not clearly stated. We are all supposed to guess at what your real input specification is by looking at two samples. We might guess correctly or we might all be wasting our time making bad guesses.

If what you are trying to do is duplicate the contents of each line, separating the original line contents from the duplicate with a <space>, and, if and only if an unsigned decimal number appears between square brackets somewhere on that line (with no other characters between those brackets), replace the first occurrence of that number in the duplicated contents with that number incremented by one, then you might try running something like:
Code:
awk '
# Line contains an unsigned decimal number between square brackets:
# print the line, a <space>, and a copy of the line rebuilt from the
# text up through the "[", the number plus one, and the text from the
# "]" onward.
match($0, /[[][0-9]+[]]/) {
	print $0, substr($0, 1, RSTART) \
	    (substr($0, RSTART + 1, RLENGTH - 2) + 1) \
	    substr($0, RSTART + RLENGTH - 1)
	next
}
# No such number on the line: print the line followed by an unchanged copy.
{	print $0, $0
}' file

which, if file contains:
Code:
a[4]
b[4]
c[4]
d[4]_s[3]
e[X]_f[99]_g[123]
f [9] g
h123[h]123[i]123[12345678]123[j]
i[abc]_i[cde]_i[fgh]
j[z1]_2[1y]_3[1x1]

produces the output:
Code:
a[4] a[5]
b[4] b[5]
c[4] c[5]
d[4]_s[3] d[5]_s[3]
e[X]_f[99]_g[123] e[X]_f[100]_g[123]
f [9] g f [10] g
h123[h]123[i]123[12345678]123[j] h123[h]123[i]123[12345679]123[j]
i[abc]_i[cde]_i[fgh] i[abc]_i[cde]_i[fgh]
j[z1]_2[1y]_3[1x1] j[z1]_2[1y]_3[1x1]
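In case the substring arithmetic in the match() block is not obvious, here is a minimal illustration (using the sample line d[4]_s[3] from the input above) that just prints the three pieces the script reassembles:
Code:
# Illustration only: show what match()/RSTART/RLENGTH carve out of "d[4]_s[3]".
echo 'd[4]_s[3]' | awk '
match($0, /[[][0-9]+[]]/) {
	print "RSTART =", RSTART, "RLENGTH =", RLENGTH        # 2 and 3
	print "prefix:", substr($0, 1, RSTART)                # d[
	print "number:", substr($0, RSTART + 1, RLENGTH - 2)  # 4
	print "suffix:", substr($0, RSTART + RLENGTH - 1)     # ]_s[3]
}'

The incremented copy is simply prefix, number + 1, and suffix concatenated back together.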

Did I make a better guess, or do you think my suggestion is also partially correct?