Creating duplicates in awk
Posted by Homa on 18 April 2013

Hi,

I am using Ubuntu 12.04

I have a file like the following:

Code:
KHO123
KHO245
KHO456
.
.
.

I want to add a second column of characters to my file, and I want a script that does this automatically: whatever the number of lines in the first column, the string I need should be repeated next to each of them. For example, if the file has 15 rows, I want that specific string added as the second column of each of the 15 lines.

Such that:

Code:
KHO123 null
KHO245 null
KHO456 null
.
.

Thanks a lot for your help.
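
Since awk visits every input line, the string is repeated exactly as many times as there are rows, so no line count is needed. A minimal sketch, assuming whitespace-separated output, hypothetical file names, and "null" as the string to repeat:

Code:
awk -v str="null" '{ print $0, str }' input.txt > output.txt

Passing the string with -v keeps the one-liner reusable for other strings without editing the script itself.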
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Awk to find duplicates in 2nd field

I want to find duplicates in a file on the 2nd field. I wrote this code: nawk '{a++} END{for i in a {if (a>1) print}}' temp. I could not find what's wrong with this. Appreciate the help. (5 Replies)
Discussion started by: pinnacle
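The quoted code appears to have lost its array subscripts in the archive; a hedged reconstruction of what it seems to attempt (count field 2, then print the values seen more than once). Note that the loop also needs parentheses, i.e. for (i in a):

Code:
awk '{ a[$2]++ } END { for (i in a) if (a[i] > 1) print i }' temp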

2. Shell Programming and Scripting

Creating a new line using awk?

Is there a way, while using awk to print out lines, to create a new line after every line that is displayed? (2 Replies)
Discussion started by: puttster
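A minimal sketch, assuming "a new line" means a blank line after each line that is printed:

Code:
awk '{ print; print "" }' file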

3. Shell Programming and Scripting

Awk Help - duplicates in $1 that match x & y in $2

I'm primarily a "Windows" systems administrator who's been getting his toes wet in the Linux waters. I am new to programming and advanced scripting, so please bear with me and my incomplete example below. I have exported all entries from our DNS zones. I used sed to remove everything other than the... (3 Replies)
Discussion started by: Omaplata
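The question is truncated, but a sketch of one plausible reading: print the duplicate occurrences of column 1 among records whose column 2 matches either of two values (the record types "A" and "CNAME" here are hypothetical stand-ins for x and y):

Code:
awk '($2 == "A" || $2 == "CNAME") && seen[$1]++ { print $1 }' zones.txt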

4. Shell Programming and Scripting

awk statement to eliminate the duplicates

Consider the output below: cat tablextract2.sql CREATE PROCEDURE after72DeleteTgr(id int) BEGIN END $$ Delimiter ; CREATE PROCEDURE after72DeleteTgr(id int) BEGIN END $$ Delimiter ; # # proc_name1="after72DeleteTgr" # # echo "`awk '{if($3~v){a=1}}a;/elimiter\|DELIMITER/{exit}'... (17 Replies)
Discussion started by: vivek d r
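The quoted attempt is truncated, but one way to drop repeated blocks is to treat each "Delimiter ;" terminated block as a single record. A sketch assuming GNU awk, which allows a multi-character record separator:

Code:
gawk 'BEGIN { RS = ORS = "Delimiter ;\n" } !seen[$0]++' tablextract2.sql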

5. Shell Programming and Scripting

Find duplicates in column 1 and merge their lines (awk?)

Hi, I have a file (sorted by sort) with 8 tab-delimited columns. The first column contains duplicated fields, and I need to merge all these identical lines. My input file: comp100002 aaa bbb ccc ddd eee fff ggg comp100003 aba aba aba aba aba aba aba comp100003 fff fff fff fff fff fff fff... (5 Replies)
Discussion started by: falcox
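A sketch assuming tab-delimited input; it keeps keys in order of first appearance and appends the remaining columns of each later duplicate to the first line for that key:

Code:
awk -F'\t' -v OFS='\t' '
{
    k = $1
    sub(/^[^\t]*\t/, "")                       # drop the key from the record
    if (!(k in merged)) { order[++n] = k; merged[k] = $0 }
    else merged[k] = merged[k] OFS $0
}
END { for (i = 1; i <= n; i++) print order[i], merged[order[i]] }' file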

6. Shell Programming and Scripting

Awk: Remove Duplicates

I have the following code for removing duplicate records based on fields in the input file: the 1st awk moves the duplicate records to a duplicates file, and in the 2nd awk I fetch the non-duplicate entries in the input file to a tmp file and use mv to update the original file. Requirement: can both the awk... (4 Replies)
Discussion started by: siramitsharma
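The original two commands are not shown in full, but both steps can be folded into one two-pass awk invocation. A sketch keying on the whole line, with hypothetical output names:

Code:
awk 'NR == FNR { count[$0]++; next }
     { print > (count[$0] > 1 ? "duplicates.txt" : "unique.tmp") }' inputfile inputfile

Afterwards, mv unique.tmp inputfile updates the original, as in the poster's second step.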

7. Shell Programming and Scripting

awk remove first duplicates

Hi all, I have searched many threads for a possible close solution, but I was unable to find a similar scenario. I would like to print all duplicates based on the 3rd column except the first occurrence, and also print single (non-duplicate) entries. Input file: 12 NIL ABD LON 11 NIL ABC... (6 Replies)
Discussion started by: sybadm
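A two-pass sketch: the first pass counts column 3, the second keeps rows whose value is unique and, for duplicated values, skips only the first occurrence:

Code:
awk 'NR == FNR { count[$3]++; next }
     count[$3] == 1 || seen[$3]++' file file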

8. Shell Programming and Scripting

awk - Remove duplicates during array build

Greetings experts. Issue: within an awk script, remove the duplicate occurrences that are separated by a single space character. Description: I am processing 2 files using awk, and during processing I am building an array that contains duplicates; how can I delete the duplicates... (3 Replies)
Discussion started by: chill3chee
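The script itself is not shown, but the usual idiom is to test membership before appending. A sketch that dedupes the space-separated tokens on each line; split("", seen) is the portable way to empty an array:

Code:
awk '{
    out = ""
    split("", seen)                    # reset per line
    for (i = 1; i <= NF; i++)
        if (!seen[$i]++)               # keep only the first occurrence
            out = (out == "" ? $i : out " " $i)
    print out
}' file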

9. Shell Programming and Scripting

awk to Sum columns when other column has duplicates and append one column value to another with Care

Hi experts, please bear with me; I need help. I am learning awk and am stuck on one issue. First point: I want to sum up the values of columns 7, 9, 11, 13 and 15 if rows in column 5 are duplicates. No action is to be taken for rows where the value in column 5 is unique. Second point: For... (1 Reply)
Discussion started by: as7951
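A sketch of the first point only, assuming whitespace-separated input; unique rows pass through untouched, and for each duplicated key only the key and the five totals are printed:

Code:
awk '
BEGIN { n = split("7 9 11 13 15", cols) }    # columns to total
NR == FNR { count[$5]++; next }              # pass 1: rows per key
count[$5] == 1                               # unique key: print as-is
count[$5] > 1 {
    for (i = 1; i <= n; i++) sum[$5, cols[i]] += $(cols[i])
    if (++done[$5] == count[$5]) {           # last row for this key
        out = $5
        for (i = 1; i <= n; i++) out = out OFS sum[$5, cols[i]]
        print out
    }
}' file file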

10. Shell Programming and Scripting

awk and seen to report duplicates

I have this file: @Muestra-1 agctgcgagctgcgacccgggttatataggaagagacacacacaccccc + !@$#%^&*()@^#&HH!&*(@&#*(FT^%$&*()*&^%@ @Muestra-2 agctgcgagctgcgacccgggttatataggaagagacacacacaccccc + !@$#%^&*()@^#&HH!&*(@&#*(FT^%$&*()*&^%@ @Muestra-3 agctgcgagctgcgacccgggttatataggaagagacacacacaccccc +... (4 Replies)
Discussion started by: Xterra
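The data looks like FASTQ (four lines per record, with the sequence on line 2). A sketch that names each record whose sequence already appeared, with a hypothetical file name:

Code:
awk 'FNR % 4 == 1 { hdr = $0 }
     FNR % 4 == 2 && seen[$0]++ { print hdr, "repeats an earlier sequence" }' reads.fastq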