Selective Replace awk column values
Posted by pravin27 in Shell Programming and Scripting on 10 April 2013
Code:
# cat variable.txt
NEGATIVE=3
NEG-DID=2
NEG-WARN=4
# cat file.txt
2860377|"DATA1"|"DATA2"|"65343"|"DATA2"|"DATA4"|"11"|"DATA5"|"DATA6"|"65343"|"DATA7"|"0"|"8"|"1"|"NEGATIVE"
32340377|"DATA1"|"DATA2"|"65343"|"DATA2"|"DATA4"|"11"|"DATA5"|"DATA6"|"65343"|"DATA7"|"0"|"8"|"1"|"NEG-DID"
72340377|"DATA1"|"DATA2"|"65343"|"DATA2"|"DATA4"|"11"|"DATA5"|"DATA6"|"65343"|"DATA7"|"0"|"8"|"1"|"NEG-WARN"
# awk -F"|" 'NR==FNR{split($0,a,"=");arr[a[1]]=a[2];next}
{gsub(/\"/,"",$15)
$15=arr[$15]}1' OFS="|" variable.txt file.txt
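The first pattern-action pair runs while reading variable.txt (NR==FNR): each NAME=value line is split on "=" and stored in the lookup array arr. For file.txt, the quotes are stripped from field 15 and the field is replaced by its mapped number. Against the sample files above, the output should look something like this (note that the rewritten field loses its surrounding quotes):

Code:
2860377|"DATA1"|"DATA2"|"65343"|"DATA2"|"DATA4"|"11"|"DATA5"|"DATA6"|"65343"|"DATA7"|"0"|"8"|"1"|3
32340377|"DATA1"|"DATA2"|"65343"|"DATA2"|"DATA4"|"11"|"DATA5"|"DATA6"|"65343"|"DATA7"|"0"|"8"|"1"|2
72340377|"DATA1"|"DATA2"|"65343"|"DATA2"|"DATA4"|"11"|"DATA5"|"DATA6"|"65343"|"DATA7"|"0"|"8"|"1"|4

If you want that last field to stay quoted like the others, an untested variation of the same idea is to wrap the looked-up value in quotes when assigning it:

Code:
# awk -F"|" 'NR==FNR{split($0,a,"=");arr[a[1]]=a[2];next}
{gsub(/\"/,"",$15)
$15="\"" arr[$15] "\""}1' OFS="|" variable.txt file.txt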
