Finding the pattern and replacing the pattern inside the file

Post 302836911 by rajamadhavan, Wednesday 24 July 2013, 11:11 PM
Assuming the variables are used only after they have been assigned in the file (otherwise a blank will be printed in place of the &xxx string):

Code:
$ perl -pe '$hash{$1}=$2 if /\&(\w+)\=(\w+)/;s/\&(\w++)(?!\=)/$hash{$1}/g;' file
go to &abc=ddd;
 if file found &ccc=10;
 no the value name is ddd;
  and the age is 10;
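
The input file itself is not shown in the post; judging from the output above, it was presumably something like this (hypothetical reconstruction):

Code:
go to &abc=ddd;
 if file found &ccc=10;
 no the value name is &abc;
  and the age is &ccc;

The one-liner works in two steps per line: the first statement records an &name=value assignment in %hash (one assignment per line, which is all this example needs), and the substitution then replaces any bare &name that is not followed by = with the stored value. The possessive quantifier \w++ prevents backtracking, so an assignment such as &abc=ddd is not partially rewritten along the way.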

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Replacing a paragraph between patterns with the content from another file

hi, i wanted to put the output of file f1 into the pattern space of file f2 f1: wjwjwjwjwjwjwj //these line go in file f2 jwjwjwjwjwjjwjw wjwjwjwjjwjwjwj f2: Pattern_start __________ //these are the line to be replaced __________ Pattern_end i m... (4 Replies)
Discussion started by: go4desperado

2. Shell Programming and Scripting

help with finding & replacing pattern in a file

Hi everyone. Could u be so kind and help me with on "simple" shell script? 1. i need to search a file line by line for a pattern. example of a lines in that file 2947 domain = feD,id = 00 0A 02 48 17 1E 1D 39 DE 00 0E 00,Name Values:snNo = f10 Add AttFlag = 0 2. i need to find... (0 Replies)
Discussion started by: dusoo

3. Shell Programming and Scripting

replacing a pattern in a file

Hi guys, i have a pattern that i am searching in a file and i want to extract some of this pattern ... module TS1N65ULPA96X32M4 ( .... i want to extract only TS1N65ULPA96X32M4 part and i do the following sed 's/module \(x*\).*/\1/' name_of_file but this is not quite right. could... (6 Replies)
Discussion started by: ROOZ

4. Shell Programming and Scripting

Finding the last occurrence of another pattern when a pattern is found

Hi, I have two files viz, rak1: $ cat rak1 rak2: $ cat rak2 sdiff rak1 rak2 returns: I want the lines that got modified, changed, or deleted preceding with the section they are in. I have done this so far: (1 Reply)
Discussion started by: rakeshou

5. Shell Programming and Scripting

Need help in sed command ( Replacing a pattern inside a file with a variable value )

Hello, The following sed command is giving error sed: -e expression #1, char 13: unknown option to `s' The sed command is echo "//-----" | sed "s/\/\/---*/$parChk/g" where parChk="//---ee-" How can i print the variable value from sed command ? And is it possible to replace a... (2 Replies)
Discussion started by: frozensmilz

6. Shell Programming and Scripting

CSV: Replacing multiple occurrences inside a pattern

Greatings all, I am coming to seek your knowledge and some help on an issue I can not currently get over. I have been searching the boards but did not find anything close to this matter I am struggling with. I am trying to clean a CSV file and make it loadable for my SQL*Loader. My problem... (1 Reply)
Discussion started by: OCanada

7. Shell Programming and Scripting

Help with matching pattern inside a file

I have a huge file that has roughly 30304 lines. I need to extract specific info from that file. For example, Box 1 > *aaaaaaaajjjj* > hbbvjvj > jdnnfddllll > *dgdfhfekwjh* Box 2 > *aaaaaaa'aj'jjj* > dse hkjuejef bfdw > dyeee > dsewq > *dgdfhfekwjh* >feweiuei Box 3 > *aaaa"aaaaj"jjj* >... (25 Replies)
Discussion started by: Ernst

8. Shell Programming and Scripting

Replacing a pattern in different cases in different columns with a single pattern

Hi All I am having pipe seperated inputs like Adam|PeteR|Josh|PEter Nick|Rave|Simon|Paul Steve|smith|PETER|Josh Andrew|Daniel|StAlin|peter Rick|PETer|ADam|RAVE i want to repleace all the occurrence of peter (in any case pattern PeteR,PEter,PETER,peter,PETer) with Peter so that output... (5 Replies)
Discussion started by: sudeep.id

9. UNIX for Beginners Questions & Answers

Copy pattern inside the file

Hi all, I have files and have a missing record. I need copy the existing record and mark those values up. For example in the below file 11048 is missing. I need to copy 22001 and copy those create the values for 11048. I have 120 set of files and I need to do that on all files. Note the... (8 Replies)
Discussion started by: arunkumar_mca

10. Shell Programming and Scripting

[sed] Finding and sticking the pattern to the beginning of successive lines up to the next pattern

I have a file like below. 2018.07.01, Sunday 09:27 some text 123456789 0 21 0.06 0.07 0.00 2018.07.02, Monday 09:31 some text 123456789 1 41 0.26 0.32 0.00 09:39 some text 456789012 1 0.07 0.09 0.09 09:45 some text 932469494 1 55 0.29 0.36 0.00 16:49 some text 123456789 0 48 0.12 0.15 0.00... (9 Replies)
Discussion started by: father_7
funjoin(1)							SAORD Documentation							funjoin(1)

NAME
       funjoin - join two or more FITS binary tables on specified columns

SYNOPSIS
       funjoin [switches] <ifile1> <ifile2> ... <ifilen> <ofile>

OPTIONS
  -a cols                 # columns to activate in all files
  -a1 cols ... -an cols   # columns to activate in each file
  -b 'c1:bv1,c2:bv2'      # blank values for common columns in all files
  -bn 'c1:bv1,c2:bv2'     # blank values for columns in specific files
  -j col                  # column to join in all files
  -j1 col ... -jn col     # column to join in each file
  -m min                  # min matches to output a row
  -M max                  # max matches to output a row
  -s                      # add 'jfiles' status column
  -S col                  # add col as status column
  -t tol                  # tolerance for joining numeric cols [2 files only]

DESCRIPTION
funjoin joins rows from two or more (up to 32) FITS Binary Table files, based on the values of specified join columns in each file. NB: each join column must have an index file associated with it. These index files are generated using the funindex program.

The first argument to the program specifies the first input FITS table or raw event file. If "stdin" is specified, data are read from the standard input. Subsequent arguments specify additional event files and tables to join. The last argument is the output FITS file. NB: do not use Funtools Bracket Notation to specify FITS extensions and row filters when running funjoin, or you will get wrong results. Rows are accessed and joined using the index files directly, and this bypasses all filtering.

The join columns are specified using the -j col switch (which specifies a column name to use for all files) or with -j1 col1, -j2 col2, ... -jn coln switches (which specify a column name to use for each file). A join column must be specified for each file. If both -j col and -jn coln are specified for a given file, then the latter is used. Join columns must be either of type string or type numeric; it is illegal to mix numeric and string columns in a given join. For example, to join three files using the same key column for each file, use:

  funjoin -j key in1.fits in2.fits in3.fits out.fits

A different key can be specified for the third file in this way:

  funjoin -j key -j3 otherkey in1.fits in2.fits in3.fits out.fits

The -a "cols" switch (and its -a1 "cols1", -a2 "cols2", ... counterparts) can be used to specify columns to activate (i.e. write to the output file) for each input file. By default, all columns are output. If two or more columns from separate files have the same name, the second (and subsequent) columns are renamed to have an underscore and a numeric value appended.

The -m min and -M max switches specify the minimum and maximum number of joins required to write out a row. The default minimum is 0 joins (i.e. all rows are written out) and the default maximum is 63 (the maximum number of possible joins with a limit of 32 input files). For example, to write out only those rows in which exactly two files have columns that match (i.e. one join):

  funjoin -j key -m 1 -M 1 in1.fits in2.fits in3.fits ... out.fits

A given row can have the requisite number of joins without all of the files being joined (e.g. three files are being joined but only two have a given join key value). In this case, all of the columns of the non-joined file are, by default, written out using blanks (zeros or NULLs). The -b 'c1:bv1,c2:bv2' and -b1 'c1:bv1,c2:bv2', -b2 'c1:bv1,c2:bv2', ... switches can be used to set the blank value for columns common to all files and/or columns in a specified file, respectively. Each blank value string contains a comma-separated list of column:blank_val specifiers. For floating point values (single or double), a case-insensitive string value of "nan" means that the IEEE NaN (not-a-number) should be used. Thus, for example:

  funjoin -b "AKEY:???" -b1 "A:-1" -b3 "G:NaN,E:-1,F:-100" ...

means that a non-joined AKEY column in any file will contain the string "???", the non-joined A column of file 1 will contain a value of -1, the non-joined G column of file 3 will contain IEEE NaNs, while the non-joined E and F columns of the same file will contain the values -1 and -100, respectively. Of course, where common and specific blank values are specified for the same column, the specific blank value is used.
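
As a practical sketch of the workflow described above (the exact funindex invocation is an assumption here; see funindex(1) for the arguments it actually takes on your system):

  # each join column needs an index file before funjoin can use it
  funindex in1.fits key
  funindex in2.fits key
  funindex in3.fits key

  # join on 'key'; -m 1 keeps only rows where at least two files matched (one join)
  funjoin -j key -m 1 in1.fits in2.fits in3.fits out.fits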

To distinguish which files are non-blank components of a given row, the -s (status) switch can be used to add a bitmask column named "JFILES" to the output file. In this column, a bit is set for each non-blank file composing the given row, with bit 0 corresponding to the first file, bit 1 to the second file, and so on. The file names themselves are stored in the FITS header as parameters named JFILE1, JFILE2, etc. The -S col switch allows you to change the name of the status column from the default "JFILES".

A join between rows is the Cartesian product of all rows in one file having a given join column value with all rows in a second file having the same value for its join column, and so on. Thus, if file1 has 2 rows with join column value 100, file2 has 3 rows with the same value, and file3 has 4 rows, then the join results in 2*3*4=24 rows being output.

The join algorithm directly processes the index file associated with the join column of each file. The smallest value of all the current columns is selected as a base, and this value is used to join equal-valued columns in the other files. In this way, the index files are traversed exactly once.

The -t tol switch specifies a tolerance value for numeric columns. At present, a tolerance value can join only two files at a time. (A completely different algorithm is required to join more than two files using a tolerance, something we might consider implementing in the future.)

The following example shows many of the features of funjoin. The input files t1.fits, t2.fits, and t3.fits contain the following columns:

  [sh] fundisp t1.fits
  AKEY  KEY    A    B
  ----  ---  ---  ---
   aaa    0    0    1
   bbb    1    3    4
   ccc    2    6    7
   ddd    3    9   10
   eee    4   12   13
   fff    5   15   16
   ggg    6   18   19
   hhh    7   21   22

  [sh] fundisp t2.fits
  AKEY  KEY    C    D
  ----  ---  ---  ---
   iii    8   24   25
   ggg    6   18   19
   eee    4   12   13
   ccc    2    6    7
   aaa    0    0    1

  [sh] fundisp t3.fits
  AKEY  KEY    E    F       G
  ----  ---  ---  ---  ------
   ggg    6   18   19  100.10
   jjj    9   27   28  200.20
   aaa    0    0    1  300.30
   ddd    3    9   10  400.40

Given these input files, the following funjoin command:

  funjoin -s -a1 "-B" -a2 "-D" -a3 "-E" -b "AKEY:???" -b1 "AKEY:XXX,A:255" \
          -b3 "G:NaN,E:-1,F:-100" -j key t1.fits t2.fits t3.fits foo.fits

will join the files on the KEY column, outputting all columns except B (in t1.fits), D (in t2.fits) and E (in t3.fits), and setting blank values for AKEY (globally, but overridden for t1.fits) and A (in file 1) and G, E, and F (in file 3). A JFILES column will be output to flag which files were used in each row:

  AKEY  KEY    A  AKEY_2  KEY_2    C  AKEY_3  KEY_3     F       G  JFILES
  ----  ---  ---  ------  -----  ---  ------  -----  ----  ------  ------
   aaa    0    0     aaa      0    0     aaa      0     1  300.30       7
   bbb    1    3     ???      0    0     ???      0  -100     nan       1
   ccc    2    6     ccc      2    6     ???      0  -100     nan       3
   ddd    3    9     ???      0    0     ddd      3    10  400.40       5
   eee    4   12     eee      4   12     ???      0  -100     nan       3
   fff    5   15     ???      0    0     ???      0  -100     nan       1
   ggg    6   18     ggg      6   18     ggg      6    19  100.10       7
   hhh    7   21     ???      0    0     ???      0  -100     nan       1
   XXX    0  255     iii      8   24     ???      0  -100     nan       2
   XXX    0  255     ???      0    0     jjj      9   28  200.20       4
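
As a quick sanity check on the JFILES bitmask above, take the ddd row:

  JFILES = 5 = binary 101  ->  bit 0 (t1.fits) and bit 2 (t3.fits) are set

which matches the fact that ddd appears in t1.fits and t3.fits but not in t2.fits. The aaa and ggg rows, present in all three inputs, get JFILES = 7 (binary 111). The bit-to-file mapping is recorded in the output header as JFILE1, JFILE2, ..., which can be inspected with a FITS header dumper such as funtools' funhead.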

SEE ALSO

       See funtools(7) for a list of Funtools help pages.

version 1.4.2                        January 2, 2008                        funjoin(1)