sed parser behaving strangely on replacing multiple words in multiple files
Post 303008986 by RudiC, Friday 8th of December 2017, 05:02 AM
Without digging too deep, I can see that
- your single quoting of the sed commands is consistently wrong - the entire sed command (the whole script) needs to be quoted; see the sketch below this list.
- the redirection into the to-be-modified file truncates it before anything is read from it, so I'm surprised that only some of the files end up completely blank rather than all of them (also illustrated below).
- running 7 seds (i.e. creating 7 processes) for each of the 4000 files is resource-hungry and may become somewhat slow.
- in a for loop running from 0 to 4000 with i as the loop variable, why do you use a second variable j?
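
To illustrate the first two points, a minimal sketch (the file name and substitution are made up, not taken from your script):
Code:
# Quoting only part of the sed script: the shell splits it into several words
# and sed fails with an "unterminated `s' command" error.
sed s/old text/new text/g file.txt               # broken
sed 's/old text/new text/g' file.txt             # whole script quoted - OK

# Redirecting into the input file truncates it before sed ever reads it,
# so the result is an empty file:
sed 's/old/new/g' file.txt > file.txt            # file.txt ends up empty
# Write to a temporary file instead (or use GNU sed's -i option):
sed 's/old/new/g' file.txt > file.txt.tmp && mv file.txt.tmp file.txt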

How about
Code:
for FN in clus*phy; do sed '/aver\|anid\|anig\|acar\|azon\|awen\|akaw/ s/_[^ ]*//' "$FN" > "$FN".tmp; done
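
The \| alternation is a GNU sed extension; if your sed supports extended regular expressions (-E in GNU and BSD sed), an equivalent form would be:
Code:
sed -E '/aver|anid|anig|acar|azon|awen|akaw/ s/_[^ ]*//' "$FN" > "$FN".tmp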

Move the .tmp files back over the original ones when happy.
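
If your seven seds perform different substitutions, they can still be combined into a single sed call per file by chaining -e expressions; the patterns below are placeholders, not your actual ones:
Code:
for FN in clus*phy
do
    # one sed process per file, several editing commands
    sed -e 's/old1/new1/' -e 's/old2/new2/' -e 's/old3/new3/' "$FN" > "$FN".tmp
done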

Last edited by RudiC; 12-08-2017 at 06:28 AM..
