Shell Programming and Scripting: Multi thread awk command for faster performance
Post 302634275 by chetan.c on Thursday 3rd of May 2012 07:04:37 AM
Hi,

The script I'm running is executed simultaneously for multiple folders.
So is there a chance of the data overlapping because of these multiple processes?

I'm seeing overlapping of data here.
Am I doing something wrong?

Thanks,
Chetan.C

---------- Post updated at 06:04 AM ---------- Previous update was at 04:56 AM ----------

This is the code

Code:
 
#!/bin/bash

# Print each *.txt file's name followed by the tag-delimited blocks it contains.
function fast {
    cd "$1" || return       # quote the directory in case the path contains spaces
    awk -v ORS="" 'FNR==1 { printf("\n%s\n", FILENAME) }; /<.*/ , /.*<\/.*>/' *.txt
}

# One background job per TLT* directory, then wait for all of them to finish.
for dir in $(find /opt/app/idss/data01/cc002h/computes/test_scripts/Test_files/ -type d -name "TLT*"); do
    fast "$dir" &
done
wait


Last edited by chetan.c; 05-03-2012 at 10:05 AM.. Reason: Missed "&" while pasting the code.
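
For reference, here is a minimal sketch (not from the original post) of one common way to avoid interleaved output: when several background jobs share one stdout, their writes can mix, so each job writes to its own temporary file and everything is concatenated after wait. The mktemp scratch directory and the part.* file names are illustrative assumptions.

Code:
 
#!/bin/bash

# Same awk body as the script above, kept here so the sketch is self-contained.
fast() {
    cd "$1" || return
    awk -v ORS="" 'FNR==1 { printf("\n%s\n", FILENAME) }; /<.*/ , /.*<\/.*>/' *.txt
}

outdir=$(mktemp -d)    # hypothetical scratch directory for the per-job files
i=0
for dir in $(find /opt/app/idss/data01/cc002h/computes/test_scripts/Test_files/ -type d -name "TLT*"); do
    i=$((i + 1))
    fast "$dir" > "$outdir/part.$i" &    # one output file per directory, no mixing
done
wait
cat "$outdir"/part.*                     # merged output, one intact block per directory
rm -rf "$outdir"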
 

9 More Discussions You Might Find Interesting

1. Programming

Multi threading using posix thread library

Hi all, can anyone tell me some good sites for multithreading tutorials, their applications, and some code examples? -sushil (2 Replies)
Discussion started by: shushilmore

2. UNIX for Dummies Questions & Answers

Which command will be faster? y?

i) wc -c /etc/passwd | awk '{print $1}'  ii) ls -al /etc/passwd | awk '{print $5}' (4 Replies)
Discussion started by: karthi_g
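A sketch of two alternatives that skip the awk step altogether and print only the byte count of /etc/passwd (the stat -c syntax assumes GNU coreutils):

Code:
 
wc -c < /etc/passwd      # byte count only, no pipeline needed
stat -c %s /etc/passwd   # GNU coreutils; prints the size in bytes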

3. Programming

Multi thread data sharing problem in uclinux

Hello, I have written a multi-threaded application to run under uClinux. The problem is that the threads do not share data. Using the ps command, it shows a separate process for each thread. I tested the application under Ubuntu 8.04 and openSUSE 10.3 with a 2.6 kernel and there were no problems, and also... (8 Replies)
Discussion started by: mrhosseini

4. Shell Programming and Scripting

Multi thread shell programming

I have a Unix directory where a million small text files accumulate every week. As of now there is a shell batch program in place which merges all the files in this directory into a single file and FTPs it to another system. Previously the volume of files was around 1 lakh (100,000)... (2 Replies)
Discussion started by: vk39221

5. Shell Programming and Scripting

Faster way to use this awk command

awk "/May 23, 2012 /,0" /var/tmp/datafile The above command pulls information out of the datafile, from the specified date to the end of the file. Now, how can I make this faster if the datafile is huge? Even if it wasn't huge, I feel there's a better/faster way to... (8 Replies)
Discussion started by: SkySmart
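One commonly suggested speed-up for a from-this-line-to-end-of-file extraction is to locate the first match once and let tail stream the rest; a hedged sketch (assumes grep's -m option is available and uses the /var/tmp/datafile path from the post):

Code:
 
# find the line number of the first match, then stream from there to EOF
start=$(grep -n -m 1 'May 23, 2012 ' /var/tmp/datafile | cut -d: -f1)
[ -n "$start" ] && tail -n +"$start" /var/tmp/datafile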

6. Shell Programming and Scripting

Making a faster alternative to a slow awk command

Hi, I have a large number of input files with two columns of numbers. For example: 83 1453 99 3255 99 8482 99 7372 83 175 I only wish to retain lines where the numbers fulfil two requirements, e.g. column 1 = 83 and 1000 <= column 2 <= 2000. To do this I use the following... (10 Replies)
Discussion started by: s052866
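A sketch of the filter as described, assuming field 1 must equal 83 and field 2 must lie between 1000 and 2000 (input.txt is a placeholder file name):

Code:
 
awk '$1 == 83 && $2 >= 1000 && $2 <= 2000' input.txt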

7. Shell Programming and Scripting

How to substract selective values in multi row, multi column file (using awk or sed?)

Hi, I have a problem where I need to make this input: nameRow1a,text1a,text2a,floatValue1a,FloatValue2a,...,floatValue140a nameRow1b,text1b,text2b,floatValue1b,FloatValue2b,...,floatValue140b look like this output: nameRow1a,text1b,text2a,(floatValue1a - floatValue1b),(floatValue2a -... (4 Replies)
Discussion started by: nricardo

8. Shell Programming and Scripting

How to make awk command faster?

I have the command below, which reads a large file and takes 3 hours to run. Can something be done to make this command faster? awk -F ',' '{OFS=","}{ if ($13 == "9999") print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12 }' ${NLAP_TEMP}/hist1.out | sort -T ${NLAP_TEMP} | uniq > ... (13 Replies)
Discussion started by: Peu Mukherjee
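One frequent suggestion for commands like this is to deduplicate inside awk so the external sort | uniq pass disappears; a sketch under the assumption that sorted output is not required (unique lines are held in memory by the awk array):

Code:
 
awk -F',' -v OFS=',' '$13 == "9999" {
    line = $1 OFS $2 OFS $3 OFS $4 OFS $5 OFS $6 OFS $7 OFS $8 OFS $9 OFS $10 OFS $11 OFS $12
    if (!(line in seen)) { seen[line] = 1; print line }   # print each unique line once
}' "${NLAP_TEMP}/hist1.out"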

9. Shell Programming and Scripting

How to make awk command faster for large amount of data?

I have nginx web server logs with all requests that were made and I'm filtering them by date and time. Each line has the following structure: 127.0.0.1 - xyz.com GET 123.ts HTTP/1.1 (200) 0.000 s 3182 CoreMedia/1.0.0.15F79 (iPhone; U; CPU OS 11_4 like Mac OS X; pt_br) These text files are... (21 Replies)
Discussion started by: brenoasrm