Easily solved with -d '\n' for the most part.

It does have -d, though. Oh, goodie -- 300 more miles of rope to hang ourselves with.

I think you missed my point: xargs already knows the maximum size of args for the system and splits accordingly, so the -n999 is redundant.
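A minimal sketch of the -d '\n' approach (GNU xargs; the demo directory is made up for illustration):

```shell
# GNU xargs: -d '\n' makes each input line exactly one argument, so
# spaces in names survive, and xargs sizes its batches to the system's
# ARG_MAX on its own -- no -n needed.
mkdir -p /tmp/dn_demo
touch /tmp/dn_demo/plain.txt '/tmp/dn_demo/with space.txt'
find /tmp/dn_demo -type f -print | xargs -d '\n' ls -ld
```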
Shoving too many args into backticks and a for loop doesn't make too many args stop being too many args. You have to do: while read FILENAME ; do stuff ; done
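Spelled out, with a made-up demo directory (this streams one name at a time, so no argument list is ever built):

```shell
# Demo setup (hypothetical path)
mkdir -p /tmp/wr_demo
touch '/tmp/wr_demo/a file.txt'

# Read one name per line: IFS= keeps surrounding blanks, -r stops
# backslash mangling. Names containing a newline still break this;
# for those you need find -print0 with read -r -d ''.
find /tmp/wr_demo -type f -print | while IFS= read -r FILENAME; do
    printf 'processing: %s\n' "$FILENAME"
done
```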
Where is the -d option documented? And do all systems have it?
Yep, xargs must know the max args.
We don't need xargs, because shell internals are enough.
I can do all the processing in a for loop, but we use a while loop when a control expression is required.
The problem with this is that there is no pipeline parallelism -- nothing loops until all of the find is done and the last directory has been searched.
so I prefer this:
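The preferred streaming shape would look something like this (a sketch -- the demo directory is an assumption, not the poster's code). find emits names while xargs consumes them, so work overlaps the directory walk instead of waiting for it:

```shell
# Demo setup (hypothetical path)
mkdir -p /tmp/pp_demo
touch /tmp/pp_demo/a.log /tmp/pp_demo/b.log

# -print0/-0 keeps arbitrary file names intact; xargs launches ls in
# batches as names arrive, rather than after the whole tree is walked.
find /tmp/pp_demo -type f -print0 | xargs -0 ls -ld
```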
I use xargs -n999 because in old xargs implementations it was the only way to prevent 'dry' calls. Old xargs had a relatively short string buffer and assembled a command line to fit, so the 999 was over the top. I wrote my own xargs, fxargs2, with I/O overlap, where every line is always an argument; it does N args (the argv length minus argc minus 1) or M bytes (the input buffer, with line feeds turned into nulls), whichever runs out first, using static allocation for low overhead.
Is there any system limit on args? If you deliver them by exec*(), not system() (which is a flat-string shell call), I am not sure there is any limit (and you avoid all quoting issues). Some commands, like ls, have their own arg limits, so it is not just a system thing.
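There is in fact a kernel limit even for exec*(): ARG_MAX bounds the combined size of argv plus the environment. You can ask the system for it:

```shell
# POSIX way to query the exec() argument-space limit, in bytes
# (POSIX guarantees at least 4096):
getconf ARG_MAX

# GNU xargs can also report the command-line size it will build
# (GNU-only option; writes to stderr):
xargs --show-limits </dev/null 2>&1 | head -n 5
```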
It's a GNU extension rather than a POSIX option -- see man xargs. I didn't need it in my example.
Quote:
We don't need xargs, because shell internals are enough.
It's not. Put my infinite print statement into backticks and it'd die from too many arguments. If there were no argument limit, it'd never do anything -- it would just sit there forever waiting for the input to finish, consuming boundless memory until the system killed it.

Storing everything in one giant variable is fundamentally inefficient. It wastes memory storing things you don't need to store, and wastes time waiting for input to finish -- if indeed it ever finishes -- time you could have been using to process what you already have.
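A quick way to see both failure modes side by side (the sizes here are illustrative):

```shell
# Streaming through xargs: the two million words are split into
# ARG_MAX-sized batches and every /bin/echo call succeeds.
seq 1 2000000 | xargs /bin/echo > /dev/null && echo "xargs: ok"

# The backtick/command-substitution version builds one giant argv:
#   /bin/echo $(seq 1 2000000)
# On most systems that dies with "Argument list too long" -- the
# /bin path matters, since a builtin echo never calls exec().
```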
Yes, that is why fxargs2 tries to detect when it would block and spin off what it has, whereas xargs probably persists until it fills the memory or hits a max or EOF.
That's a good idea.
xargs wouldn't fill memory on any system with a sane argument limit though.
Thank you everyone! I got my result!
I edited my existing script. I created two scripts: the first checks the list of big directories, and the second checks the list of big files. Below are the scripts!
1st Script ---- to check big directories
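The script itself didn't survive the quoting; a hypothetical minimal version of "check big directories" would look like this (the start path and top-N count are assumptions, not the poster's values):

```shell
#!/bin/sh
# Hypothetical sketch, not the poster's original script: report the
# 10 largest directories (cumulative size in KB, biggest first)
# under the current directory.
du -k . 2>/dev/null | sort -rn | head -n 10
```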
2nd script --- to check list of 200 big files.
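Likewise, a hypothetical sketch of "the 200 biggest files" (the start path is an assumption):

```shell
#!/bin/sh
# Hypothetical sketch: the 200 largest regular files under the
# current directory, biggest first, sizes in KB. -exec du -k {} +
# batches names, so huge trees don't overflow the argument list.
find . -type f -exec du -k {} + 2>/dev/null | sort -rn | head -n 200
```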
Last edited by Scott; 04-19-2011 at 05:15 PM..
Reason: Code tags for quote tags. Please, less formatting (fonts, colours, etc...)
Magic number 101 -- captures 99% of any economy of scale:
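Presumably a batch size like this is meant -- with roughly a hundred names per exec, the per-call overhead is amortized to under 1% per file, so larger batches buy almost nothing more (a sketch; the command being batched is an assumption):

```shell
# Each ls -ld invocation handles up to 101 names; overhead per file
# is ~1/101 of one exec, which is the "99% of the economy" point.
find . -type f -print | xargs -n 101 ls -ld
```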
The ls option -a does nothing on explicit file names. Nicer finds have an internal -ls option (which also prints dev and inode, so the sort key offset changes).
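With find's built-in -ls the size lands in the seventh field (after inode, blocks, mode, links, owner, and group), so the sort key shifts accordingly:

```shell
# find -ls prints: inode blocks mode links owner group size date name,
# so sort numerically on field 7 to rank by size, biggest first.
find . -type f -ls | sort -k7,7 -rn | head -n 200
```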