08-06-2010
Hi,
Why do you need to count the files in the directory? What for? Did the evil witch tell you to?
Isn't it enough to know that there are a lot, maybe too many?
You can run the command in the background:
find . -type f | wc -l &
Maybe use 'df -i' instead; it's very fast. I know it's not quite the same thing (it counts used inodes for the whole filesystem, not files in one directory).
Also, if your "find . -type f | wc -l" descends into an NFS or other network mount and the network is broken, of course you will see errors; maybe redirect the errors somewhere?
You can also look at the raw output of
"find . -type f"
If the output buffer is too small, send it to a file:
"find . -type f > new.log"
and then
wc -l new.log
or view new.log
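Putting the advice above together, a minimal sketch (file names are just examples):

```shell
# Run the slow find in the background, throw away errors from broken
# network mounts, and keep the file list for later inspection.
# Note: new.log itself gets counted if it is created inside the
# directory being scanned.
find . -type f 2>/dev/null > new.log &
wait                # wait for the background find to finish
wc -l < new.log     # the file count
```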
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
hello
I need help removing a directory. The directory is not empty; it contains
several subdirectories and files.
The total number of files in one directory is 12,24,446.
rm -rf doesn't work; it prompts for every file.
I want to delete without prompting and... (6 Replies)
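A likely cause and a sketch of two workarounds (directory name is a placeholder): rm -rf never prompts on its own, so the prompting usually means rm is aliased to 'rm -i'.

```shell
mkdir -p ./bigdir      # stand-in for the stubborn directory
\rm -rf ./bigdir       # a leading backslash bypasses any 'rm -i' alias
mkdir -p ./bigdir
find ./bigdir -delete  # alternative: find deletes depth-first, never prompts,
                       # and avoids "argument list too long" with millions of files
```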
Discussion started by: getdpg
2. UNIX for Dummies Questions & Answers
Hi,
I am relatively new to Unix and trying to understand as much as I can.
I would like to know if it's possible to count the total number of Unix accounts? If so, can the count be done from any working directory or does it have to be specific to where the accounts are based?
Thanks! (4 Replies)
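One possible approach, assuming local accounts only: each account is one line in /etc/passwd, so the count works from any working directory.

```shell
# Count local accounts; /etc/passwd has one entry per line:
wc -l < /etc/passwd
# If the box also uses NIS or LDAP, getent sees those accounts too:
getent passwd | wc -l
```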
Discussion started by: Trogan
3. Shell Programming and Scripting
What script would do that?
I want to count only the files in that directory, not including any subdirectories at all (5 Replies)
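One way to do this, as a sketch: -maxdepth 1 keeps find from descending into subdirectories, so only files directly inside the directory are counted.

```shell
# Count files in the current directory only, ignoring subdirectories:
find . -maxdepth 1 -type f | wc -l
```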
Discussion started by: finalight
4. UNIX for Advanced & Expert Users
Hello All. I am trying to do this from a terminal prompt on my mac....
I have 100 folders all named different things. Those 100 folders are inside ~/Desktop/Pictures directory.
Each of the 100 folders is uniquely named. The image files inside each folder share only some similarities.
... (1 Reply)
Discussion started by: yoyoyo777
5. UNIX for Dummies Questions & Answers
Hi,
Please let me know how to find the number of files in a directory, excluding the existing files. The existing files' format will be unknown each time.
Thanks (3 Replies)
Discussion started by: ammu
6. Shell Programming and Scripting
Hello, I'm doing a bash programming exercise where I have to get only the total size of the files in a directory, NOT the whole capacity of the directory.
I tried du -sh "$directory"* but that gives the whole capacity, and I tried the ls command but couldn't get it.
Is there any way to get only the files... (2 Replies)
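One possible sketch: sum the sizes of the regular files themselves, leaving out the space the directory entries occupy ($directory is the placeholder from the question).

```shell
# Sum per-file sizes in KB (du -k), ignoring directory overhead:
find "$directory" -type f -exec du -k {} + | awk '{s += $1} END {print s " KB"}'
```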
Discussion started by: adiegorpc
7. UNIX for Dummies Questions & Answers
Need a Unix command to save the last 20 versions of a file in a specific directory and delete everything else. Date is irrelevant. Anyone aware of such an animal?
In my test, I came up with:
ls -t1 /tmp/testfile* | tail -n +20 | xargs rm
I don’t quite trust the author though! (1 Reply)
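The distrust is warranted: tail -n +20 starts printing at line 20, so the 20th-newest file gets deleted too and only 19 survive. A corrected sketch (same path as the question; note it breaks on filenames containing spaces):

```shell
# ls -t sorts newest first; tail -n +21 emits line 21 onward, so the
# 20 newest files are kept and everything older is removed:
ls -t1 /tmp/testfile* | tail -n +21 | xargs rm -f
```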
Discussion started by: rwsherman
8. Shell Programming and Scripting
Hi all,
I'm searching all over the internet to find how to get a file count for every directory separately, not one total result for the xx directories.
For example:
-dir1
file1
file2
-subdir1
-subdir2
file1
-subdir3
file1
-dir2
-dir3
file4
... (9 Replies)
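For a layout like the one above, one possible sketch: walk every directory and, for each one, count only the files directly inside it (not in its children).

```shell
# Print "<directory>: <file count>" for every directory separately:
find . -type d | while IFS= read -r dir; do
  printf '%s: %s\n' "$dir" "$(find "$dir" -maxdepth 1 -type f | wc -l | tr -d ' ')"
done
```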
Discussion started by: CODIII
9. UNIX for Dummies Questions & Answers
Hello. Is there a way to calculate how many times a particular symbol appears in a string before a particular word?
Desktop/Myfiles/pet/dog/puppy
So, I want to count the number of occurrences of "/" in this path before the word "dog", let's say.
Cheers,
Bob (3 Replies)
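A sketch using only POSIX shell parameter expansion: strip everything from the first "dog" onward, then count the slashes left in the prefix.

```shell
path="Desktop/Myfiles/pet/dog/puppy"
prefix=${path%%dog*}                 # -> "Desktop/Myfiles/pet/"
printf '%s' "$prefix" | tr -cd '/' | wc -c   # prints 3
```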
Discussion started by: FUTURE_EINSTEIN
10. Shell Programming and Scripting
Hi experts,
Is there any faster way to calculate recursive directory sizes? I have 600 directories containing 100000 files in total, and about 10 directories with approximately 9000000 - 10000000 files each. I'm currently using "du -k --max-depth=0" to get the sizes, but it is very slow; it takes 24 hours... (9 Replies)
Discussion started by: rufino
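du at this scale is mostly stat() calls, so one possible speedup (a sketch, assuming GNU or BSD find/xargs) is to run one du per top-level directory in parallel; -P 4 here is an arbitrary degree of parallelism.

```shell
# Size each top-level directory, 4 du processes at a time:
find . -mindepth 1 -maxdepth 1 -type d -print0 |
  xargs -0 -n 1 -P 4 du -sk
```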
LEARN ABOUT DEBIAN
clfmerge
clfmerge(1) logtools clfmerge(1)
NAME
clfmerge - merge Common-Log Format web logs based on time-stamps
SYNOPSIS
clfmerge [--help | -h] [-b size] [-d] [file names]
DESCRIPTION
The clfmerge program is designed to avoid using sort to merge multiple web log files. Web logs for big sites consist of multiple files in
the >100M size range from a number of machines. For such files it is not practical to use a program such as gnusort to merge the files
because the data is not always entirely in order (so the merge option of gnusort doesn't work so well), but it is not in random order (so
doing a complete sort would be a waste). Also the date field that is being sorted on is not particularly easy to specify for gnusort (I
have seen it done but it was messy).
This program is designed to simply and quickly sort multiple large log files with no need for temporary storage space or overly large buffers in memory (the memory footprint is generally only a few megs).
OVERVIEW
It will take a number (from 0 to n) of file-names on the command line; it will open them for reading and read CLF format web log data from them all. Lines which don't appear to be in CLF format (NB they aren't parsed fully, only minimal parsing to determine the date is performed) will be rejected and displayed on standard-error.
If zero files are specified then there will be no error; it will just silently output nothing. This is for scripts which use the find command to find log files and which can't be counted on to find any log files; it saves doing an extra check in your shell scripts.
If one file is specified then the data will be read into a 1000 line buffer and it will be removed from the buffer (and displayed on standard output) in date order. This is to handle the case of web servers which date entries on the connection time but write them to the log at completion time and thus generate log files that aren't in order (Netscape web server does this - I haven't checked what other web servers do).
If more than one file is specified then a line will be read from each file, the file that had the earliest time stamp will be read from
until it returns a time stamp later than one of the other files. Then the file with the earlier time stamp will be read. With multiple
files the buffer size is 1000 lines or 100 * the number of files (whichever is larger). When the buffer becomes full the first line will
be removed and displayed on standard output.
OPTIONS
-b buffer-size
Specify the buffer-size to use, if 0 is specified then it means to disable the sliding-window sorting of the data which improves the
speed.
-d Set domain-name mangling to on. This means that if a line starts with the name of the site that was requested, that name will be removed from the start of the line and the GET / will be changed to GET http://www.company.com/, which allows programs like Webalizer to produce good graphs for large hosting sites. It will also make the domain name lower case.
EXIT STATUS
0 No errors
1 Bad parameters
2 Can't open one of the specified files
3 Can't write to output
AUTHOR
This program, its manual page, and the Debian package were written by Russell Coker <russell@coker.com.au>.
SEE ALSO
clfsplit(1), clfdomainsplit(1)
Russell Coker <russell@coker.com.au> 0.06 clfmerge(1)