Hello, I have a large number of files that I want to concatenate into one. These files start with the prefix 'VOICE_', for example:
VOICE_0000000000
VOICE_1223o23u0
VOICE_934934927349
I use the following code:
and I get an error that the argument list is too long!
How can I overcome this?
Best Regards,
Christos
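A common cause of that error is the kernel's limit on the length of a single command line (ARG_MAX), which an expansion like `cat VOICE_* > all` can exceed once there are enough files. A minimal, self-contained sketch of one workaround, using find and xargs to batch the names (the directory and output names here are illustrative, and `sort -z` is a GNU extension):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Create a few sample VOICE_ files to stand in for the real ones.
for i in 1 2 3; do
    printf 'data %s\n' "$i" > "VOICE_$i"
done

# -print0 / -0 keeps odd file names intact; xargs splits the names into
# as many cat invocations as needed, so no single command line exceeds
# ARG_MAX. Truncate first and append (>>) so a multi-batch run still
# produces one complete file.
: > combined.out
find . -maxdepth 1 -name 'VOICE_*' -print0 | sort -z | xargs -0 cat >> combined.out

wc -l combined.out
```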
I have a task to move more than 35,000 files every two hours from one directory to another, based on a file that contains the list of filenames.
I tried the following approaches:
(1)
find . -name \*.dat > list
for i in `cat list`; do mv "$i" test/; done
(2)
cat list | xargs -i mv "{}"... (7 Replies)
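Both attempts can fail once the expanded list exceeds the command-line limit, and the backtick loop also breaks on names containing spaces. A self-contained sketch of one alternative, letting xargs batch the mv calls (`mv -t` and `xargs -d` are GNU extensions; all names here are illustrative):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir test
for i in 1 2 3; do touch "file$i.dat"; done

# Build the list, then let xargs batch the mv invocations so 35,000
# names never land on one command line. mv -t takes the destination
# first, which is what xargs needs when it appends arguments at the end.
# -d '\n' treats each full line as one name, so spaces survive.
find . -maxdepth 1 -name '*.dat' > list
xargs -d '\n' mv -t test/ < list

ls test
```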
I have a large filesystem on an AIX server and another one on a Red Hat box. I have synced the two filesystems using rsync.
What I'm looking for is a script that would compare the two filesystems to make sure the bits match up and the number of files match.
It's around 2.8 million... (5 Replies)
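One way to compare the two trees is to checksum every file relative to each root and diff the sorted listings; the line counts of the listings then double as the file counts. A small self-contained sketch, assuming md5sum is available (on AIX, csum or openssl md5 could stand in):

```shell
set -e
a=$(mktemp -d); b=$(mktemp -d)
echo hello > "$a/f1"
echo world > "$a/f2"
cp "$a/f1" "$b/f1"
cp "$a/f2" "$b/f2"

# Checksum every file relative to each root, sort, and diff the listings.
# For millions of files this is I/O bound but needs no storage beyond
# the two listing files.
sums_a=$(mktemp); sums_b=$(mktemp)
( cd "$a" && find . -type f -exec md5sum {} + | sort ) > "$sums_a"
( cd "$b" && find . -type f -exec md5sum {} + | sort ) > "$sums_b"

if diff -q "$sums_a" "$sums_b" > /dev/null; then
    echo "trees match: same paths, same contents"
else
    diff "$sums_a" "$sums_b"
fi
```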
Hi
I am new to shell scripting. I want to create a batch file which creates a desired number of files with a specific size, say 1 MB each, to consume space. How can I go about it using a for loop or any other loop in a shell script?
Thanks (3 Replies)
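A minimal sketch of one way to do it, with a while loop and dd (the file names, count, and size here are illustrative):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

count=5        # how many files to create
size_mb=1      # size of each file in MB

# dd copies size_mb blocks of 1 MiB (1048576 bytes) from /dev/zero
# into each filler file; stderr is silenced to hide the transfer stats.
i=1
while [ "$i" -le "$count" ]; do
    dd if=/dev/zero of="filler_$i.dat" bs=1048576 count="$size_mb" 2> /dev/null
    i=$((i + 1))
done

ls -l
```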
Hi,
I have more than 1000 data files (.txt), like this.
First file:
178.83 554.545
179.21 80.392
Second file:
178.83 990.909
179.21 90.196
etc.
I want to combine them into the following format:
178.83,554.545,990.909,...
179.21,80.392,90.196,... (7 Replies)
I want to tar a large number of files, about 150k.
I am using the find command as below to create a file with all the file names,
and then trying to use the tar -I command as below.
# find . -type f -name "gpi*" > include-file
# tar -I include-file -cvf newfile.tar
This i got from one of the posts... (2 Replies)
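With GNU tar, the usual way to feed a 150k-name list is -T (--files-from), which reads member names from a file so they never touch the command line; note that in GNU tar, -I selects a compression program instead (-I is a list-file option on some other tars, such as AIX's). A self-contained sketch:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
touch gpi_one gpi_two other

# find writes the member names to include-file; tar -T reads them back,
# so the archive can hold 150k members without a giant argument list.
find . -type f -name 'gpi*' > include-file
tar -cf newfile.tar -T include-file

tar -tf newfile.tar
```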
Hi. I need to delete a large number of files listed in a txt file. There are over 90,000 files in the list. Some of the directory names and some of the file names have spaces in them.
In the file, each line is a full path to a file:
/path/to/the files/file1
/path/to/some other/files/file 2... (4 Replies)
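One approach that survives the spaces is to read the list one full line at a time and quote the path when deleting. A self-contained sketch with illustrative paths:

```shell
set -e
dir=$(mktemp -d)
mkdir -p "$dir/the files" "$dir/some other/files"
touch "$dir/the files/file1" "$dir/some other/files/file 2"
printf '%s\n' "$dir/the files/file1" "$dir/some other/files/file 2" > "$dir/list.txt"

# IFS= and -r keep leading blanks and backslashes intact; quoting "$f"
# passes each full path, spaces and all, as a single argument to rm.
# -- stops rm from treating a path starting with - as an option.
while IFS= read -r f; do
    rm -- "$f"
done < "$dir/list.txt"

ls -R "$dir"
```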
Hi All,
I have searched this forum for related posts but could not find one that fits my case. I have a shell script which removes all the XML tags, including the text inside the tags, from some 4 million XML files.
The shell script looks like this (MODIFIED):
find . -name "*.xml" -print | while read... (6 Replies)
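For millions of files, batching with `find -print0 | xargs -0` keeps the process count far below one-sed-per-file and survives odd file names. A self-contained sketch using GNU sed -i; note this expression only strips the markup itself, not the element contents, so the pattern would need adapting to match the poster's script, and a real XML tool (e.g. xmlstarlet) is safer for anything beyond flat markup:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
printf '<a>keep</a>\n' > one.xml

# One sed process handles a whole batch of files in place; across
# 4 million files that is vastly cheaper than a per-file loop.
find . -type f -name '*.xml' -print0 |
    xargs -0 sed -i 's/<[^>]*>//g'

cat one.xml
```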
Hi,
I have a large number of subdirectories (>200), and in each of these directories there is a file with a name like "opp1234.dat".
I'd like to know how I can change the names of these files to, say, "out.dat" in all these subdirectories in one go.
Thanks! (5 Replies)
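A minimal sketch of one way: loop over the subdirectories and rename the single matching file in each (directory and file names here are illustrative):

```shell
set -e
base=$(mktemp -d)
cd "$base"
mkdir d1 d2
touch d1/opp1234.dat d2/opp9876.dat

# */ globs only directories; inside each, the opp*.dat glob matches the
# one data file it contains. The -e guard skips directories where the
# glob matched nothing (and was left unexpanded).
for d in */; do
    for f in "$d"opp*.dat; do
        if [ -e "$f" ]; then
            mv "$f" "${d}out.dat"
        fi
    done
done

ls d1 d2
```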
I want to sftp a large number of files; approx. 150 files will arrive on the server every minute (AIX box).
I also need to make sure each file has been sftped successfully...
Please let me know :
1. What is the best/fastest way to transfer the files?
2. Should I use the batch option -b so that connectivity will be... (3 Replies)
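On the -b question: `sftp -b` runs a prepared batch non-interactively and exits non-zero if any command in the batch fails, which doubles as the success check asked about. A sketch that only builds the batch file; the user, host, and remote path are placeholders, key-based login is assumed, and the actual transfer line is left commented out:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Batch file: one sftp command per line, executed in order.
cat > batch.sftp <<'EOF'
cd /remote/incoming
put VOICE_*.dat
bye
EOF

# Uncomment with a real host; the exit status tells you whether every
# command in the batch succeeded:
# if sftp -b batch.sftp user@remotehost; then
#     echo "transfer OK"
# else
#     echo "transfer failed" >&2
# fi

cat batch.sftp
```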
Hi All,
I am having a situation now where I need to delete a huge number of temp files created during run time, approx. 16,700+ files. We never imagined that we would get this big a list of files during run time. It worked fine for a smaller number of files in the list. But when the list is huge we are... (7 Replies)
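One approach that behaves the same for 3 files or 16,700 is to let find match and delete the files itself, so no argument list is ever built. A self-contained sketch (the tmp_ pattern is illustrative; -delete is a GNU find extension, so on AIX `-exec rm {} +` is the equivalent):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
for i in 1 2 3; do touch "tmp_$i"; done
touch keep.log

# find removes each match as it is found; nothing is ever expanded onto
# a command line, so the file count cannot overflow ARG_MAX.
find . -maxdepth 1 -type f -name 'tmp_*' -delete

ls
```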
Discussion started by: mad man
7 Replies
LEARN ABOUT DEBIAN
clfmerge
clfmerge(1) logtools clfmerge(1)

NAME
clfmerge - merge Common-Log Format web logs based on time-stamps
SYNOPSIS
clfmerge [--help | -h] [-b size] [-d] [file names]
DESCRIPTION
The clfmerge program is designed to avoid using sort to merge multiple web log files. Web logs for big sites consist of multiple files in
the >100M size range from a number of machines. For such files it is not practical to use a program such as gnusort to merge the files
because the data is not always entirely in order (so the merge option of gnusort doesn't work so well), but it is not in random order (so
doing a complete sort would be a waste). Also the date field that is being sorted on is not particularly easy to specify for gnusort (I
have seen it done but it was messy).
This program is designed to simply and quickly sort multiple large log files with no need for temporary storage space or overly large
buffers in memory (the memory footprint is generally only a few megs).
OVERVIEW
It will take a number (from 0 to n) of file-names on the command line, it will open them for reading and read CLF format web log data from
them all. Lines which don't appear to be in CLF format (NB they aren't parsed fully, only minimal parsing to determine the date is
performed) will be rejected and displayed on standard error.
If zero files are specified then there will be no error, it will just silently output nothing; this is for scripts which use the find
command to find log files and which can't be counted on to find any log files, as it saves doing an extra check in your shell scripts.
If one file is specified then the data will be read into a 1000 line buffer and it will be removed from the buffer (and displayed on
standard output) in date order. This is to handle the case of web servers which date entries on the connection time but write them to the log
at completion time and thus generate log files that aren't in order (Netscape web server does this - I haven't checked what other web
servers do).
If more than one file is specified then a line will be read from each file, the file that had the earliest time stamp will be read from
until it returns a time stamp later than one of the other files. Then the file with the earlier time stamp will be read. With multiple
files the buffer size is 1000 lines or 100 * the number of files (whichever is larger). When the buffer becomes full the first line will
be removed and displayed on standard output.
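As implied by the synopsis above, a typical invocation might look like the following (file names are illustrative; clfmerge ships in the Debian logtools package):

```shell
# Merge several large CLF logs by timestamp with a 2000-line buffer;
# lines that don't parse as CLF go to standard error.
clfmerge -b 2000 access1.log access2.log access3.log > merged.log
```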
OPTIONS
-b buffer-size
Specify the buffer-size to use; if 0 is specified then it means to disable the sliding-window sorting of the data, which improves the
speed.
-d Set domain-name mangling to on. This means that if a line starts with the name of the site that was requested then that would be
removed from the start of the line and the GET / would be changed to GET http://www.company.com/ which allows programs like
Webalizer to produce good graphs for large hosting sites. Also it will make the domain name lower case.
EXIT STATUS
0 No errors
1 Bad parameters
2 Can't open one of the specified files
3 Can't write to output
AUTHOR
This program, its manual page, and the Debian package were written by Russell Coker <russell@coker.com.au>.
SEE ALSO
clfsplit(1), clfdomainsplit(1)

Russell Coker <russell@coker.com.au> 0.06 clfmerge(1)