Hi,
I have a file name, for which I want to strip out the first bit and leave the rest...
So I want to take the file name .lockfile-filename.10001, strip it, and have only filename.10001 ...
Thanking you all in advance,
Zak (6 Replies)
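A common approach (assuming the prefix is always the literal string `.lockfile-`) is shell parameter expansion, with sed as an alternative:

```shell
f=".lockfile-filename.10001"

# Parameter expansion: strip the shortest leading match of ".lockfile-"
echo "${f#.lockfile-}"                # filename.10001

# Equivalent with sed (the dot must be escaped)
echo "$f" | sed 's/^\.lockfile-//'    # filename.10001
```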
I need to get a section of a file based on 2 params. I want the part of the file between param 1 & 2. I have tried a bunch of ways and just can't seem to get it right. Can someone please help me out... it's much appreciated. Here is what I have found that looks like what I want... but doesn't... (12 Replies)
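One way to do this (assuming the two params are marker lines that delimit the section, and that the markers themselves should be excluded) is a flag variable in awk; START and END here are placeholder marker values:

```shell
# Print only the lines strictly between the START and END marker lines
awk -v a="START" -v b="END" '$0 == b { f = 0 } f { print } $0 == a { f = 1 }' file
```

Note that the often-suggested `sed -n '/START/,/END/p' file` prints the marker lines too, which may be why other attempts looked "almost right".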
Hi,
I have tried many times to add the string into the first line of the file or the middle of the file but could not find the solution.
I first tried by
$echo "paki" >> file
This code only appends the string "paki" to the end of the file "file", but how can I add "paki" as the first line or... (5 Replies)
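`>>` always appends; to prepend, the file has to be rewritten. With GNU sed, `1i` inserts before the first line in place, and a temporary file works everywhere (file is the file name from the example):

```shell
# GNU sed: insert "paki" before line 1, editing the file in place
sed -i '1i paki' file

# Portable alternative: prepend via a temporary file
{ echo "paki"; cat file; } > file.tmp && mv file.tmp file

# For the middle of the file, insert before line N, e.g. line 3
sed -i '3i paki' file
```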
Hello ,
I have a huge file with the content below. I need to read the numeric values within the parentheses after the = sign. Please help me with an awk or sed script for it.
11.10.2009 04:02:47 Customer login not found: identifier=(0748502889) prefix=(TEL) serviceCode=().
11.10.2009 04:03:12... (13 Replies)
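If the goal is the number inside the parentheses after `identifier=` (as in the sample lines), sed can capture it; adjust the field name if a different value is wanted:

```shell
# Extract the digits between "identifier=(" and ")" on each matching line
sed -n 's/.*identifier=(\([0-9][0-9]*\)).*/\1/p' logfile

# awk alternative: split on parentheses, so the first (...) value is field 2
awk -F'[()]' '/identifier=/ { print $2 }' logfile
```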
Hello
How do I check that the correct input files are used while using awk and sed for file manipulation?
e.g.
awk '/bin/ {print $0 }' shell.txt
sed 's/hp/samsung/' printers.txt
How do I ensure that the correct input files I am working with are used? (5 Replies)
Is there an awk, sed, vi or any line command that adds Field Separators (default spaces) to each line in a file?
$cat RegionalData
12FC2525MZLP8266900216
12FC2525MZLP8266900216
12FC2525NBLP8276900216
12FC2525NBLP8276900216
Desired results:
1 2 F C 2525 MZ LP 826 690 02 16
1 2 F C... (2 Replies)
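Since sed back-references stop at \9 and eleven fields are needed here, awk's substr is a better fit; the field widths (1,1,1,1,4,2,2,3,3,2,2) are inferred from the single sample shown and may need adjusting:

```shell
# Split each 22-character line into fixed-width fields separated by spaces
awk '{
    print substr($0,1,1),  substr($0,2,1),  substr($0,3,1),  substr($0,4,1),
          substr($0,5,4),  substr($0,9,2),  substr($0,11,2), substr($0,13,3),
          substr($0,16,3), substr($0,19,2), substr($0,21,2)
}' RegionalData
```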
Ok, so I have a bash script with an embedded expect statement.
Inside of the expect statement, i'm trying to pull all of the non-comment lines from the /etc/oratab file one at a time.
Here's my command:
cat /etc/oratab |sed /^s*#/d\ | awk 'NR==1'|awk -F: '{print \"$1\"}'|. oraenv
Now,... (0 Replies)
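For what it's worth, a pipeline can't usefully end in `. oraenv`: sourcing inside a pipeline happens in a subshell, and oraenv reads the ORACLE_SID variable rather than stdin. A sketch of the usual pattern, assuming Oracle's standard oraenv with its ORAENV_ASK variable:

```shell
# First field (the SID) of the first non-comment, non-blank line of /etc/oratab
ORACLE_SID=$(awk -F: '!/^[[:space:]]*#/ && NF { print $1; exit }' /etc/oratab)
export ORACLE_SID

# oraenv picks up ORACLE_SID non-interactively when ORAENV_ASK=NO
ORAENV_ASK=NO
export ORAENV_ASK
. oraenv
```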
Hi All, need a suggestion: I want to sort a file using awk & sed to get the required output below, such that each LUN shows the correct WWPN and FA port numbers:
Required output:
01FB 10000000c97843a2 8C 0
01FB 10000000c96fb279 9C 0
22AF 10000000c97843a2 8C 0
22AF 10000000c975adbd ... (10 Replies)
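Without seeing the input format it's hard to be specific, but once each line holds LUN, WWPN, FA port, and number, plain sort alone produces the grouping shown (no awk/sed needed for the ordering itself):

```shell
# Group lines by LUN (field 1), then by WWPN (field 2)
sort -k1,1 -k2,2 lunfile
```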
Hello,
Beginning with shell scripting, I'm trying to find, in a csv file, the lines where the hostname field is displayed as an FQDN instead of the short hostname (some lines are correct), and then to correct that inside the file:
Novell,11.0,UNIX Server,bscpsiws02,TxffnX1tX1HiDoyBerrzWA==... (2 Replies)
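Assuming the hostname is the 4th comma-separated field (as in the sample line) and that "correcting" means dropping everything from the first dot onward, awk can rewrite the file; servers.csv is a placeholder name:

```shell
# Trim field 4 to the short hostname (everything before the first dot)
awk -F, 'BEGIN { OFS = "," } { sub(/\..*$/, "", $4); print }' servers.csv > servers.fixed.csv
```

Lines that already hold a short hostname pass through unchanged, since sub() finds no dot in field 4.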
Guys, I have a variable in a script that I want to transform into something else; I'm hoping you can help. It doesn't have to use sed/awk, but I figured those would be the simplest.
DATE=20160120
I'd like to transform $DATE into "01-20-16" and move it into a new variable called... (8 Replies)
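Bash substring expansion handles this without spawning a process; sed works too. (NEWDATE is just an illustrative name for the target variable.)

```shell
DATE=20160120

# Bash: ${var:offset:length} slices YYYYMMDD into MM-DD-YY
NEWDATE="${DATE:4:2}-${DATE:6:2}-${DATE:2:2}"
echo "$NEWDATE"    # 01-20-16

# POSIX alternative with sed capture groups
NEWDATE=$(echo "$DATE" | sed 's/^\(..\)\(..\)\(..\)\(..\)$/\3-\4-\2/')
```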
Discussion started by: dendenyc
LEARN ABOUT DEBIAN
clfmerge
clfmerge(1)                        logtools                        clfmerge(1)

NAME
clfmerge - merge Common-Log Format web logs based on time-stamps
SYNOPSIS
clfmerge [--help | -h] [-b size] [-d] [file names]
DESCRIPTION
The clfmerge program is designed to avoid using sort to merge multiple web log files. Web logs for big sites consist of multiple files in
the >100M size range from a number of machines. For such files it is not practical to use a program such as gnusort to merge the files
because the data is not always entirely in order (so the merge option of gnusort doesn't work so well), but it is not in random order (so
doing a complete sort would be a waste). Also the date field that is being sorted on is not particularly easy to specify for gnusort (I
have seen it done but it was messy).
This program is designed to simply and quickly sort multiple large log files with no need for temporary storage space or overly large buffers in memory (the memory footprint is generally only a few megs).
OVERVIEW
It will take a number (from 0 to n) of file-names on the command line, it will open them for reading and read CLF format web log data from
them all. Lines which don't appear to be in CLF format (NB they aren't parsed fully, only minimal parsing to determine the date is performed) will be rejected and displayed on standard-error.
If zero files are specified then there will be no error, it will just silently output nothing; this is for scripts which use the find command to find log files and which can't be counted on to find any log files, as it saves doing an extra check in your shell scripts.
If one file is specified then the data will be read into a 1000 line buffer and it will be removed from the buffer (and displayed on standard output) in date order. This is to handle the case of web servers which date entries on the connection time but write them to the log
at completion time and thus generate log files that aren't in order (Netscape web server does this - I haven't checked what other web
servers do).
If more than one file is specified then a line will be read from each file, the file that had the earliest time stamp will be read from
until it returns a time stamp later than one of the other files. Then the file with the earlier time stamp will be read. With multiple
files the buffer size is 1000 lines or 100 * the number of files (whichever is larger). When the buffer becomes full the first line will
be removed and displayed on standard output.
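The sliding-window idea above can be illustrated with a toy awk script (not clfmerge's actual implementation): keep a fixed-size buffer, and whenever it is full, emit the smallest buffered key. This sorts correctly as long as no line is displaced further than the buffer size:

```shell
# Toy sliding-window sort: 5-line buffer, emits the minimum when full
printf '%s\n' 3 1 2 5 4 7 6 9 8 10 | awk '
{
    buf[n++] = $1
    if (n >= 5) emit()
}
END { while (n > 0) emit() }

# Print and remove the smallest buffered value
function emit(   i, mi) {
    mi = 0
    for (i = 1; i < n; i++)
        if (buf[i] + 0 < buf[mi] + 0) mi = i
    print buf[mi]
    buf[mi] = buf[--n]
    delete buf[n]
}'
```

The input above is out of order by at most a few positions, so the 5-line window is enough to emit it fully sorted; clfmerge applies the same principle to CLF time stamps with a much larger buffer.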
OPTIONS
-b buffer-size
Specify the buffer-size to use; if 0 is specified then the sliding-window sorting of the data (which improves the speed) is disabled.
-d Set domain-name mangling to on. This means that if a line starts with the name of the site that was requested, that name will be removed from the start of the line and the GET / will be changed to GET http://www.company.com/, which allows programs like Webalizer to produce good graphs for large hosting sites. It will also convert the domain name to lower case.
EXIT STATUS
0 No errors
1 Bad parameters
2 Can't open one of the specified files
3 Can't write to output
AUTHOR
This program, its manual page, and the Debian package were written by Russell Coker <russell@coker.com.au>.
SEE ALSO
clfsplit(1), clfdomainsplit(1)