12-06-2012
Many thanks. Am out at present. Will run the perl script and get back to you.
10 More Discussions You Might Find Interesting
1. UNIX for Advanced & Expert Users
Hi - I tried to remove ^M in a delimited file using tr -d '\r' and sed 's/^M//g', but neither works quite right. While the ^M is removed, each record is still cut in half, like
a,b, c
c,d,e
The delimited file is generated by an sh script that outputs a SQL query result to... (7 Replies)
Discussion started by: sirahc
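A minimal Perl sketch of one way to handle this, assuming a comma-delimited file whose records each have a fixed field count (3 here, an assumption); it strips every CR and keeps joining physical lines until a record has enough delimiters:

#!/usr/bin/perl
# Sketch: strip CR characters and re-join records that were split across
# physical lines. Assumes comma-delimited records with exactly 3 fields;
# adjust $fields to the real layout.
use strict;
use warnings;

my $fields = 3;                        # expected fields per record (assumption)
my $buffer = '';

while (my $line = <>) {
    $line =~ s/\r//g;                  # remove every ^M
    chomp $line;
    $buffer .= $line;
    my $commas = () = $buffer =~ /,/g; # delimiters accumulated so far
    if ($commas >= $fields - 1) {      # record is complete
        print "$buffer\n";
        $buffer = '';
    }
}
print "$buffer\n" if $buffer ne '';

Run it as: perl fix_cr.pl broken.txt > fixed.txt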
2. Shell Programming and Scripting
Hi Experts
I am very new to Perl and need to write a Perl script.
I would like to remove blanks in a tab-delimited text file in a specific column range (column 21 to column 43); sample input and output are shown below:
Input:
117 102 650 652 654 656
117 93 95... (3 Replies)
Discussion started by: Faisal Riaz
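A small Perl sketch, treating "column 21 to column 43" as character positions (an assumption; the post could also mean tab-separated fields):

#!/usr/bin/perl
# Sketch: delete blanks only within character positions 21-43 of each line.
# Treating "column" as a character position is an assumption.
use strict;
use warnings;

while (my $line = <>) {
    chomp $line;
    if (length($line) > 20) {
        my $chunk = substr($line, 20, 23);   # positions 21..43
        $chunk =~ s/ //g;                    # drop spaces, leave tabs alone
        substr($line, 20, 23) = $chunk;
    }
    print "$line\n";
}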
3. Shell Programming and Scripting
Hey there - a bit of background on what I'm trying to accomplish, first off. I am trying to load the data from a pipe delimited file into a database. The loading tool that I use cannot handle embedded newline characters within a field, so I need to scrub them out.
Solutions that I have tried... (7 Replies)
Discussion started by: bbetteridge
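A Perl sketch of the field-count approach, assuming each complete record has a known number of fields (10 here, i.e. 9 pipes, purely a placeholder); lines are glued back together until the count is reached:

#!/usr/bin/perl
# Sketch: scrub newlines that fall inside fields of a pipe-delimited file.
# Assumes every complete record has exactly 10 fields (9 pipes); set
# $pipes_per_record from the real file spec.
use strict;
use warnings;

my $pipes_per_record = 9;   # assumption: 10 fields per record
my $buffer = '';

while (my $line = <>) {
    chomp $line;
    $buffer = $buffer eq '' ? $line : "$buffer $line";  # rejoin with a space
    my $pipes = () = $buffer =~ /\|/g;
    if ($pipes >= $pipes_per_record) {
        print "$buffer\n";
        $buffer = '';
    }
}
print "$buffer\n" if $buffer ne '';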
4. Shell Programming and Scripting
I have a large flat file with variable length fields that are pipe delimited. The file has no new line or CR/LF characters to indicate a new record. I need to parse the file and after some number of fields, I need to insert a CR/LF to start the next record.
Input file ... (2 Replies)
Discussion started by: clintrpeterson
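A Perl sketch for this, assuming 14 fields per record (a placeholder; the real count comes from the file spec); it slurps the single-line file and re-emits it with a line break after every 14th field:

#!/usr/bin/perl
# Sketch: turn one long pipe-delimited stream with no line breaks into
# records of $n fields each. $n = 14 is a placeholder; print "\r\n" instead
# of "\n" if a literal CR/LF pair is required. Slurping keeps the sketch
# short; a streaming read would suit very large files better.
use strict;
use warnings;

my $n = 14;                  # fields per record (assumption)
local $/;                    # slurp: the input has no record separators
my $data = <>;
$data =~ s/\s+\z//;          # trim trailing whitespace, if any
my @fields = split /\|/, $data, -1;
while (@fields >= $n) {
    print join('|', splice(@fields, 0, $n)), "\n";
}
print join('|', @fields), "\n" if @fields;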
5. Shell Programming and Scripting
Hi All
I wanted to know how to effectively delete some columns in a large tab delimited file.
I have a file that contains 5 columns and almost 100,000 rows
3456 f g t t
3456 g h
456 f h
4567 f g h z
345 f g
567 h j k l
This is a very large, tab-delimited data file.
I need... (2 Replies)
Discussion started by: Lucky Ali
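A Perl sketch, with the kept columns (1, 2 and 4 here, i.e. indexes 0, 1 and 3) purely as placeholders; the rows in the post are ragged, so missing cells become empty fields rather than warnings:

#!/usr/bin/perl
# Sketch: drop selected columns from a tab-delimited file. Edit @keep to
# the columns you actually need.
use strict;
use warnings;

my @keep = (0, 1, 3);   # 0-based indexes of columns to keep (assumption)
while (my $line = <>) {
    chomp $line;
    my @f = split /\t/, $line, -1;
    print join("\t", map { defined $f[$_] ? $f[$_] : '' } @keep), "\n";
}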
6. Shell Programming and Scripting
Since there are approximately 75K gsfiles and hundreds of stfiles per gsfile, this script can take hours. How can I rewrite this script, so that it's much faster? I'm not as familiar with perl but I'm open to all suggestions.
ls file.list>$split
for gsfile in `cat $split`;
do
csplit... (17 Replies)
Discussion started by: verge
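One common speed-up is to stop forking csplit once per gsfile and do all the splitting in a single Perl pass. The section-start pattern (/^ST/) and output naming below are assumptions, since the original csplit arguments are truncated in the post:

#!/usr/bin/perl
# Sketch: split every gsfile in one pass instead of running csplit per
# file; process startup is usually the bottleneck with 75K files.
use strict;
use warnings;

foreach my $gsfile (@ARGV) {
    open my $in, '<', $gsfile or die "cannot open $gsfile: $!";
    my ($out, $n) = (undef, 0);
    while (my $line = <$in>) {
        if ($line =~ /^ST/) {                 # a new stfile starts here (assumed marker)
            close $out if $out;
            open $out, '>', sprintf('%s.%05d', $gsfile, $n++)
                or die "cannot create split file: $!";
        }
        print {$out} $line if $out;
    }
    close $out if $out;
    close $in;
}

With 75K input files, feed the names via find to avoid the shell's argument limit: find . -name '*.gs' -print0 | xargs -0 perl splitall.pl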
7. Shell Programming and Scripting
Hi,
I have the following command in place
nawk -F, '!a[$1,$2,$3]++' file > file.uniq
It has been working perfectly as per requirements, removing duplicates by taking into consideration only the first 3 fields. Recently it has started giving the below error:
bash-3.2$ nawk -F, '!a[$1,$2,$3]++'... (17 Replies)
Discussion started by: makn
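For reference, a Perl equivalent of that one-liner's described behaviour (keep a line only the first time its first three comma-separated fields are seen); the field handling is an assumption based on the post, not the original code:

#!/usr/bin/perl
# Sketch: print each line whose first three comma-separated fields have
# not been seen before.
use strict;
use warnings;

my %seen;
while (my $line = <>) {
    my @f = split /,/, $line, 4;   # only the first three fields matter
    my $key = join("\x1C", map { defined $_ ? $_ : '' } @f[0 .. 2]);
    print $line unless $seen{$key}++;
}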
8. Shell Programming and Scripting
I am working on a homonym dictionary of names, i.e. names which are clustered together according to their “sound-alike” pronunciation:
An example will make this clear:
Since the dictionary is manually constructed it often happens that inadvertently two sets of “homonyms” which should be grouped... (2 Replies)
Discussion started by: gimley
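A Perl sketch of the usual merge strategy, assuming one comma-separated cluster per line (an assumption about the dictionary's format): any clusters that share a name are folded into a single group.

#!/usr/bin/perl
# Sketch: merge clusters that share at least one name, so homonym groups
# accidentally split across two lines end up in one group.
use strict;
use warnings;

my %group_of;   # name -> group index
my @groups;     # group index -> { name => 1, ... }

while (my $line = <>) {
    chomp $line;
    next if $line =~ /^\s*$/;
    my @names = split /\s*,\s*/, $line;
    my %ids = map { $_ => 1 } grep { defined } @group_of{@names};
    my @ids = sort { $a <=> $b } keys %ids;
    my $id  = @ids ? shift @ids : scalar @groups;
    $groups[$id] ||= {};
    $groups[$id]{$_} = 1 for @names;
    for my $old (@ids) {                       # fold overlapping groups together
        $groups[$id]{$_} = 1 for keys %{ $groups[$old] };
        undef $groups[$old];
    }
    $group_of{$_} = $id for keys %{ $groups[$id] };
}

print join(', ', sort keys %$_), "\n" for grep { defined } @groups;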
9. UNIX for Advanced & Expert Users
I have a file around 24 GB in size, with 14 columns delimited by "|".
My requirement - can anyone suggest the fastest and best way to get the below results:
Number of records of the file
First column and second Column- Unique counts
Thanks for your time
Karti
------ Post updated at... (3 Replies)
Discussion started by: kartikirans
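A single-pass Perl sketch; it assumes the distinct values of the first two columns fit in memory, which for a 24 GB file is the constraint to watch:

#!/usr/bin/perl
# Sketch: one pass over a "|"-delimited file -- total record count plus
# distinct-value counts for the first two columns.
use strict;
use warnings;

my $records = 0;
my (%col1, %col2);
while (my $line = <>) {
    $records++;
    my ($c1, $c2) = split /\|/, $line, 3;   # only the first two fields needed
    $col1{ defined $c1 ? $c1 : '' } = 1;
    $col2{ defined $c2 ? $c2 : '' } = 1;
}
print "records:         $records\n";
print "unique column 1: ", scalar keys %col1, "\n";
print "unique column 2: ", scalar keys %col2, "\n";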
10. Shell Programming and Scripting
I have a large file (1.5 GB) and want to sort the file.
I used the following AWK script to do the job
!x[$0]++
The script works but it is very slow and takes over an hour to do the job. I suspect this is because the file is not sorted.
Any solution to speed up the AWK script or a Perl script would... (4 Replies)
Discussion started by: gimley
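A Perl sketch of the same !x[$0]++ idea, storing a 16-byte MD5 digest of each line instead of the line itself, which keeps the lookup hash far smaller on multi-GB input:

#!/usr/bin/perl
# Sketch: print each distinct line once, keeping the first occurrence
# (the same behaviour as awk '!x[$0]++'). Hashing a digest rather than the
# raw line bounds the memory per distinct line.
use strict;
use warnings;
use Digest::MD5 qw(md5);

my %seen;
while (my $line = <>) {
    print $line unless $seen{ md5($line) }++;
}

If the output order does not matter, sort -u is often faster still, since it spills to disk instead of holding everything in memory.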
LEARN ABOUT DEBIAN
make_combined_log.pl(1) General Commands Manual make_combined_log.pl(1)
NAME
make_combined_log.pl - make combined logfile from SQL database
SYNOPSIS
make_combined_log.pl <days> <virtual host>
DESCRIPTION
This perl script extracts the httpd access data from a MySQL database and formats it properly for parsing by 3rd-party log analysis tools.
The script is intended to be run by cron. Its command-line arguments tell it how many days' worth of access records to extract, and
which virtual host you are interested in (because many people log several virtual hosts to one MySQL db). This permits you to run it
daily, weekly, every 9 days -- whatever you decide.
NOTE
By "days" I mean "chunks of 24 hours prior to the moment this script is run." So if you run it at 4:34 p.m. on the 12th, it will go back
through 4:34 p.m. on the 11th.
KNOWN ISSUES
Because GET and POST are not discriminated in the MySQL log, we'll just assume that all requests are GETs. This should have negligible
effect on any analysis software. This could be remedied IF you stored the full HTTP request in your database instead of just the URI, but
that's going to cost you a LOT of space really quickly...
Because this is somewhat of a quick hack it doesn't do the most robust error checking in the world. Run it by hand to confirm your usage
before putting it in crontab.
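For instance, once a manual run looks right, a crontab entry along these lines would extract the previous day's records nightly (the schedule, script path, virtual host, and log path are all illustrative):

0 4 * * * /usr/local/bin/make_combined_log.pl 1 www.example.com >> /var/log/httpd/www.example.com-combined.log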
AUTHOR
Edward Rudd <eddie@omegaware.com>
MAN PAGE CREATED BY
Michael A. Toth <lirul.lists@gmail.com> - based on comments of script
COMMENTS
This man page was written using xml2man (1) by the same author.