If you can accept a trailing comma (removing it would take an extra step), set awk's output record separator to a comma: ORS=",". Since all the information then arrives on one long line, we need a way to tell one machine's data from the next; I used the beginning of the HTML document for this. Try adding the following to your script:
Please be aware that any comma INSIDE a field will be misinterpreted wherever the result is later read as comma-separated data.
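A minimal sketch of the ORS approach, with sample host names standing in for the real records, and the trailing comma stripped afterwards with sed:

```shell
# Join records with commas via ORS, then strip the trailing comma.
# The three host names are illustrative stand-ins for the real data.
printf 'host1\nhost2\nhost3\n' |
  awk 'BEGIN { ORS = "," } { print }' |
  sed 's/,$//'
# prints: host1,host2,host3
```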
Hi all,
Still a newbie and learning as I go ... as you do :)
I've created this script to report on disc usage, and I've just added the ChkSpace function this morning.
It's the first time I've read a file (line-by-bloody-line) and I'd like to know if I can improve this script?
FYI - I... (11 Replies)
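For the line-by-line read, a common sh/ksh idiom is `while IFS= read -r`, which avoids word-splitting and backslash surprises. A small self-contained sketch (the scratch file stands in for the real input):

```shell
# Read a file line by line without mangling whitespace or backslashes.
tmp=$(mktemp)
printf 'first line\nsecond line\n' > "$tmp"   # stand-in for the real file
count=0
while IFS= read -r line; do
  count=$((count + 1))
  printf 'line %d: %s\n' "$count" "$line"
done < "$tmp"
rm -f "$tmp"
# prints:
# line 1: first line
# line 2: second line
```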
Hi,
I'm searching for files across many AIX servers over rsh, using this command:
find /dir1 -name '*.' -exec ls {} \;
and then counting them with "wc".
I would like to speed up this search because it takes too long, ideally replacing find directly with an ls command, but "ls *." doesn't work.
and... (3 Replies)
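One likely speed-up: the per-file `ls` processes from `-exec ls {} \;` are unnecessary when all you need is a count. A hedged sketch using a scratch directory in place of the real /dir1:

```shell
# Count matching files with a single find | wc pipeline, no ls at all.
dir=$(mktemp -d)                        # scratch stand-in for /dir1
touch "$dir/a." "$dir/b." "$dir/c.txt"
count=$(find "$dir" -name '*.' -type f | wc -l)
echo "$count"    # prints 2 (only a. and b. match '*.')
rm -rf "$dir"
```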
Wrote this script to find the date x days before or after today. Is there any way that this script can be sped up or otherwise improved?
#!/usr/bin/sh
check_done() {
if
then
daysofmth=31
elif
then
if
... (11 Replies)
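If GNU date is available (usual on Linux, though not on every proprietary UNIX), the hand-rolled days-in-month logic can be replaced entirely by relative date strings. A sketch with a fixed base date so the output is deterministic:

```shell
# Date arithmetic with GNU date: offsets in days, leap years handled.
base=2020-01-31
date -d "$base + 1 day"   +%Y-%m-%d   # prints 2020-02-01
date -d "$base - 31 days" +%Y-%m-%d   # prints 2019-12-31
```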
Hi, can someone tell me ways I can improve disk I/O and system process performance? Kindly refer me to some commands so I can try them on my test machine. Thanks, Mazhar (2 Replies)
I have a 2 GB data file.
I need to do all of the following, but it's taking hours. Is there anywhere I can improve performance? Thanks a lot.
#!/usr/bin/ksh
echo TIMESTAMP="$(date +'_%y-%m-%d.%H-%M-%S')"
function showHelp {
cat << EOF >&2
syntax extreme.sh FILENAME
Specify filename to parse
EOF... (3 Replies)
I have a 10 Gbps network link connecting two machines, A and B. I want to transfer 20 GB of data from A to B using TCP. With default settings I can only use about 50% of the bandwidth. How can I improve the throughput? Is there any way to get it as close to 10 Gbps as possible? thanks~ :) (3 Replies)
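A common first check is whether the TCP window can cover the bandwidth-delay product (BDP); if the socket buffers are smaller than the BDP, the link can never be filled regardless of its speed. A sketch with illustrative numbers (10 Gbit/s, 1 ms RTT — the RTT is an assumption):

```shell
# BDP = bandwidth * RTT; socket buffers below this cap the throughput.
bits_per_sec=10000000000   # 10 Gbit/s link (from the post)
rtt_ms=1                   # 1 ms round-trip time (illustrative)
bdp_bytes=$(( bits_per_sec * rtt_ms / 1000 / 8 ))
echo "$bdp_bytes"          # prints 1250000 (~1.25 MB of buffer needed)
# On Linux, net.ipv4.tcp_rmem / tcp_wmem would then be raised to at
# least this size (requires root; exact tuning is system-specific).
```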
Hi All,
I have written the script below, which takes a lot of time to search for only 3,500 records (taken as input from one file) in a log file of approximately 12 GB.
The script reads a CSV file as input with two fields, transaction_id and mobile_number, and searches... (6 Replies)
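Without seeing the full script, the usual culprit is one pass over the 12 GB log per record. A hedged sketch that searches all keys in a single pass, using scratch files with assumed names in place of the real CSV and log:

```shell
# Search every key in one pass: 1 x 12 GB read instead of 3500 x 12 GB.
workdir=$(mktemp -d)
printf 'TX1,0711111\nTX2,0722222\n' > "$workdir/input.csv"  # stand-in CSV
printf 'noise\nrecord for TX2 found\nnoise\n' > "$workdir/big.log"  # stand-in log
cut -d, -f1 "$workdir/input.csv" > "$workdir/keys.txt"  # transaction_ids
grep -F -f "$workdir/keys.txt" "$workdir/big.log"       # fixed strings: no regex cost
# prints: record for TX2 found
rm -rf "$workdir"
```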
I just wrote a very small script that improves the readability of the system sulog. The problem with any sulog is the lack of clarity about whether the entry you are looking at is the most recent. So if you just need a simple solution instead of going through the trouble of writing a script that rotates logs and... (0 Replies)
Gents.
I have 2 different scripts for the same purpose:
raw2csv_1
Script raw2csv_1 finishes the process in less than 1 minute.
raw2csv_2
Script raw2csv_2 finishes the process in more than 6 minutes.
Can you please check if there is any option to improve the raw2csv_2. To finish the job... (4 Replies)
Gents,
Is there a way to improve this script while keeping the same output?
I wrote this script, but I believe much shorter code could produce the same output.
Here is my script:
awk -F, '{if($10>0 && $10<=15) print $6}' tmp1 | sort -k1n | awk '{a++} END { for (n in a )... (23 Replies)
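As a general pattern (the full script is truncated here), an `awk | sort | awk` chain can often collapse into a single awk pass that filters and counts in one associative array. A sketch on illustrative ten-field records:

```shell
# Filter on field 10 and count field 6 values in one awk pass.
printf 'a,b,c,d,e,X,g,h,i,5\na,b,c,d,e,Y,g,h,i,20\na,b,c,d,e,X,g,h,i,10\n' |
  awk -F, '$10 > 0 && $10 <= 15 { cnt[$6]++ }
           END { for (k in cnt) print k, cnt[k] }'
# prints: X 2
```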
Discussion started by: jiam912
LEARN ABOUT SUSE
dbfdump
DBFDUMP(1) User Contributed Perl Documentation DBFDUMP(1)

NAME
dbfdump - Dump the record of the dbf file
FORMAT
dbfdump [options] files
where options are
--rs output record separator (default newline)
--fs output field separator (default colon)
--fields comma separated list of fields to print (default all)
--undef string to print for NULL values (default empty)
--memofile specifies a non-standard name for the attached memo file
--memosep separator for dBase III dbt's (default \x1a\x1a)
--nomemo do not try to read the memo (dbt/fpt) file
--info print info about the file and fields
with additional --SQL parameter, outputs the SQL create table
--version print version of the XBase library
--table output in nice table format (only available when
Data::ShowTable is installed, overrides rs and fs)
SYNOPSIS
dbfdump -fields id,msg table.dbf
dbfdump -fs=' : ' table
dbfdump --nomemo file.dbf
ssh user@host 'cat file.dbf.gz' | gunzip - | dbfdump -
DESCRIPTION
Dbfdump prints to standard output the content of dbf files listed. By default, it prints all fields, separated by colons, one record on a
line. The output record and column separators can be changed by switches on the command line. You can also ask only for some fields to be
printed.
The content of associated memo files (dbt, fpt) is printed for memo fields, unless you use the "--nomemo" option.
You can specify reading from standard input by putting a dash (-) in place of a file name.
AUTHOR
(c) 1998--1999 Jan Pazdziora, adelton@fi.muni.cz, http://www.fi.muni.cz/~adelton/ at Faculty of Informatics, Masaryk University in Brno,
Czech Republic
SEE ALSO
perl(1); XBase(3)

perl v5.12.1 2010-07-05 DBFDUMP(1)