The following is probably more a workaround than a solution:
You said that you have no problems with small files but only with big files. In addition, I take from your wording that you don't need real-time exactness because you will only poll the data once in a while. So, why not set up a small cron job that copies the last line of the big log into a small file, overwriting the old one each time, like this sketch script:
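(The paths and the schedule below are placeholders to adapt; this is only a sketch.)

#!/bin/sh
# copy_lastline.sh - keep a one-line copy of the big log,
# overwriting the previous copy on every run
tail -n 1 /var/log/big.log > /var/log/big_lastline.log

# crontab entry: run the copy every minute (match it to how often you poll)
# * * * * * /usr/local/bin/copy_lastline.sh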
Then you can query this new file with your Zabbix methods because it always contains one line only.
Hi,
I have a GPS receiver log; it gives readings like the ones below:
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GPSD,R=1
$GPGSV,3,1,11,08,16,328,40,11,36,127,00,28,33,283,39,20,11,165,00*71... (3 Replies)
Hello,
I am trying to write a shell script that lists all files in a directory and stops when it finds the first item specified in a find or ls command.
How can I tell the find or ls command to stop when it finds the first ".doc" file, for example?
Thank you (7 Replies)
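Not from that thread, but a minimal sketch of the usual approach, assuming GNU find is available: the -quit action makes find stop after the first match.

# print the first *.doc found, then stop searching (GNU find)
find . -name '*.doc' -print -quit

# portable fallback: keep only the first line of the output
find . -name '*.doc' | head -n 1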
Hi,
I have a file generated like this -
1. Run the SQL and store the formatted output in a temp file
echo "select path, empid, age from emp_tbl" | /usr/sql emp_db 2 > count_file | grep vol > tempFile
2. The tempFile looks like this after the above statement
/vol/emp1 0732 ... (9 Replies)
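If (and this is an assumption) the "2" was meant to send the tool's error output to count_file, the redirection needs 2> without a space; as written, the space makes 2 an ordinary argument and sends stdout to count_file, so grep receives nothing. A sketch of the corrected pipeline:

# stderr goes to count_file, stdout is filtered for lines containing "vol"
echo "select path, empid, age from emp_tbl" | /usr/sql emp_db 2> count_file | grep vol > tempFile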
Dear all,
I have encountered a problem here. I prompt the user for input and store it in a data file, e.g. key in a name and marks, so the data file will look like this:
andrew 80
ben 75
and the next input is carine 90. So the problem here is that I want to print... (2 Replies)
I have a LOG file which looks like this
Import started at: Mon Jul 23 02:13:01 EDT 2012
Initialization completed in 2.146 seconds.
--------------------------------------------------------------------------------
--
Import summary for Import item: PolicyInformation... (8 Replies)
Hi,
I want to read a live log file line by line, considering only those lines that start with a timestamp.
I am using the code below, which reads the lines but throws an exception when comparing a line that does not contain the error code:
tail -F /logs/COMMON-ERROR.log | while read myline; do... (2 Replies)
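One hedged sketch of the pattern, assuming the wanted lines start with a four-digit year (an assumption about the log format): test each line's shape first, so lines without a timestamp are skipped instead of breaking the loop.

# follow the log; only lines starting with a timestamp reach the comparison
tail -F /logs/COMMON-ERROR.log | while IFS= read -r myline; do
    case "$myline" in
        [0-9][0-9][0-9][0-9]-*)   # assumed "YYYY-..." prefix
            # ... error-code comparison goes here ...
            printf '%s\n' "$myline"
            ;;
        *) : ;;                   # no timestamp: skip (e.g. stack-trace lines)
    esac
done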
Hello,
I have some tab-delimited text data,
file: final_temp1
aname val
NAME;r'(1,) 3.28584
r'(2,)<tab>
NAME;r'(3,) 6.13003
NAME;r'(4,) 4.18037
r'(5,)<tab>
You can see that the data is incomplete in some cases. There is a trailing tab after the first column for each incomplete row. I... (2 Replies)
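An awk sketch, assuming "incomplete" means the second tab-separated field is empty (the file name final_temp1 comes from the post):

# print only the incomplete rows (empty second field)
awk -F'\t' '$2 == ""' final_temp1

# or keep only the complete rows
awk -F'\t' '$2 != ""' final_temp1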
Hello,
I have a source code file where I need to uncomment many lines.
The lines I need to uncomment look like:
C CALL l_r(DESNAME,DESOUT, 'Gmax', ESH(10), NO_APP, JJ)
The comment is the "C" in the first column. This needs to be deleted so that there are 6 spaces preceding "CALL".... (7 Replies)
LEARN ABOUT DEBIAN
genbackupdata
GENBACKUPDATA(1)                          General Commands Manual                          GENBACKUPDATA(1)
NAME
genbackupdata - generate backup test data
SYNOPSIS
genbackupdata [--chunk-size=SIZE] [--config=FILE] [-c=SIZE] [--create=SIZE] [--depth=DEPTH] [--dump-config] [--dump-setting-names]
[--file-size=SIZE] [--generate-manpage=TEMPLATE] [-h] [--help] [--list-config-files] [--log=FILE] [--log-keep=N] [--log-level=LEVEL]
[--log-max=SIZE] [--max-files=MAX-FILES] [--no-default-configs] [--output=FILE] [--quiet] [--seed=SEED] [--version]
DESCRIPTION
genbackupdata generates test data sets for performance testing of backup software. It creates a directory tree filled with files of
different sizes. The total size and the distribution of sizes between small and big are configurable. The program can also modify an
existing directory tree by creating new files, and deleting, renaming, or modifying existing files. This can be used to generate test
data for successive generations of backups.
The program is deterministic: with a given set of parameters (and a given pre-existing directory tree), it always creates the same output.
This way, it is possible to reproduce backup tests exactly, without having to distribute the potentially very large test sets.
The data set consists of plain files and directories. Files are either small text files or big binary files. Text files contain the
"lorem ipsum" stanza, binary files contain randomly generated byte streams. The percentage of file data that is small text or big binary
files can be set, as can the sizes of the respective file types.
Files and directories are named "fileXXXX" or "dirXXXX", where "XXXX" is a successive integer, with separate successions for files and
directories. There is an upper limit to how many files a directory may contain. After the file limit is reached, a new sub-directory is
created. The first set of files goes into the root directory of the test set.
You have to give one of the options --create, --delete, --rename, or --modify for the program to do anything. You can, however, give more
than one of them, if DIR already exists. (Giving the same option more than once means that only the last instance is counted.) DIR is
created if it doesn't exist already.
OPTIONS
--chunk-size=SIZE
generate data in chunks of this size (default: 16384)
--config=FILE
add FILE to config files
-c, --create=SIZE
how much data to create (default: 0)
--depth=DEPTH
depth of directory tree (default: 3)
--dump-config
write out the entire current configuration
--dump-setting-names
write out all names of settings and quit
--file-size=SIZE
size of one file (default: 16384)
--generate-manpage=TEMPLATE
fill in manual page TEMPLATE
-h, --help
show this help message and exit
--list-config-files
list all possible config files
--log=FILE
write log entries to FILE
--log-keep=N
keep last N logs (default: 10)
--log-level=LEVEL
log at LEVEL, one of debug, info, warning, error, critical, fatal (default: debug)
--log-max=SIZE
rotate logs larger than SIZE, zero for never (default: 0)
--max-files=MAX-FILES
max files/dirs per dir (default: 128)
--no-default-configs
clear list of configuration files to read
--output=FILE
write output to FILE, instead of standard output
--quiet
do not report progress
--seed=SEED
seed for random number generator (default: 0)
--version
show program's version number and exit
EXAMPLES
Create data for the first generation of a backup:
genbackupdata --create=10G testdir
Modify an existing set of backup data to create a new generation:
genbackupdata -c 5% -d 2% -m 5% -r 0.5% testdir
The above command can be run for each new generation.