
tiobench(1)															       tiobench(1)

NAME
       tiobench - Threaded I/O bench

SYNOPSIS
       tiobench [--help] [--nofrag] [--size SizeInMB [--size ...]] [--numruns NumberOfRuns [--numruns ...]] [--dir TestDir [--dir ...]]
       [--block BlkSizeInBytes [--block ...]] [--random NumberRandOpsPerThread [--random ...]] [--threads NumberOfThreads [--threads ...]]

DESCRIPTION
       tiobench is a Perl wrapper around tiotest, calling it multiple times with varying sets of parameters as instructed.

OPTIONS
       --help Display a brief help message and exit.

       --nofrag
              Instruct tiobench to pass -W to tiotest so that it waits for previous threads to finish before starting a new one in the
              writing phase. For more information see the -W option in the tiotest(1) manpage.

       --size SizeInMB
              The total size in MBytes that the test files may use together. If this option is not given, tiobench tries to be smart and
              figure out a size that makes sense.

       --numruns NumberOfRuns
              The number of runs over which each test should be averaged. Defaults to 1.

       --dir TestDir
              The directory in which to test. Defaults to ., the current directory.

       --block BlkSizeInBytes
              The block size in bytes to use. Defaults to 4096.

       --random NumberRandOpsPerThread
              The number of random I/O operations per thread. Defaults to 1000.

       --threads NumberOfThreads
              The number of concurrent test threads. Defaults to 4.

       The options --size, --numruns, --dir, --block, --random, and --threads may be given multiple times to cover multiple cases. For
       instance, tiobench --block 4096 --block 8192 will first run through with a 4 KB block size and then again with an 8 KB block size.

       To get useful results, the file sizes used should be a lot larger than the physical amount of memory you have. A good idea is to
       boot with 16 MB of RAM (try passing the "mem=16M" option to the kernel to limit Linux to using a very small amount of memory) and
       into single user mode only.
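       As an illustrative sketch only (the test directory /mnt/testdisk, the sizes, and the thread counts below are hypothetical, not
       taken from this man page), a sweep over two block sizes and two thread counts could look like this:

              # sketch: 1024 MB working set, each combination averaged over 3 runs
              tiobench --dir /mnt/testdisk --size 1024 --numruns 3 \
                       --block 4096 --block 8192 \
                       --threads 4 --threads 8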
SEE ALSO
       tiotest(1), bonnie(1), hdparm(8)

AUTHOR
       tiobench was written by James Manning <jmm@computer.org>. This manual page was written by Peter Palfrader <weasel@debian.org>, for
       the Debian GNU/Linux system (but may be used by others).

Mar-2001                                                                                                                      tiobench(1)


bup-damage(1)						      General Commands Manual						     bup-damage(1)

NAME
       bup-damage - randomly destroy blocks of a file

SYNOPSIS
       bup damage [-n count] [-s maxsize] [--percent pct] [-S seed] [--equal]

DESCRIPTION
       Use bup damage to deliberately destroy blocks in a .pack or .idx file (from .bup/objects/pack) to test the recovery features of
       bup-fsck(1) or other programs.

       THIS PROGRAM IS EXTREMELY DANGEROUS AND WILL DESTROY YOUR DATA

       bup damage is primarily useful for automated or manual tests of data recovery tools, to reassure yourself that the tools actually
       work.

OPTIONS
       -n, --num=numblocks
              The number of separate blocks to damage in each file (default 10). Note that it is possible for more than one damaged
              segment to fall in the same bup-fsck(1) recovery block, so you might not damage as many recovery blocks as you expect. If
              this is a problem, use --equal.

       -s, --size=maxblocksize
              The maximum size, in bytes, of each damaged block (default 1 unless --percent is specified). Note that because of the way
              bup-fsck(1) works, a multi-byte block could fall on the boundary between two recovery blocks and thus damage two separate
              recovery blocks. In small files, it is also possible for a damaged block to be larger than a recovery block. If these
              issues might be a problem, you should use the default damage size of one byte.

       --percent=maxblockpercent
              The maximum size, as a percentage of the original file, of each damaged block. If both --size and --percent are given, the
              maximum block size is the minimum of the two restrictions. You can use this to ensure that a given block will never damage
              more than one or two git-fsck(1) recovery blocks.

       -S, --seed=randomseed
              Seed the random number generator with the given value. If you use this option, your tests will be repeatable, since the
              damaged block offsets, sizes, and contents will be the same every time. By default, the random numbers are different every
              time (so you can run tests in a loop and repeatedly test with different damage each time).

       --equal
              Instead of choosing random offsets for each damaged block, space the blocks equally throughout the file, starting at
              offset 0. If you also choose a correct maximum block size, this can guarantee that any given damage block never damages
              more than one git-fsck(1) recovery block. (This is also guaranteed if you use -s 1.)

EXAMPLE
       # make a backup in case things go horribly wrong
       cp -a ~/.bup/objects/pack ~/bup-packs.bak

       # generate recovery blocks for all packs
       bup fsck -g

       # deliberately damage the packs
       bup damage -n 10 -s 1 -S 0 ~/.bup/objects/pack/*.{pack,idx}

       # recover from the damage
       bup fsck -r
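       As a further hedged sketch (not part of the original example), the same packs could instead be damaged at evenly spaced offsets
       with a fixed seed, which keeps the run repeatable and, with -s 1, limits each hit to a single recovery block:

              # sketch: 20 evenly spaced single-byte damage blocks per file, fixed seed
              bup damage --equal -n 20 -s 1 -S 42 ~/.bup/objects/pack/*.{pack,idx}
              bup fsck -r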
SEE ALSO
       bup-fsck(1), par2(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                                                                                bup-damage(1)