Post 302073802 by rcarpent, Wednesday 17 May 2006, 05:44:31 PM (Red Hat Linux forum)
Linux / tape Block size issue

Can anyone tell me how to read and change the block size of the attached tape drive, an Ultrium 2 (FastStor 2)? I am using Red Hat Linux AS 3.0 Update 7.

I had a problem doing a recovery with our backup software (NetWorker) on AS 3.0 Update 7. The message I received was:

"Failed to read 65536 byte block with 32768 byte read." I was told by NetWorker support that this was a Linux issue, so here I am.

1) How can I read the block size in Linux and change it?
2) Where would I change the block size? Would it be on the controller?


Thanks in Advance
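
For reference, on Linux the tape block size is normally read and changed with mt(1) against the st tape device rather than on the controller. A minimal sketch, assuming the drive shows up as /dev/st0 (the device name is an assumption, not taken from the post):

# show the drive's current settings; look for "Tape block size" in the output
mt -f /dev/st0 status

# switch the drive to variable-block mode (block size 0), which lets the
# backup application use its own record size (the 65536-byte records in the
# error above)
mt -f /dev/st0 setblk 0

# or force a fixed 64 KB block size instead
mt -f /dev/st0 setblk 65536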
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Question on du for directory size on tape and drive

Is there a method to see how much data is put on a tape? Or to check and see the size of what you are tarring to tape? I have a script that checks a partition and user's directories and excludes anything under a home/test directory and writes that to a text file, then I call tar and using that... (1 Reply)
Discussion started by: kymberm
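
A common way to answer the "how big is what I'm tarring" part is to measure the directories, or the tar stream itself, before it goes to tape. A rough sketch, with /home/users as a placeholder path:

# rough size of the directories being backed up
du -sk /home/users

# exact number of bytes tar would write, without touching the tape
tar -cf - /home/users | wc -c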

2. AIX

block size for tape

Hello, I have a save cartridge (backup), but I don't know what block size was used with this cartridge. How can I find the block size of this cartridge in order to restore it? Thank you. (2 Replies)
Discussion started by: pascalbout
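
One common trick for a cartridge of unknown block size is to read a single record with a large buffer and see how many bytes come back. A sketch, assuming an AIX drive at /dev/rmt0 (device name assumed):

# put the drive in variable-block mode so an oversized read is allowed
chdev -l rmt0 -a block_size=0

# rewind, then read one record; the byte count reported is the block size
# the cartridge was written with
tctl -f /dev/rmt0 rewind
dd if=/dev/rmt0 bs=256k count=1 | wc -c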

3. UNIX for Advanced & Expert Users

How to check the size of the backup from the tape?

Hi, how can I check the size of the database/filesystem on the tape (4mm or 8mm)? For example, I want to restore a database/filesystem backup to the system, and before that I want to check its size from the tape. Please help me with this. Regards, Sharif. (0 Replies)
Discussion started by: sharif

4. UNIX for Dummies Questions & Answers

tape backup size

How can I tell how much tape is left (or how much tape has been used) after doing a backup? My system is on Solaris 5.8. (2 Replies)
Discussion started by: shorty

5. Solaris

Configuration issue with Tape Library in Solaris 10

Env: SPARC Ultra 80 server, Solaris 10, direct SCSI connection to the library, terminator connected at the tape end. Problem: unable to see any device file for the robotic device (/dev/rsst0), and unable to see any /dev/rmt device file. Messages while the machine boots: sst0: No... (3 Replies)
Discussion started by: amqstam
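
A first-pass check on the Solaris side usually looks something like the sketch below; these are generic commands (not taken from the thread), and the robotic changer /dev/rsst0 additionally needs the library vendor's sst/sgen driver package:

# see whether the HBA reports the library and drives at all
cfgadm -al

# rebuild tape device links once the kernel sees the drives
devfsadm -c tape

# a drive handled by the st driver should then answer under /dev/rmt
mt -f /dev/rmt/0 status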

6. Solaris

Tape drive config issue

I have a server/domain on an M5000 running Solaris 10. It is part of a cluster. The other cluster member sees tape drives, but this one does not. It is zoned correctly, and I can see the drives are bound in lputil. The st.conf and devlink.tab files are identical. ST.CONF: - # # Copyright... (2 Replies)
Discussion started by: pfwhufc

7. Solaris

Backup Tape Size

Hello guys, does anyone know how to check the size of the files on a backup tape? (2 Replies)
Discussion started by: Mohammad.ak

8. AIX

Issue about PP size

Hello everyone, I have many VGs on my system. I created them all with the same command and did not specify the PP size while creating them, so logically the PP size should be the same (the default value) in all the VGs. However, I have different PP sizes: 32 MB, 256 MB, 64 MB and 128 MB. Can... (1 Reply)
Discussion started by: adilyos
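
For what it's worth, mkvg chooses the default PP size from the size of the disks (by default each physical volume has to fit within 1016 PPs), so VGs created the same way on different-sized disks end up with different PP sizes. A quick check, with datavg and hdisk2 as placeholder names:

# show the PP size a VG was created with
lsvg datavg | grep "PP SIZE"

# to force a specific PP size, create the VG with -s (size in MB), e.g.
mkvg -s 64 -y datavg hdisk2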

9. UNIX for Advanced & Expert Users

Physical disk I/O size smaller than filesystem fragment/block size?

Hello, in one default UFS filesystem we have an 8K block size (bsize) and a 1K fragment size (fsize). In this scenario I thought all filesystem I/O would be 8K (or greater) but never smaller than the fragment size (1K). If a UFS fragment/block is always several adjacent sectors on disk (in a ... (4 Replies)
Discussion started by: rarino2

10. HP-UX

About Block Size and Fragment Size

According to a lot of manuals, if you have an 8 KB block size and try to write a 1 KB file to the block, you waste 7 KB of the block's space. But recently I read about fragments of a file block: in the same case, if you have an 8 KB file block and a 1 KB fragment size, you can save your block space,... (6 Replies)
Discussion started by: jess_t03
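
A quick worked example of that saving (illustrative numbers, not from the thread): with 8 KB blocks and no fragments, ten 1 KB files occupy 10 x 8 KB = 80 KB on disk, wasting about 70 KB; with 1 KB fragments, the same ten files can be packed into fragments and occupy roughly 10 x 1 KB = 10 KB, with essentially no waste until a file grows past a full block.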
bup-damage(1)                    General Commands Manual                    bup-damage(1)

NAME
       bup-damage - randomly destroy blocks of a file

SYNOPSIS
       bup damage [-n count] [-s maxsize] [--percent pct] [-S seed] [--equal]

DESCRIPTION
       Use bup damage to deliberately destroy blocks in a .pack or .idx file (from
       .bup/objects/pack) to test the recovery features of bup-fsck(1) or other
       programs.

       THIS PROGRAM IS EXTREMELY DANGEROUS AND WILL DESTROY YOUR DATA

       bup damage is primarily useful for automated or manual tests of data recovery
       tools, to reassure yourself that the tools actually work.

OPTIONS
       -n, --num=numblocks
              the number of separate blocks to damage in each file (default 10). Note
              that it's possible for more than one damaged segment to fall in the same
              bup-fsck(1) recovery block, so you might not damage as many recovery
              blocks as you expect. If this is a problem, use --equal.

       -s, --size=maxblocksize
              the maximum size, in bytes, of each damaged block (default 1 unless
              --percent is specified). Note that because of the way bup-fsck(1) works,
              a multi-byte block could fall on the boundary between two recovery
              blocks, and thus damage two separate recovery blocks. In small files,
              it's also possible for a damaged block to be larger than a recovery
              block. If these issues might be a problem, you should use the default
              damage size of one byte.

       --percent=maxblockpercent
              the maximum size, in percent of the original file, of each damaged
              block. If both --size and --percent are given, the maximum block size
              is the minimum of the two restrictions. You can use this to ensure that
              a given block will never damage more than one or two git-fsck(1)
              recovery blocks.

       -S, --seed=randomseed
              seed the random number generator with the given value. If you use this
              option, your tests will be repeatable, since the damaged block offsets,
              sizes, and contents will be the same every time. By default, the random
              numbers are different every time (so you can run tests in a loop and
              repeatedly test with different damage each time).

       --equal
              instead of choosing random offsets for each damaged block, space the
              blocks equally throughout the file, starting at offset 0. If you also
              choose a correct maximum block size, this can guarantee that any given
              damage block never damages more than one git-fsck(1) recovery block.
              (This is also guaranteed if you use -s 1.)

EXAMPLE
       # make a backup in case things go horribly wrong
       cp -a ~/.bup/objects/pack ~/bup-packs.bak

       # generate recovery blocks for all packs
       bup fsck -g

       # deliberately damage the packs
       bup damage -n 10 -s 1 -S 0 ~/.bup/objects/pack/*.{pack,idx}

       # recover from the damage
       bup fsck -r

SEE ALSO
       bup-fsck(1), par2(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                              bup-damage(1)