02-02-2009
Why does the # of blocks change for a file on a ZFS filesystem?
I created a zpool and zfs filesystem in OpenSolaris. I made two NFS mount points:
> zpool history
History for 'raidpool':
2009-01-15.17:12:48 zpool create -f raidpool raidz1 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
2009-01-15.17:15:54 zfs create -o mountpoint=/vol01 -o sharenfs=on -o canmount=on raidpool/vol01
2009-01-15.17:20:13 zfs create -o mountpoint=/vol02 -o sharenfs=on -o canmount=on -o compression=lzjb raidpool/vol02
I did not make the mount points (vol01 and vol02) into volumes. I know you can set a fixed blocksize when you create volumes, but volumes cannot be shared over NFS.
I am assuming that vol01 and vol02 use variable blocksizes because I did not explicitly specify one. Thus, my assumption is that ZFS would use the smallest power-of-2 blocksize that fits the data, and the smallest blocksize is 512 bytes.
I use the stat command to check the filesize, the blocksize, and the # of blocks.
I created a file that is exactly 512 bytes in size on /vol01 (the one without LZJB compression) and ran the following stat command:
stat --printf "%n %b %B %s %o\n" *
The %b is the number of blocks used.
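As a sanity check on what these fields mean, here is a minimal sketch you can run on any GNU/Linux box with GNU coreutils (the file name is illustrative); the allocated size of a file is %b multiplied by %B:

```shell
# %b = blocks allocated (in %B-byte units), %s = file size, %o = preferred I/O size
printf 'x%.0s' $(seq 1 512) > dense.512     # write exactly 512 bytes of data
stat --printf "%n %b %B %s %o\n" dense.512

# allocated bytes on disk = blocks * block unit
blocks=$(stat --printf "%b" dense.512)
unit=$(stat --printf "%B" dense.512)
echo "allocated: $((blocks * unit)) bytes"
```

Note that %B is almost always 512 regardless of filesystem, so %b counts 512-byte units even when the filesystem's real allocation unit is larger.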
The number of blocks changes a few minutes after the file is created:
# stat --printf "%n %b %B %s %o\n" *
file.0 3 512 3 4096
file.512 1 512 512 4096
# stat --printf "%n %b %B %s %o\n" *
file.0 3 512 3 4096
file.512 1 512 512 4096
# stat --printf "%n %b %B %s %o\n" *
file.0 3 512 3 4096
file.512 3 512 512 4096
Why does the number of blocks change a few minutes after the file is created? And why does a file that is only 512 bytes in size use 3 blocks, when only 1 block should be needed?
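A likely explanation: ZFS batches writes into transaction groups and only charges the file for all of its data and metadata blocks once the group commits to disk, so stat can report a smaller block count in the window before the commit; the extra blocks in the final count are plausibly metadata plus raidz parity overhead, which ZFS accounts against the file. A minimal way to observe the settling (paths are illustrative; exact counts depend on pool layout and compression):

```shell
# create a 512-byte file on the ZFS filesystem and watch allocation settle
dd if=/dev/zero of=/vol01/file.512 bs=512 count=1
stat --printf "%n %b %B %s %o\n" /vol01/file.512   # may still show the pre-commit count
sync                                               # force the pending transaction group out
stat --printf "%n %b %B %s %o\n" /vol01/file.512   # reflects the final on-disk allocation
```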
7 More Discussions You Might Find Interesting
1. Solaris
I created a zpool and two ZFS volumes in OpenSolaris. I would like both ZFS volumes to be exportable. However, I don't know how to set that up.
These are the steps I did:
1) Create the zpool using raidz1 across five disks.
I have six disks and created a zpool across 5 of them. c4t0d0... (3 Replies)
Discussion started by: sqa777
2. Solaris
Hey all,
I have a machine with 16 drive slots. Two of the drives have a ZFS mirror of the operating system, the other 14 contain the storage raidz.
So, after installing Opensolaris on the OS drives, how can I remount the storage raid?
TIA (11 Replies)
Discussion started by: PatrickBaer
3. Filesystems, Disks and Memory
Hi Folks,
Looking for info here more than an actual HowTo: does anyone know if there is a way of converting a Veritas or UFS filesystem to ZFS while leaving the resident data intact?
All that I have been able to find, including the commercial products, seems to require the FS to be backed up from... (1 Reply)
Discussion started by: gull04
4. Shell Programming and Scripting
Hello,
I have a file like this:
FILE.TXT:
(define argc :: int)
(assert ( > argc 1))
(assert ( = argc 1))
<check>
#
(define c :: float)
(assert ( > c 0))
(assert ( = c 0))
<check>
#
Now, I want to separate each block ('#' is the delimiter), make them separate files, and then send them as... (5 Replies)
Discussion started by: paramad
5. Solaris
Hi,
Recently we got a new Oracle T5 server and set it up for our database. For our database files we set up one ZFS filesystem. When I use iostat -xc, the output is as below. As you can see, the value for vdc4 is quite high.
extended device statistics cpu
device ... (32 Replies)
Discussion started by: tharmendran
6. Solaris
Hello,
Need to ask a question about extending a ZFS filesystem.
Currently, after running df -kh:
u01-data-pool/data 600G 552 48G 93% /data
/data has only 48 GB remaining and is at 93% of total capacity.
zpool u01-data-pool has more than 200 GB... (14 Replies)
Discussion started by: shahzad53
7. UNIX for Beginners Questions & Answers
I have an ESXi 6.7 server running a Solaris 10 x86 VM (actually a bunch of them). The VM uses ZFS for the pools (of course). I expand the underlying ESXi logical disk, for example from 50 GB to 100 GB, then I set autoexpand=on on the pool that belongs to the ESXi logical disk.
what am i missing to... (2 Replies)
Discussion started by: mrmurdock
AMZFS-SENDRECV(8) System Administration Commands AMZFS-SENDRECV(8)
NAME
amzfs-sendrecv - Amanda script to create zfs sendrecv
DESCRIPTION
amzfs-sendrecv is an Amanda application implementing the Application API. It should not be run by users directly. It creates a ZFS snapshot
of the filesystem and backs up the snapshot with 'zfs send'. Snapshots are kept after the backup is done; this increases the disk space used on
the client, but it is necessary in order to do incremental backups. If you want only full backups, you can disable this feature by setting
the KEEP-SNAPSHOT property to 'NO'. Only restoration of the complete backup is allowed; it is impossible to restore a single file.
The application is run as the amanda user, which must have a number of ZFS privileges:
zfs allow -ldu AMANDA_USER mount,create,rename,snapshot,destroy,send,receive FILESYSTEM
Some systems don't have "zfs allow", but you can give the Amanda backup user the rights to manipulate ZFS filesystems by using the
following command:
usermod -P "ZFS File System Management,ZFS Storage Management" AMANDA_USER
This requires that you run zfs under pfexec; set the PFEXEC property to YES.
The format of the diskdevice in the disklist (DLE) must be one of:
Description Example
---------- -------
Mountpoint /data
ZFS pool name datapool
ZFS filesystem datapool/database
ZFS logical volume datapool/dbvol
The filesystem doesn't need to be mounted.
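For instance, a disklist might contain entries in any of the forms listed above (hostname and dataset names here are illustrative, and the dumptype is the one defined in the EXAMPLE section below):

```
# hostname           diskdevice           dumptype
client.example.com   /data                user-zfs-sendrecv
client.example.com   datapool             user-zfs-sendrecv
client.example.com   datapool/database    user-zfs-sendrecv
client.example.com   datapool/dbvol       user-zfs-sendrecv
```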
PROPERTIES
This section lists the properties that control amzfs-sendrecv's functionality. See amanda-applications(7) for information on the
Application API, application configuration.
DF-PATH
Path to the 'df' binary; searched for in $PATH by default.
KEEP-SNAPSHOT
If "YES" (the default), snapshots are kept after the backup; if set to "NO", snapshots are not kept and incremental backups will fail.
ZFS-PATH
Path to the 'zfs' binary; searched for in $PATH by default.
PFEXEC-PATH
Path to the 'pfexec' binary; searched for in $PATH by default.
PFEXEC
If "NO" (the default), pfexec is not used; if set to "YES", pfexec is used.
EXAMPLE
In this example, a dumptype is defined to use amzfs-sendrecv application to backup a zfs filesystem.
define application-tool amzfs_sendrecv {
comment "amzfs-sendrecv"
plugin "amzfs-sendrecv"
#property "DF-PATH" "/usr/sbin/df"
#property "KEEP-SNAPSHOT" "YES"
#property "ZFS-PATH" "/usr/sbin/zfs"
#property "PFEXEC-PATH" "/usr/sbin/pfexec"
#property "PFEXEC" "NO"
}
define dumptype user-zfs-sendrecv {
program "APPLICATION"
application "amzfs_sendrecv"
}
SEE ALSO
amanda(8), amanda.conf(5), amanda-client.conf(5), amanda-applications(7)
The Amanda Wiki: http://wiki.zmanda.com/
AUTHOR
Jean-Louis Martineau <martineau@zmanda.com>
Zmanda, Inc. (http://www.zmanda.com)
Amanda 3.3.1 02/21/2012 AMZFS-SENDRECV(8)