Full Discussion: ZFS Filesystem
Operating Systems > Solaris. Post 302952867 by tharmendran on Sunday 23 August 2015, 11:40 PM
ZFS Filesystem

Hi,
We recently got a new Oracle T5 server and set it up for our database, with the database files on a single ZFS filesystem. When I run iostat -xc, the output is as below. As you can see, the values for vdc4 are quite high.

Code:
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
vdc0      0.6    3.9   10.8   37.5  0.0  0.0    1.9   0   0  10  7  0 83
vdc1     12.9    2.6 1644.2  309.9  0.0  0.1    7.6   0   1
vdc2      9.5    2.8 1208.8  351.9  0.0  0.1    8.4   0   1
vdc3      0.2    2.4   11.9   38.1  0.0  0.0    1.9   0   0
vdc4    266.6   83.1 32967.7 7561.5  0.0  3.2    9.1   0  65
vdc5      2.4    3.3  301.1  378.2  0.0  0.1   12.6   0   1
vdc6      5.8   52.1  715.3  718.0  0.0  0.1    2.4   0   6
vdc7      3.9   52.1  474.5  717.9  0.0  0.1    2.1   0   6
vdc8      0.0    0.0    0.0    0.0  0.0  0.0    2.3   0   0
nfs1      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
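
To see which pool the busy device belongs to and how the load is spread across its vdevs, zpool status and zpool iostat can be used. A minimal sketch, with dbpool as a placeholder pool name (not the real pool name from this system):

Code:
# Show the pool layout and which devices back it
zpool status dbpool
# Per-vdev I/O statistics, sampled every 5 seconds, 3 samples
zpool iostat -v dbpool 5 3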

When I look at the ::memstat output, ZFS file data is taking a high percentage of memory.

Code:
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     355350              2776    8%
ZFS File Data             1660358             12971   40%
Anon                      1874388             14643   45%
Exec and libs               12338                96    0%
Page cache                 176508              1378    4%
Free (cachelist)             6483                50    0%
Free (freelist)            108879               850    3%
Total                    4194304             32768
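
As far as I know, the "ZFS File Data" line in ::memstat is file data cached by the ZFS ARC, which normally shrinks when applications need the memory. Its current size and limits can be checked with the arcstats kstat or from mdb; a quick sketch:

Code:
# Current ARC size, target size and hard maximum, in bytes
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
# Same information (and more) from the kernel debugger, run as root
echo "::arc" | mdb -k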

Is this normal? When we run a full database backup, the database hangs even though the server load looks normal during the backup. Could this be related to the ZFS filesystem settings? I hope someone can enlighten me on this.
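
For database workloads, a commonly suggested tuning is to cap the ARC so it does not compete with the database's own buffer cache, and optionally to keep only metadata in the ARC for the dataset holding the data files. This is only a sketch; the 8 GB cap and the dataset name dbpool/data are placeholders, not values from this system:

Code:
# In /etc/system: cap the ARC at 8 GB (0x200000000 bytes); takes effect after a reboot
set zfs:zfs_arc_max = 0x200000000

# Let the database do the data caching; keep only metadata in the ARC
# (run as root; dataset name is a placeholder)
zfs set primarycache=metadata dbpool/data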

 
