I need to increase the size of a ZFS filesystem that lives on two mirrored SAN LUNs.
I need to grow the LUNs behind xxxx-data-zpool, and the filesystem //tttt/DB-data.
The storage is an IBM DS4800.
The machine is part of a two-node Sun Cluster; in case of a failover, the LUNs and the zpool are brought online on the second node.
On AIX you grow the LUNs on the storage and then run chvg -g vgname. Is there an equivalent command for a ZFS pool on Solaris, and can it be done while the system is in operation?
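For the record: on newer ZFS releases there is a rough equivalent, in that the pool can pick up the new LUN size online once the array side has been grown. Whether your Solaris level already has the autoexpand property and zpool online -e is something to verify first; a minimal sketch, using the pool name from the question and the two LUN device names quoted in the reply below:
# zpool set autoexpand=on xxxx-data-zpool
# zpool online -e xxxx-data-zpool c3t600A0B80001138280000A63C48183A82d0
# zpool online -e xxxx-data-zpool c3t600A0B800011384A00005A5548183AF1d0
# zpool list xxxx-data-zpool
On older releases you typically need a zpool export/import after both mirror sides have been grown, which is not an online operation.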
I had this problem too and solved it by creating a new LUN and adding it to the zpool (the safe way, and it works). In your case it looks like you are mirroring across two different controllers/disk units, so you would have to create one LUN on each and add them to the zpool as a mirror.
Another thought, but try it with files before implementing it:
- Fail one drive (say c3t600A0B80001138280000A63C48183A82d0).
- Delete this LUN on the disk unit and recreate it with a bigger size.
- Attach the newly created LUN to the same zpool (as a mirror).
- Wait until it resilvers.
- Fail the other LUN (c3t600A0B800011384A00005A5548183AF1d0).
- Delete this LUN on the disk unit and recreate it with a bigger size.
- Attach the LUN to the same pool as a mirror again (a command-level sketch of these steps follows below).
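What that might look like against the real pool, assuming zpool detach/attach is used to fail and re-add a mirror side (the recreated LUN may come back under a different c#t#d# device name, so check format/cfgadm before attaching):
# zpool detach xxxx-data-zpool c3t600A0B80001138280000A63C48183A82d0
# (delete the LUN on the DS4800, recreate it with the larger size, rescan devices)
# zpool attach xxxx-data-zpool c3t600A0B800011384A00005A5548183AF1d0 c3t600A0B80001138280000A63C48183A82d0
# zpool status xxxx-data-zpool
# (wait for the resilver to complete, then repeat with the LUNs swapped)
The pool will not report the larger size until both mirror sides have been grown, and depending on the ZFS version an export/import (or the autoexpand route mentioned above) may still be needed before the new space shows up. Here is the same idea exercised on a throw-away pool built from files: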
# mkfile 1g /tmp/file1
# mkfile 1g /tmp/file2
# zpool create zphouston mirror /tmp/file1 /tmp/file2
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 984M 24K 984M 1% /zphouston
# mkfile 20m /zphouston/20megfile
# sum /zphouston/20megfile |tee /zphouston/sum
0 40960 /zphouston/20megfile
# zpool offline zphouston /tmp/file2
Bringing device /tmp/file2 offline
# zpool status zphouston
pool: zphouston
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scrub: none requested
config:
        NAME            STATE     READ WRITE CKSUM
        zphouston       DEGRADED     0     0     0
          mirror        DEGRADED     0     0     0
            /tmp/file1  ONLINE       0     0     0
            /tmp/file2  OFFLINE      0     0     0
errors: No known data errors
# zpool online zphouston /tmp/file2
# (after a couple of minutes the resilver finishes)
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:
        NAME            STATE     READ WRITE CKSUM
        zphouston       ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            /tmp/file1  ONLINE       0     0     0
            /tmp/file2  ONLINE       0     0     0
errors: No known data errors
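Not shown in the transcript above, but to close the loop: the checksum saved before the offline/online cycle can be compared with a fresh one to confirm the data survived the resilver, and the test pool can then be destroyed:
# sum /zphouston/20megfile
0 40960 /zphouston/20megfile
# cat /zphouston/sum
0 40960 /zphouston/20megfile
# zpool destroy zphouston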