Resize LUNs and zfs-pool on sun cluster


 
# 1  
Old 02-02-2009

Hi,

I need to increase the size of a ZFS filesystem, which lies on two mirrored SAN LUNs:


Code:
root@xxxx1:/tttt/DB-data-->zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
xxxx-data-zpool      3.97G   2.97G   1.00G    74%  ONLINE     /
xxxx-logs-zpool      15.9G   3.42G   12.5G    21%  ONLINE     /

root@xxxx1:/tttt/DB-data-->zpool status
  pool: xxxx-data-zpool
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        xxxx-data-zpool                          ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B80001138280000A63C48183A82d0  ONLINE       0     0     0
            c3t600A0B800011384A00005A5548183AF1d0  ONLINE       0     0     0

errors: No known data errors

  pool: xxxx-logs-zpool
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        xxxx-logs-zpool                          ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B8000115C2C0000A1F548182CFAd0  ONLINE       0     0     0
            c3t600A0B80001159220000610D48182893d0  ONLINE       0     0     0

errors: No known data errors


root@xxxx1:/tttt/DB-data-->zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
xxxx-data-zpool                    2.97G   964M  26.5K  /xxxx-data-zpool
xxxx-data-zpool/tttt               2.97G   964M  24.5K  /xxxx-data-zpool/tttt
xxxx-data-zpool/tttt/DB-data       2.97G   547M  2.97G  /tttt/DB-data
xxxx-logs-zpool                    3.42G  12.2G  26.5K  /xxxx-logs-zpool
xxxx-logs-zpool/apache2-data        451M  1.56G   451M  /tttt/apache2-data
xxxx-logs-zpool/tttt               2.98G  12.2G  24.5K  /xxxx-logs-zpool/tttt
xxxx-logs-zpool/tttt/DB-backups    2.81G  9.19G  2.81G  /tttt/DB-backups
xxxx-logs-zpool/tttt/DB-translogs   182M   118M   182M  /tttt/DB-translogs




I need to increase the LUNs of xxxx-data-zpool and grow the filesystem /tttt/DB-data.

Code:
root@xxxx1:/-->showrev
Hostname: xxxx1
Hostid: 84a8de3c
Release: 5.10
Kernel architecture: sun4v
Application architecture: sparc
Hardware provider: Sun_Microsystems
Domain:
Kernel version: SunOS 5.10 Generic_127127-11


Storage is an IBM DS4800

The machine is part of a two-node cluster running Sun Cluster; in case of failover, the LUNs and the zpool are brought online on the second node.


On AIX you have to increase the LUNs on the storage and then run chvg -g vgname. Is there such a command for a ZFS pool on Solaris, and is it possible while the pool is in use?
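
The AIX sequence I mean looks roughly like this (datavg is just an example volume group name):

Code:
# grow the LUNs on the storage array first, then let the volume group
# re-read the physical volume sizes
chvg -g datavg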


cheers funksen
# 2  
Old 02-04-2009
Perhaps someone has experience with this without a cluster, i.e. just with extending a LUN under a mirrored zpool?
# 3  
Old 02-04-2009
I don't think you can do it with ZFS. Rather, you need to add another LUN to the zpool.
# 4  
Old 02-08-2009
I too had this problem and created a new LUN and added it to the zpool (the safe way, and it works). In your case it looks like you are building the mirror from two different controllers/disk units, so you have to create one LUN on each and add the pair to the zpool as a mirror.
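
A minimal sketch of what that could look like (the device names below are placeholders for the two new LUNs, one from each controller):

Code:
# add a second mirrored pair of LUNs to the existing pool
# (c3tNEWLUN1d0 / c3tNEWLUN2d0 are placeholders for the new devices)
zpool add xxxx-data-zpool mirror c3tNEWLUN1d0 c3tNEWLUN2d0

# the extra capacity is available right away
zpool list xxxx-data-zpool
zfs list -r xxxx-data-zpool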
# 5  
Old 02-08-2009
Another thought, but try it with files before implementing it (a rough command sketch follows the list):
- Fail one drive (say c3t600A0B80001138280000A63C48183A82d0).
- Delete this LUN on the disk unit and recreate it with a bigger size.
- Attach the newly created LUN to the same zpool (mirror mode).
- Wait till it syncs.
- Fail the other LUN (c3t600A0B800011384A00005A5548183AF1d0).
- Delete this LUN on the disk unit and recreate it with a bigger size.
- Attach the LUN to the same pool.
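
In ZFS commands that could look roughly like this (untested sketch, using the device names from the zpool status above; zpool replace is an alternative to detach/attach):

Code:
# drop one side of the mirror (the pool stays online on the other LUN)
zpool detach xxxx-data-zpool c3t600A0B80001138280000A63C48183A82d0

# recreate that LUN with the bigger size on the DS4800, then mirror it back in
zpool attach xxxx-data-zpool c3t600A0B800011384A00005A5548183AF1d0 \
    c3t600A0B80001138280000A63C48183A82d0

# wait for the resilver to finish before touching the other side
zpool status xxxx-data-zpool

# then repeat for the second LUN; the pool only picks up the larger
# size once no smaller device is left in the mirror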
# 6  
Old 02-09-2009
Thank you, houston.

The database is very small, so just adding new LUNs would leave me with 20 LUNs in the pool after two years :)

The mirror/unmirror method seems to be the best approach, I guess; the problem is I can't test it, since that's our only SAN-attached Solaris system.
# 7  
Old 02-09-2009
Code:
# mkfile 1g file1
# mkfile 1g file2
# zpool create zphouston mirror /tmp/file1 /tmp/file2
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 984M 24K 984M 1% /zphouston
# mkfile 20m /zphouston/20megfile
# sum /zphouston/20megfile |tee /zphouston/sum
0 40960 /zphouston/20megfile
# zpool offline zphouston /tmp/file2
Bringing device /tmp/file2 offline
# zpool status zphouston
pool: zphouston
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        zphouston       DEGRADED     0     0     0
          mirror        DEGRADED     0     0     0
            /tmp/file1  ONLINE       0     0     0
            /tmp/file2  OFFLINE      0     0     0

errors: No known data errors
# rm file2
# mkfile 2g file2
# zpool replace zphouston /tmp/file2 /tmp/file2
# zpool status
pool: zphouston
state: DEGRADED
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:

        NAME                  STATE     READ WRITE CKSUM
        zphouston             DEGRADED     0     0     0
          mirror              DEGRADED     0     0     0
            /tmp/file1        ONLINE       0     0     0
            replacing         DEGRADED     0     0     0
              /tmp/file2/old  UNAVAIL      0     0     0  cannot open
              /tmp/file2      ONLINE       0     0     0

errors: No known data errors
# (after couple of minutes)
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:

        NAME            STATE     READ WRITE CKSUM
        zphouston       ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            /tmp/file1  ONLINE       0     0     0
            /tmp/file2  ONLINE       0     0     0

errors: No known data errors
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 984M 20M 964M 3% /zphouston
# zpool detach zphouston /tmp/file1
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 2.0G 20M 1.9G 1% /zphouston
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:01:22 2009
config:

        NAME          STATE     READ WRITE CKSUM
        zphouston     ONLINE       0     0     0
          /tmp/file2  ONLINE       0     0     0

errors: No known data errors
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 2.0G 20M 1.9G 1% /zphouston
# rm file1
# mkfile 2g file1
# zpool attach zphouston /tmp/file2 /tmp/file1
# zpool status zphouston
pool: zphouston
state: ONLINE
scrub: resilver completed with 0 errors on Mon Feb 9 14:12:38 2009
config:

        NAME            STATE     READ WRITE CKSUM
        zphouston       ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            /tmp/file2  ONLINE       0     0     0
            /tmp/file1  ONLINE       0     0     0

errors: No known data errors
# df -h /zphouston
Filesystem size used avail capacity Mounted on
zphouston 2.0G 20M 1.9G 1% /zphouston
# sum /zphouston/20megfile
0 40960 /zphouston/20megfile
# cat /zphouston/sum
0 40960 /zphouston/20megfile
#