Solaris forum — Resize LUNs and zfs-pool on Sun Cluster
Post 302282873 by funksen, Monday 2nd of February 2009, 05:15 AM

Hi,

I need to increase the size of a ZFS filesystem that lies on two mirrored SAN LUNs:


Code:
root@xxxx1:/tttt/DB-data-->zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
xxxx-data-zpool      3.97G   2.97G   1.00G    74%  ONLINE     /
xxxx-logs-zpool      15.9G   3.42G   12.5G    21%  ONLINE     /

root@usxxxx1:/tttt/DB-data-->zpool status
  pool: xxxx-data-zpool
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        xxxx-data-zpool                          ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B80001138280000A63C48183A82d0  ONLINE       0     0     0
            c3t600A0B800011384A00005A5548183AF1d0  ONLINE       0     0     0

errors: No known data errors

  pool: xxxx-logs-zpool
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        xxxx-logs-zpool                          ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c3t600A0B8000115C2C0000A1F548182CFAd0  ONLINE       0     0     0
            c3t600A0B80001159220000610D48182893d0  ONLINE       0     0     0

errors: No known data errors


root@xxxx1:/tttt/DB-data-->zfs list
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
xxxx-data-zpool                        2.97G   964M  26.5K  //xxxx-data-zpool
xxxx-data-zpool/tttt               2.97G   964M  24.5K  //xxxx-data-zpool/tttt
xxxx-data-zpool/tttt/DB-data       2.97G   547M  2.97G  //tttt/DB-data
xxxx-logs-zpool                        3.42G  12.2G  26.5K  //xxxx-logs-zpool
xxxx-logs-zpool/apache2-data            451M  1.56G   451M  ///tttt/apache2-data
xxxx-logs-zpool/tttt               2.98G  12.2G  24.5K  //xxxx-logs-zpool/tttt
xxxx-logs-zpool/tttt/DB-backups    2.81G  9.19G  2.81G  //tttt/DB-backups
xxxx-logs-zpool/tttt/DB-translogs   182M   118M   182M  //tttt/DB-translogs




I need to increase the LUNs behind xxxx-data-zpool, and grow the filesystem //tttt/DB-data.

Code:
root@xxxx1:/-->showrev
Hostname: xxxx1
Hostid: 84a8de3c
Release: 5.10
Kernel architecture: sun4v
Application architecture: sparc
Hardware provider: Sun_Microsystems
Domain:
Kernel version: SunOS 5.10 Generic_127127-11


Storage is an IBM DS4800

The machine is part of a two-node Sun Cluster; in case of failover, the LUNs and the zpool are brought online on the second node.


On AIX you increase the LUNs on the storage side and then run chvg -g vgname. Is there an equivalent command for a ZFS pool on Solaris, and can it be done online, while the pool is in use?
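For reference, the ZFS-side equivalent would look roughly like this. This is only a sketch: it assumes the LUNs have already been grown on the DS4800, and that the Solaris release supports the autoexpand pool property and zpool online -e (older Solaris 10 updates may lack both, in which case an export/import of the pool is needed). Pool and device names are taken from the zpool status output above.

```shell
# Sketch only: LUNs must already be resized on the DS4800 side.
# On releases with the autoexpand property, let the pool grow
# automatically whenever its underlying devices grow:
zpool set autoexpand=on xxxx-data-zpool

# Or trigger expansion per device explicitly (both mirror halves):
zpool online -e xxxx-data-zpool c3t600A0B80001138280000A63C48183A82d0
zpool online -e xxxx-data-zpool c3t600A0B800011384A00005A5548183AF1d0

# On releases without these options, export and re-import the pool.
# Under Sun Cluster this means an outage: take the resource group
# offline first so the cluster does not fail the pool over mid-resize.
# zpool export xxxx-data-zpool
# zpool import xxxx-data-zpool

# ZFS filesystems pick up the new pool space automatically; verify:
zpool list xxxx-data-zpool
zfs list -r xxxx-data-zpool
```

Unlike AIX, there is no per-volume-group resync step: once the pool sees the larger devices, all datasets in it share the added space.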


cheers funksen
 

Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.