Not able to increase ZFS file system on NGZ


 
# 8  
Old 10-17-2013
This would be a long list, so here I just grepped the part for that zone:
Code:
root@ctdp04_vs03:/# zpool list | grep -i ttms
pttmsp01_app_pool   26.2G  14.6G  11.6G    55%  ONLINE  -
pttmsp01_root_pool  8.69G  4.04G  4.65G    46%  ONLINE  -
root@ctdp04_vs03:/#
root@ctdp04_vs03:/# zfs list | grep -i ttms
pttmsp01_app_pool            24.2G  1.57G    18K  /pttmsp01_app_pool
pttmsp01_app_pool/ttms         21K   205M    21K  /zone/ctdp04_vs03-pttmsp01/ttms
pttmsp01_app_pool/ttms_apps  6.51G  5.49G  6.51G  /zone/ctdp04_vs03-pttmsp01/ttms/apps
pttmsp01_app_pool/ttms_prod  8.05G  3.95G  8.05G  /zone/ctdp04_vs03-pttmsp01/ttms/prod
pttmsp01_root_pool           8.01G   554M    18K  /pttmsp01_root_pool
pttmsp01_root_pool/zone      4.03G  3.97G  4.03G  /zone/ctdp04_vs03-pttmsp01/root

# 9  
Old 10-18-2013
Quote:
Originally Posted by solaris_1977
This would be a long list, so here I just grepped the part for that zone:

Well, the size of the zpool pttmsp01_app_pool is not 36GB. Is there a reason why you believe that each disk is 9GB?

Also, what does the following command show?

Code:
zfs list -r | grep -i ttms
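
It would also help to see how much each device actually contributes to the pool; zpool iostat -v breaks the capacity down per vdev:

Code:
zpool iostat -v pttmsp01_app_pool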
# 10  
Old 10-18-2013
Yes busi386, the storage team gave a LUN of 9GB, and that is emcpower56. All the other disks are the same size when I check them with inq or format.
Code:
root@ctdp04_vs03:/# zfs list -r | grep -i ttms
pttmsp01_app_pool            24.2G  1.57G    18K  /pttmsp01_app_pool
pttmsp01_app_pool/ttms         21K   205M    21K  /zone/ctdp04_vs03-pttmsp01/ttms
pttmsp01_app_pool/ttms_apps  6.51G  5.49G  6.51G  /zone/ctdp04_vs03-pttmsp01/ttms/apps
pttmsp01_app_pool/ttms_prod  7.98G  4.02G  7.98G  /zone/ctdp04_vs03-pttmsp01/ttms/prod
pttmsp01_root_pool           8.01G   554M    18K  /pttmsp01_root_pool
pttmsp01_root_pool/zone      4.03G  3.97G  4.03G  /zone/ctdp04_vs03-pttmsp01/root
root@ctdp04_vs03:/# inq -nodots | grep -i emcpower56
/dev/rdsk/emcpower56c                :EMC     :SYMMETRIX       :5773  :17!jn000   :9144000
root@ctdp04_vs03:/#
root@ctdp04_vs03:/# zpool status pttmsp01_app_pool
  pool: pttmsp01_app_pool
 state: ONLINE
 scrub: none requested
config:
        NAME           STATE     READ WRITE CKSUM
        pttmsp01_app_pool  ONLINE       0     0     0
          emcpower51c  ONLINE       0     0     0
          emcpower52c  ONLINE       0     0     0
          emcpower53c  ONLINE       0     0     0
          emcpower56a  ONLINE       0     0     0
errors: No known data errors
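
Two things stand out in that output. First, if inq is reporting capacity in kilobytes here, 9144000 KB is about 8.7GB, and 3 x 8.7GB is about 26.2GB, which is exactly the pool's size, so the fourth member would be contributing almost nothing. Second, the pool contains emcpower56a, while every other member is the whole-disk c slice. A quick check of the slice layout would confirm it (a sketch, using the device from the thread):

Code:
# print the VTOC; compare the sector counts of slice 0 (the a slice)
# against slice 2 (the c slice, which covers the whole disk)
prtvtoc /dev/rdsk/emcpower56c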

# 11  
Old 10-18-2013
For laughs and giggles, I created a pool with 4x 9GB disks.
I lost about 200 MB, probably due to the pool's on-disk labels and metadata.

The mounted file system shows a loss of another 600MB. This is due to the fact that ZFS reserves part of the pool (historically about 1/64) for its own bookkeeping, so writes can still proceed when the pool is nearly full.
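
That 1/64 figure lines up with the zpool list output further down (a quick check, assuming the old default reservation):

Code:
# 1/64th of a 35.8G pool, in GB
echo "35.8/64" | bc -l
.55937500000000000000

which is roughly 573MB, in the same ballpark as the ~600MB gap between zpool list and zfs list.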

This is a far cry from your 12GB loss. I think it's time to have a talk with your storage team to see what their take on this is.

Code:
# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c1::dsk/c1t4d0                 disk         connected    configured   unknown
usb0/1                         unknown      empty        unconfigured ok
usb0/2                         unknown      empty        unconfigured ok
usb0/3                         unknown      empty        unconfigured ok
usb0/4                         unknown      empty        unconfigured ok
usb0/5                         unknown      empty        unconfigured ok
usb0/6                         unknown      empty        unconfigured ok
usb0/7                         unknown      empty        unconfigured ok
usb0/8                         unknown      empty        unconfigured ok
usb1/1                         unknown      empty        unconfigured ok
usb1/2                         unknown      empty        unconfigured ok
usb1/3                         unknown      empty        unconfigured ok
usb1/4                         unknown      empty        unconfigured ok
usb1/5                         unknown      empty        unconfigured ok
usb1/6                         unknown      empty        unconfigured ok
usb1/7                         unknown      empty        unconfigured ok
usb1/8                         unknown      empty        unconfigured ok

# zpool create data c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
data  35.8G  79.5K  35.7G     0%  ONLINE  -
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data    75K  35.2G    21K  /data
#


# 12  
Old 10-22-2013
Hi, what happened with this issue? Did you guys figure out the problem?
Was the storage team able to provide any insight?
# 13  
Old 10-22-2013
busi, from the storage team's side everything was fine. After some struggle, we got downtime from the application team, took 5 disks of 9 GB each, mounted them as _new, and resynced the data. Later we unmounted the old ones and renamed the _new file systems to match the originals. In other words, we had to redo everything from scratch :-)
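
For anyone hitting the same wall, a rough sketch of that kind of rebuild (the pool, dataset, and device names here are illustrative, not the exact ones used):

Code:
# build a new pool on the fresh LUNs (device names hypothetical)
zpool create pttmsp01_app_pool_new emcpower60c emcpower61c emcpower62c emcpower63c emcpower64c
# recreate a dataset under a temporary _new mountpoint
zfs create -o mountpoint=/zone/ctdp04_vs03-pttmsp01/ttms_new pttmsp01_app_pool_new/ttms
# copy the data across while the application is down (one of several ways)
cd /zone/ctdp04_vs03-pttmsp01/ttms && find . | cpio -pdmu /zone/ctdp04_vs03-pttmsp01/ttms_new
# finally, unmount the old dataset and swap the mountpoint over
zfs set mountpoint=/zone/ctdp04_vs03-pttmsp01/ttms pttmsp01_app_pool_new/ttms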
The storage team suggested that we should use emcpower56c, as c represents the third slice, slice 2, which covers the whole disk.
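
In hindsight, if the only problem was that the pool picked up the small a slice, something like zpool replace might have avoided the full rebuild (a sketch only; emcpower57c is a hypothetical spare LUN, and since a striped top-level vdev cannot be removed, the replacement target must not overlap the old slice):

Code:
# swap the undersized slice for a whole-disk device on a spare LUN
zpool replace pttmsp01_app_pool emcpower56a emcpower57c
# watch the resilver finish before touching anything else
zpool status pttmsp01_app_pool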
# 14  
Old 10-22-2013
I see. Glad to see you made progress with it.

On a side note, if you have any new employees who need to learn ZFS, please refer them to my video:

Zeta file system - YouTube
