Operating Systems > Solaris
Strange space consumption on file-system
Post 302997723 by solaris_1977 on Thursday 18th of May 2017, 01:43:31 PM

Hello,
I have a x86 Solaris server running on VMWare. c1t0d0 is root disk of 40 GB. I am not able to find, where space is being consumed. It just available space is 2.6 GB only. There is no quota or reservation set. Can somebody give me some pointer to fix it ?
Code:
-bash-3.2# zpool list
NAME                 SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
pbdbm-wst-adm1      19.9G  11.3G  8.56G  56%  ONLINE  -
pbdbm-wst-esrpapp1  19.9G  1.90G  18.0G   9%  ONLINE  -
pbdbm-wst-lngapp1   19.9G  1.99G  17.9G  10%  ONLINE  -
pbdbm-wst-lpgapp1   19.9G  1.57G  18.3G   7%  ONLINE  -
pbdbm-wst-lsrgapp1  19.9G  1.44G  18.4G   7%  ONLINE  -
rpool               39.8G  36.5G  3.26G  91%  ONLINE  -
-bash-3.2# zfs list | grep -i rpool
rpool                           36.5G  2.64G  42.5K  /rpool
rpool/ROOT                      12.1G  2.64G    31K  legacy
rpool/ROOT/s10x_u11wos_24a      12.1G  2.64G  6.45G  /
rpool/ROOT/s10x_u11wos_24a/var  5.63G  2.64G  5.63G  /var
rpool/dump                      2.00G  2.64G  2.00G  -
rpool/export                     184K  2.64G    32K  /export
rpool/export/home                152K  2.64G   152K  /export/home
rpool/shared                    15.9G  2.64G  15.9G  /zones/shared
rpool/swap                      6.50G  2.64G  6.50G  -
-bash-3.2#
-bash-3.2#
-bash-3.2# df -h / /var /usr /opt
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10x_u11wos_24a
                        39G   6.4G   2.6G    71%    /
rpool/ROOT/s10x_u11wos_24a/var
                        39G   5.6G   2.6G    69%    /var
rpool/ROOT/s10x_u11wos_24a
                        39G   6.4G   2.6G    71%    /
rpool/ROOT/s10x_u11wos_24a
                        39G   6.4G   2.6G    71%    /
-bash-3.2#
-bash-3.2# du -sh /var /usr /opt
 5.6G   /var
 5.7G   /usr
 313M   /opt
-bash-3.2# zfs get quota,reservation | grep -i rpool
rpool                           quota        none    default
rpool                           reservation  none    default
rpool/ROOT                      quota        none    default
rpool/ROOT                      reservation  none    default
rpool/ROOT/s10x_u11wos_24a      quota        none    default
rpool/ROOT/s10x_u11wos_24a      reservation  none    default
rpool/ROOT/s10x_u11wos_24a/var  quota        none    default
rpool/ROOT/s10x_u11wos_24a/var  reservation  none    default
rpool/dump                      quota        -       -
rpool/dump                      reservation  none    default
rpool/export                    quota        none    default
rpool/export                    reservation  none    default
rpool/export/home               quota        none    default
rpool/export/home               reservation  none    default
rpool/shared                    quota        none    default
rpool/shared                    reservation  none    default
rpool/swap                      quota        -       -
rpool/swap                      reservation  none    default
-bash-3.2#
-bash-3.2# zfs list -o space | grep -i rpool
rpool                           2.64G  36.5G         0   42.5K              0      36.5G
rpool/ROOT                      2.64G  12.1G         0     31K              0      12.1G
rpool/ROOT/s10x_u11wos_24a      2.64G  12.1G         0   6.45G              0      5.63G
rpool/ROOT/s10x_u11wos_24a/var  2.64G  5.63G         0   5.63G              0          0
rpool/dump                      2.64G  2.00G         0   2.00G          4.20M          0
rpool/export                    2.64G   184K         0     32K              0       152K
rpool/export/home               2.64G   152K         0    152K              0          0
rpool/shared                    2.64G  15.9G         0   15.9G              0          0
rpool/swap                      2.64G  6.50G         0   6.50G              0          0
-bash-3.2#
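For what it's worth, the numbers above do add up once the non-root datasets are counted. A quick sanity check (a sketch using the USED values copied from the zfs list output above; pure arithmetic, no live zfs calls):

```shell
# Sum the USED values quoted in the 'zfs list' output above to see
# where rpool's 36.5G ALLOC actually goes:
awk 'BEGIN {
  used["rpool/ROOT"]   = 12.1   # boot environment, including /var
  used["rpool/dump"]   =  2.00
  used["rpool/export"] =  0.00  # 184K, negligible
  used["rpool/shared"] = 15.9   # /zones/shared -- the single biggest consumer
  used["rpool/swap"]   =  6.50
  total = 0
  for (d in used) total += used[d]
  printf "accounted for: %.1fG of 36.5G\n", total
}'
```

In other words, / and /var together hold only about 12G; rpool/shared (the zone filesystem mounted at /zones/shared), swap, and dump hold the other ~24G, which is why only ~2.6G remains free even though du on / and /usr finds comparatively little.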

Thanks
 
