Operating Systems > Linux > Red Hat — File system full, but not really.
Post 302521864 by geelsu on Thursday 12th of May 2011 01:08:03 PM
File system full, but not really.

Hey all,

What do you think mostly happened in the following situation?

I have a Red Hat 5.5 server. Someone, somehow, managed to create two .nfs000.... type files that totaled over a terabyte in size. I removed them and thought things were back to normal. Then I started getting complaints from users, via a desktop popup they have, that /home was full. I ssh'ed to the server and ran df, and sure enough it reported 100% in use. But du reported only 109 GB in use, and the filesystem is 1.3 TB. I cleaned up a few things and monitored the usage with df. The extra space very rapidly disappeared and df again reported 100%.

So I rebooted. After the reboot, df reported only 20% in use and du backed it up. I did notice during the reboot that the hard drive lights were flashing wildly, which suggested to me that the RAID 5 was rebuilding itself or doing some sort of consistency check. It actually took a while for the system to come up, and I was getting pdflush timeouts before nash actually started.

Those .nfs files obviously maxed out the filesystem, but why would the system not be able to account for its disk space properly after their removal?

Thanks.
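The df-versus-du gap described above is the classic signature of files deleted while a process still held them open: df counts allocated blocks, while du only walks directory entries, so the space is invisible to du until the last file descriptor closes. A minimal, self-contained sketch of the effect (the paths and sizes are illustrative, not from the original server):

```shell
# Create a file, keep it open in a background process, then unlink it.
tmp=$(mktemp /tmp/holdme.XXXXXX)
dd if=/dev/zero of="$tmp" bs=1M count=8 2>/dev/null

tail -f "$tmp" >/dev/null &    # background process holds an open fd
holder=$!
sleep 1
rm -f "$tmp"                   # directory entry gone; blocks still allocated

# The fd's /proc link now reports the file as "(deleted)": this is the
# space df sees but du cannot.
deleted_fds=$(ls -l "/proc/$holder/fd" | grep -c deleted)
echo "deleted-but-open fds held by PID $holder: $deleted_fds"

kill "$holder"
```

Killing (or restarting) the holding process releases the blocks, which is why a reboot "fixed" it.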
 

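In a case like this, the holding process can usually be found before rebooting by scanning /proc directly (equivalent to `lsof +L1` where lsof is installed; Linux-specific, and root is needed to see other users' processes). A generic sketch, not specific to this server:

```shell
# List deleted-but-still-open files by walking /proc fd symlinks.
list_deleted_open_files() {
  for fd in /proc/[0-9]*/fd/*; do
    # Unreadable fds (other users' processes, exited PIDs) are skipped.
    target=$(readlink "$fd" 2>/dev/null) || continue
    case $target in
      *' (deleted)')
        pid=${fd#/proc/}
        echo "PID ${pid%%/*} still holds: $target"
        ;;
    esac
  done
}

list_deleted_open_files
```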
volinfo(8)						      System Manager's Manual							volinfo(8)

NAME
       volinfo - Print accessibility and usability of volumes

SYNOPSIS
       /usr/sbin/volinfo [-Vp] [-g diskgroup] [-U usetype] [-o useopt] [volume...]

OPTIONS
       The following options are recognized:

       -V          Writes a list of the utilities that would be called from
                   volinfo, along with the arguments that would be passed.
                   The -V option performs a ``mock run'', so the utilities
                   are not actually called.

       -p          Reports the name and condition of each plex in each
                   reported volume.

       -U usetype  Specifies the usage type for the operation.  If no volume
                   operands are specified, the output is restricted to
                   volumes with this usage type.  If volume operands are
                   specified, this results in a failure message for all named
                   volumes that do not have the indicated usage type.

       -g diskgroup
                   Specifies the disk group for the operation, either by disk
                   group ID or by disk group name.  By default, the disk
                   group is chosen based on the volume operands.  If no
                   volume operands are specified, the disk group defaults to
                   rootdg.

       -o useopt   Passes usage-type-specific options to the operation.  This
                   option is currently unsupported.

DESCRIPTION
       The volinfo utility reports a usage-type-dependent condition on one or
       more volumes in a disk group.  A report for each volume specified by a
       volume operand is written to the standard output.  If no volume
       operands are given, a volume condition report is provided for each
       volume in the selected disk group.

       Each invocation can be applied to only one disk group at a time, due
       to internal implementation constraints.  Any volume operands will be
       used to determine a default disk group, according to the standard disk
       group selection rules described in volintro(8).  A specific disk group
       can be forced with -g diskgroup.

   Output Format
       Summary reports for each volume are printed in one-line output
       records.  Each volume output line consists of blank-separated fields
       for the volume name, volume usage type, and volume condition.  Each
       plex output line consists of blank-separated fields for the plex name
       and the plex condition.

       The following example shows the volume summary:

              # volinfo
              bigvol          fsgen        Startable
              vol2            fsgen        Started
              brokenvol       gen          Unstartable

       The following example shows the plex summary, with the plex records
       accompanied by their volume records:

              # volinfo -p
              vol  bigvol          fsgen        Startable
              plex bigvol-01      ACTIVE
              vol  vol2            fsgen        Started
              plex vol2-01        ACTIVE
              vol  brokenvol       gen          Unstartable

   Volume Conditions
       The volume condition is a usage-type-dependent summary of the state of
       a volume.  This condition is derived from the volume's kernel-enabled
       state and the usage-type-dependent states of the volume's plexes.

       Volume conditions for the fsgen and gen usage types are reported as
       follows:

       Startable         The volume is not enabled and at least one of the
                         plexes has a reported condition of ACTIVE or CLEAN.
                         A volume startall operation would likely succeed in
                         starting a volume in this condition.

       Unstartable       The volume is not enabled and fails to meet the
                         criteria for being Startable.  A volume in this
                         condition is not started and may be configured
                         incorrectly or prevented from automatic startup
                         (with volume startall) because of errors or other
                         conditions.

       Started           The volume is enabled and at least one of the
                         associated plexes is enabled in read-write mode
                         (which is normal for enabled plexes in the ACTIVE
                         and EMPTY conditions).  A volume in this condition
                         has been started and can be used.

       Started Unusable  The volume is enabled, but does not meet the
                         criteria for being Started.  A volume in this
                         condition has been started, but is inaccessible
                         because of errors that have occurred since the
                         volume was started, or because of administrative
                         actions, such as voldg -k rmdisk.

       Volume conditions for volumes of the raid5 usage type include the
       following conditions used for the fsgen and gen usage types:
       Startable, Unstartable, Started, Started Unusable.

       Additional volume conditions for raid5 volumes are:

       o  The RAID-5 plex of the volume is in degraded mode due to the
          unavailability of a subdisk in that plex.

       o  Some of the parity in the RAID-5 plex is stale and requires
          recovery.

   Plex Conditions
       The following plex conditions (reported with -p) are reported for the
       fsgen and gen usage types:

       o  No physical disk was found for one of the subdisks in the plex.
          This implies either that the physical disk failed, making it
          unrecognizable, or that the physical disk is no longer attached
          through a known access path.

       o  A physical disk used by one of the subdisks in the plex was removed
          through administrative action with voldg -k rmdisk.

       o  The plex was detached from use as a result of an uncorrectable I/O
          failure on one of the subdisks in the plex.

       o  The plex does not contain valid data, either as a result of a disk
          replacement affecting one of the subdisks in the plex, or as a
          result of an administrative action on the plex, such as volplex
          det.

       o  The plex contains valid data and the volume was stopped cleanly.

       o  Either the volume is started and the plex is enabled, or the volume
          was not stopped cleanly and the plex was valid when the volume was
          stopped.

       o  The plex was disabled using the volmend off operation.

       o  The plex is part of a volume that has not yet been initialized.

       o  The plex is associated temporarily as part of a current operation,
          such as volplex cp or volplex att.  A system reboot or manual
          starting of a volume will dissociate the plex.

       o  The plex was created for temporary use by a current operation.  A
          system reboot or manual starting of a volume will remove the plex.

       o  The plex and its subdisks were created for temporary use by a
          current operation.  A system reboot or manual starting of the
          volume will remove the plex and all of its subdisks.

       o  The plex is being attached as part of a backup operation by the
          volassist snapstart operation.  When the attach is complete, the
          condition will change to SNAPDONE.  A system reboot or manual
          starting of the volume will remove the plex and all of its
          subdisks.

       o  A volassist snapstart operation completed the process of attaching
          the plex.  It is a candidate for selection by the volassist
          snapshot operation.  A system reboot or manual starting of the
          volume will remove the plex and all of its subdisks.

       o  The plex is being attached as part of a backup operation by the
          volplex snapstart operation.  When the attach is complete, the
          condition will change to SNAPDIS.  A system reboot or manual
          starting of the volume will dissociate the plex.

       o  A volassist snapstart operation completed the process of attaching
          the plex.  It is a candidate for selection by the volplex snapshot
          operation.  A system reboot or manual starting of the volume will
          dissociate the plex.

       Plexes of raid5 volumes can be either data plexes (that is, RAID-5
       plexes) or log plexes.  Plex conditions for RAID-5 plexes and log
       plexes include the following conditions used for the fsgen and gen
       usage types: NODAREC, REMOVED, IOFAIL, CLEAN, ACTIVE, OFFLINE.

       RAID-5 plexes can have these additional conditions:

       o  Due to subdisk failures, the plex is in degraded mode.  This
          indicates a loss of data redundancy in the RAID-5 volume, and any
          further failures could cause data loss.

       o  The parity is not in sync with the data in the plex.  This
          indicates a loss of data redundancy in the RAID-5 volume, and any
          further failures could cause data loss.

       o  A double failure occurred within the plex.  The plex is unusable
          due to subdisk failures and/or stale parity.

       Log plexes of RAID-5 volumes can have this additional condition:

       o  The contents of the plex are not usable as logging data.

EXIT CODES
       The volinfo utility exits with a nonzero status if the attempted
       operation fails.  A nonzero exit code is not a complete indicator of
       the problems encountered, but rather denotes the first condition that
       prevented further execution of the utility.  See volintro(8) for a
       list of standard exit codes.

SEE ALSO
       volintro(8), volassist(8), volmend(8), volplex(8), volsd(8), volume(8)

                                                                    volinfo(8)
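Because volinfo emits one blank-separated record per line, its reports are easy to post-process with standard text tools. volinfo itself is unlikely to exist outside LSM systems, so the sketch below filters a captured sample of `volinfo -p` output modeled on the manual's examples (the IOFAIL plex record for brokenvol is hypothetical, added so the filter has something to match):

```shell
# Flag any plex whose condition is not ACTIVE or CLEAN, tagging it with
# its owning volume: "vol" records carry the volume, "plex" records the plex.
sample='vol bigvol fsgen Startable
plex bigvol-01 ACTIVE
vol vol2 fsgen Started
plex vol2-01 ACTIVE
vol brokenvol gen Unstartable
plex brokenvol-01 IOFAIL'

printf '%s\n' "$sample" | awk '
  $1 == "vol"  { vol = $2 }
  $1 == "plex" && $3 != "ACTIVE" && $3 != "CLEAN" {
    print vol ": plex " $2 " in condition " $3
  }'
```

On a live system the pipeline would read `volinfo -p` directly instead of the here-variable.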
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.