Filesystems automatically umounted Closed/Synced


 
# 1  
Old 05-07-2008

Hello friends,

I am confused by a filesystem problem on one of my AIX servers.
Some of the rootvg filesystems, e.g. /home and /var/adm/ras/platform, show closed/syncd status, and every day I have to mount them again manually.

What could be causing these filesystems to go into the closed/syncd state?
The attributes in /etc/filesystems also look normal. Please refer to the outputs below.

Code:
# lsvg -l rootvg 
rootvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5                 boot       1     2     2    closed/syncd  N/A
hd6                 paging     32    64    2    open/syncd    N/A
hd8                 jfs2log    1     2     2    open/syncd    N/A
hd4                 jfs2       16    32    2    open/syncd    /
hd2                 jfs2       120   240   2    open/syncd    /usr
hd9var              jfs2       24    48    2    open/syncd    /var
hd3                 jfs2       40    80    2    open/syncd    /tmp
hd1                 jfs2       24    48    2    closed/syncd  /home
hd10opt             jfs2       24    48    2    open/syncd    /opt
fwdump              jfs2       1     2     2    closed/syncd  /var/adm/ras/platform
tsmtestlv           jfs        10    10    1    closed/syncd  N/A
fslv00              jfs2       1     1     1    closed/syncd  /testtsm
loglv06             jfslog     1     1     1    closed/syncd  N/A
fslv07              jfs2       8     8     1    closed/syncd  /tsmdata/toc
lv02                jfs        117   117   1    closed/syncd  /mkcd/cd_images

/etc/filesystems
Code:
 
/home:
        dev             = /dev/hd1
        vfs             = jfs2
        log             = /dev/hd8
        mount           = true
        check           = true
        vol             = /home
        free            = false

Please help ASAP.
# 2  
Old 05-08-2008
The "mount = TRUE" indicates that the FS would be mounted automatically during a reboot, so that rules a reboot out.

The only other way to get a FS into "closed" state is to umount it. Maybe this is done by some script, which runs frequently?

You could write a little script that tests at regular intervals whether /home is still mounted and writes a timestamp to a log file each time it is. This way you could find out exactly when the umount happens. Then have a look at the crontabs; maybe you can find the "offender".
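A minimal watchdog sketch along these lines (the script path, log location, and one-minute interval are just assumptions; adjust to taste):

```shell
#!/bin/sh
# Hypothetical watchdog: log whether /home is still mounted, so the
# timestamps in the log pin down when the mysterious umount happens.
LOG=/tmp/home_mount.log

# 'mount' with no arguments lists the mounted filesystems; the pattern
# ' /home ' matches the mount-point column.
if mount 2>/dev/null | grep ' /home ' >/dev/null; then
    STATE="mounted"
else
    STATE="NOT mounted"
fi
echo "$(date '+%Y-%m-%d %H:%M:%S') /home $STATE" >> "$LOG"
```

Run it from cron every minute, e.g. `* * * * * /usr/local/bin/chk_home.sh`, and compare the first "NOT mounted" timestamp against the crontab entries.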

Just guessing, but could it be that a script mounts an NFS share, tries to umount it and simply gets it wrong, umounting not the NFS share but the /home FS?

I hope this helps.

bakunin
# 3  
Old 05-08-2008
Do you have automount enabled for these filesystems? Then they might appear unmounted as long as they are not used.

Rgds
zx
# 4  
Old 05-20-2008

Quote:
Originally Posted by bakunin
The "mount = TRUE" indicates that the FS would be mounted automatically during a reboot, so that rules a reboot out.

The only other way to get a FS into "closed" state is to umount it. Maybe this is done by some script, which runs frequently?

You could write a little script that tests at regular intervals whether /home is still mounted and writes a timestamp to a log file each time it is. This way you could find out exactly when the umount happens. Then have a look at the crontabs; maybe you can find the "offender".

Just guessing, but could it be that a script mounts an NFS share, tries to umount it and simply gets it wrong, umounting not the NFS share but the /home FS?

I hope this helps.

bakunin

Sorry for the late reply, guys. Thanks, bakunin.
Yes, you are right. There is a backup script run by Tivoli which has "umount all" and varyoffvg commands for some of my application VGs. Later, after the backup activity, another script varies on the same VGs and mounts my application filesystems again.
So now the question is: when the first script runs "umount all", why does /home also go into closed/syncd state, even though it is a system-related filesystem?
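If a blanket "umount all" is indeed taking /home down with it, one option is to have the backup script unmount only the mount points belonging to the application VG. A rough sketch; the VG name "appvg" and the sample lsvg output below are made up for illustration:

```shell
# Safer alternative to 'umount all': unmount only one VG's filesystems.
# Sample 'lsvg -l appvg' output, captured as a heredoc for illustration;
# on the real box you would pipe the live command into the same awk.
cat > /tmp/lsvg.out <<'EOF'
appvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
applv01             jfs2       16    32    2    open/syncd    /app/data
loglv01             jfs2log    1     2     2    open/syncd    N/A
applv02             jfs2       8     16    2    open/syncd    /app/logs
EOF

# The last field is the mount point; skip the two header lines and
# any LV (log, paging) that has no mount point (N/A).
awk 'NR > 2 && $NF != "N/A" {print $NF}' /tmp/lsvg.out
```

On the real system the backup script would umount each printed mount point and only then run `varyoffvg appvg`, leaving /home and the rest of rootvg untouched.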
# 5  
Old 05-20-2008
Unmount them manually and then use whatever the script uses to mount them, probably "mount -a", and see if it fails. Sometimes /etc/filesystems needs pruning, though that is usually only an issue when you have nested filesystems like /home and /home/fsname.