I am confused by an AIX filesystem problem.
On one of my servers, some of the rootvg filesystems, e.g. /home and /var/adm/ras/platform, show a Closed/synced status.
Every day I have to mount these filesystems manually.
What causes filesystems to go into the Closed/synced state?
The attributes in /etc/filesystems also look normal. Please refer to the outputs below.
The "mount = TRUE" indicates that the FS would be mounted automatically during a reboot, so that rules a reboot out.
The only other way to get a FS into the "closed" state is to umount it. Maybe this is done by some script which runs frequently?
You could write a little script which tests at regular intervals whether /home is still mounted and writes a timestamp to a log file each time it is. This way you could find out when exactly the umount happens. Then have a look in the crontabs; maybe you can find the "offender".
Just guessing, but could it be that a script mounts an NFS share, tries to umount it and simply gets it wrong - umounting not the NFS share but the /home FS?
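A minimal sketch of such a watchdog script, assuming a POSIX shell. The log path and filesystem are placeholders, and the check assumes the mount point appears as the second field of the `mount` output, as it does for local JFS/JFS2 filesystems on AIX:

```shell
#!/bin/sh
# Watchdog sketch: append a timestamped line to a log noting whether
# /home is currently mounted. Run it from cron every few minutes and
# the log will bracket the moment the umount happens.
# FS and LOG are assumptions - adjust to taste.
FS=/home
LOG=/tmp/home_mount.log

# is_mounted FS MOUNT_OUTPUT
# Returns 0 if FS appears as a mount point (second field of the
# AIX "mount" output for locally mounted filesystems).
is_mounted() {
    printf '%s\n' "$2" | awk -v fs="$1" '$2 == fs { found = 1 } END { exit !found }'
}

if is_mounted "$FS" "$(mount)"; then
    echo "$(date): $FS is mounted" >> "$LOG"
else
    echo "$(date): $FS is NOT mounted" >> "$LOG"
fi
```

A crontab entry like `0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/local/bin/home_watch.sh` (path hypothetical; AIX cron does not support the `*/5` step syntax) narrows the umount down to a five-minute window.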
I hope this helps.
bakunin
Sorry for the late reply, guys. Thanks, bakunin.
Yes, you're right. There's a backup script run by Tivoli which issues "umount all" and varyoffvg commands for some of my application VGs. Later, after the backup activity, another script varyons the same VGs and mounts my application filesystems again.
So now the question is: when the first script runs "umount all", why does /home end up in the Closed/synced state, even though it is a system-related filesystem?
Unmount them manually and then use whatever the script uses to mount them, probably "mount -a", and see if it fails. Sometimes /etc/filesystems needs pruning, though that's usually when you have nested filesystems like /home and /home/fsname, etc.
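As a quick cross-check before pruning, you can list which stanzas "mount -a" (or a reboot) would actually try to bring up. A minimal sketch, assuming the standard /etc/filesystems stanza layout (header flush-left ending in a colon, attributes indented as `name = value`); the function name is made up:

```shell
#!/bin/sh
# list_auto_mounts [FILE]
# Print the mount points whose stanza in FILE (default /etc/filesystems)
# carries "mount = true" -- i.e. everything "mount -a" would try to mount.
list_auto_mounts() {
    awk '
        # A stanza header starts in column 1 and ends with a colon.
        /^[^ \t].*:/ { stanza = $1; sub(/:$/, "", stanza) }
        # Inside a stanza, match an indented "mount = true" attribute line.
        $1 == "mount" && $3 == "true" { print stanza }
    ' "${1:-/etc/filesystems}"
}
```

Comparing its output against the live `mount` table after the backup window shows exactly which auto-mount filesystems were left in the Closed/synced state.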