So according to prstat, you have 16 GB of memory. Do you feel there is still a problem to solve?
I'm running mdb now...Guess I have the older updates....it's taking a while.
How do you figure that I have 16 GB of memory from the prstat output?
I'm just wondering: with the low swap size (516.xx MB), will the zones be able to use the whole installed 16 GB of memory?
And if I were to increase it via a swap file, I only need to touch the global zone, right?
And if I were to add swap file to my system... should I shut down my containers in order to do so?
Also, although my global zone is running UFS... it has a large ZFS pool that all the containers run on... would you recommend that I create the swap file on the ZFS or the UFS filesystem?
You have your 16 GB visible, and around half of it is used as ZFS cache, which is fine. You still have RAM available, so there is nothing to worry about here.
Quote:
And if I were to add swap file to my system... should I shut down my containers in order to do so?
No, you can add and remove swap on a live system.
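A quick sketch of what that looks like (the path /export/swapfile and the 2 GB size are made-up examples):

```
# mkfile 2g /export/swapfile
# swap -a /export/swapfile
# swap -l
```

To remove it later, use swap -d /export/swapfile and then delete the file.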
Quote:
Also although my global zone is running UFS... it has a large ZFS that all the containers run on... would you recommend that I create the swap file on the ZFS or UFS filesystem?
You might have no choice but to use UFS.
I suspect swap volumes are not supported on a RAID-Z pool; at least I have never seen such a configuration. Same issue with swap files, which are likely still unsupported on ZFS.
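If you want to verify this on your own pool, you can try creating a ZFS volume and adding it as swap (the pool name tank and the 2 GB size are hypothetical); swap -a will simply report an error if the configuration is unsupported:

```
# zfs create -V 2g tank/swapvol
# swap -a /dev/zvol/dsk/tank/swapvol
```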
How do you figure that half of it is ZFS cache?
Is it the ZFS ARC cache that I kept reading about?
I also ran kstat and got this
Looks like it's using 6.5GB in ZFS cache....
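For reference, kstat reports the ARC size in bytes (the zfs:0:arcstats:size statistic); a one-liner to convert it to GB, using a made-up value of 6979321856 bytes:

```shell
# Convert an ARC size in bytes (made-up example value) to GB
echo 6979321856 | awk '{ printf "%.1f GB\n", $1 / 1024 / 1024 / 1024 }'
```

In practice you would feed it the real value, e.g. from kstat -p zfs:0:arcstats:size.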
Is this normal... or should I be using zfs_arc_max and zfs_arc_min to limit how much cache ZFS can use?
And given that Oracle 8i is running off the ZFS volume... which is better: more available memory
or more ZFS cache? (Sorry... I know this is a bit OT)
Quote:
Originally Posted by jlliagre
You have your 16 GB visible and around half of it is used a ZFS cache, which is fine. You still have RAM available so nothing to worry here.
Because a kernel doesn't use 9.5 GB by itself, the ZFS ARC is likely using a large part of it.
Current Solaris ::memstat separates the ZFS-related memory from the rest of the kernel usage.
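On such an update, the breakdown comes straight from the kernel debugger (no options needed):

```
# echo ::memstat | mdb -k
```

Recent updates print a separate "ZFS File Data" line in the page summary, so you no longer have to guess how much of the kernel's share is actually ARC.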
Quote:
Is it the ZFS ARC cache that I kept reading about?
Indeed.
Quote:
I also ran kstat and got this
Looks like it's using 6.5GB in ZFS cache....
Is this normal... or should I be using zfs_arc_max and zfs_arc_min to limit how much cache ZFS can use?
There is no point limiting the cache size (outside very specific cases).
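If you ever did need a cap (for instance, for a database that manages its own buffer cache), the usual knob is an /etc/system tunable followed by a reboot; a sketch with a made-up 4 GB limit:

```
* /etc/system: cap the ZFS ARC at 4 GB (value in bytes)
set zfs:zfs_arc_max = 0x100000000
```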
Quote:
And given that Oracle 8i is running off the ZFS volume... which is better: more available memory
or more ZFS cache? (Sorry... I know this is a bit OT)
Unused memory is wasted memory. The more memory is used as cache, the better the performance.
Well... the whole point of my investigation is to find out if I can coax more
performance out of my current setup. I'd like the Oracle zone (zone3)
to use more memory to improve performance....
Also, ZFS does tend to slow down dramatically after the system has been up
for a while... to the point that logging onto the shell takes some 10
seconds, and any access to the disk takes longer to respond, etc.