We are using an HP-UX B.11.31 U ia64 server. Can you check the top command output below and point out any abnormality? I suspect something is wrong there.
Consider getting a copy of the HP Tuning and Performance book.
The snapshot from "top" taken out of context doesn't mean a lot.
The percentage shown in "top" for a single process such as "nfsd" (see "man nfsd" to read about the process) is the percentage of one CPU. You have many CPUs.
If you have HP "glance" this is a much better tool for snapshots.
There are many packages for recording historical performance information including unix "sar" and commercial packages from HP. An hour-by-hour view of server performance is more valuable than a snapshot.
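As a minimal sketch of that kind of recording with sar (the `/var/adm/sa` path and the 60-second/one-hour choice are common defaults, not something specific to your box):

```shell
# Record an hour of 60-second samples to a daily binary file in the
# background, then review it later. Paths and flags may differ on
# your platform; see sar(1M).
sar -o /var/adm/sa/sa$(date +%d) 60 60 > /dev/null &

# Hour-by-hour CPU utilisation from the recorded file:
sar -u -f /var/adm/sa/sa$(date +%d)
```

Run from cron, this gives you the hour-by-hour baseline that a single snapshot cannot.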
The "top" output posted implies a severe shortage of memory and CPU. This is only an implication and we would need better information before making a recommendation.
Whether an apparent shortage actually matters would need detailed analysis of swap statistics and CPU waits. You will need something better than "top".
One of the many advantages of unix over rival operating systems is that a correctly tuned server can perform well even when theoretically overloaded.
The "glance" program has more options which help find bottlenecks.
Press ? to get the menu, or start glance with a specific option (e.g. "glance -t").
In your case these should help:
w (swap activity)
t (system tables) Need to see all pages of this.
m (memory)
B (Global waits)
N (NFS)
First impressions are that you are short of memory, but the above glance options should give an overview of what is happening. If you have pseudo-swap configured, it can skew the memory figures.
BTW, the vmstat output would mean more if we knew the sampling interval (i.e. the command line you typed).
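For example, an explicit interval and count make the numbers interpretable:

```shell
# 12 samples at 5-second intervals: one minute of data.
# Note: the first output line is the average since boot and should
# be ignored when judging the current load.
vmstat 5 12
```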
I have added the glance outputs as you mentioned; if you happen to notice anything, kindly let me know.
Code:
glance -w
Glance C.04.70.001 10:57:14 S02CJ091 ia64 Current Avg High
------------------------------------------------------------------------------------------------------------------------------------------------------
CPU Util S SNNARU U | 61% 64% 100%
Disk Util F F | 23% 31% 97%
Mem Util S SU U | 92% 92% 93%
Swap Util U UR R | 83% 83% 87%
------------------------------------------------------------------------------------------------------------------------------------------------------
SWAP SPACE Users= 9
Swap Device Type Avail Used Priority
--------------------------------------------------------------------------------
/dev/vg00/lvol2 device 8.0gb 0mb 1
/dev/vg00/lvol13 device 3.9gb 0mb 2
pseudo-swap memory 38.0gb 29.5gb -1
Swap Available: 51104m Swap Used: 30258m Swap Util (%): 83 Reserved: 42450m
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
glance -t
Glance C.04.70.001 10:59:33 S02CJ091 ia64 Current Avg High
------------------------------------------------------------------------------------------------------------------------------------------------------
CPU Util S SNNARU U | 80% 80% 80%
Disk Util F F | 31% 31% 31%
Mem Util S SU U | 92% 92% 92%
Network Util U UR R | 84% 84% 84%
------------------------------------------------------------------------------------------------------------------------------------------------------
SYSTEM TABLES REPORT Users= 9
System Table Available Used Utilization High(%)
--------------------------------------------------------------------------------
Proc Table (nproc) 8212 685 8 8
File Table (nfile) 2147483647 9349 0 0
Shared Mem Table (shmmni) 1024 31 3 3
Message Table (msgmni) 2014 3 0 0
Semaphore Table (semmni) 10070 52 1 1
File Locks (nflocks) 10000 213 2 2
Pseudo Terminals (npty) 512 0 0 0
Buffer Headers (nbuf) na 9961 na na
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
glance -m
Glance C.04.70.001 11:01:33 S02CJ091 ia64 Current Avg High
------------------------------------------------------------------------------------------------------------------------------------------------------
Cpu Util S SN NRU U | 94% 97% 100%
Disk Util F FV | 37% 35% 37%
Mem Util S SU U | 93% 93% 93%
Network Util U UR R | 87% 87% 87%
------------------------------------------------------------------------------------------------------------------------------------------------------
MEMORY REPORT Users= 9
Event Current Cumulative Current Rate Cum Rate High Rate
--------------------------------------------------------------------------------
Page Faults 140583 548128 8841.6 9929.8 10982.2
Page In 135 254 8.4 4.6 2790.3
Page Out 0 4 0.0 0.0 0.0
KB Paged In 540kb 1016kb 33.9 18.4 135.6
KB Paged Out 0kb 16kb 0.0 0.2 0.7
Reactivations 0 0 0.0 0.0 0.0
Deactivations 0 0 0.0 0.0 0.0
KB Deactivated 0kb 0kb 0.0 0.0 0.0
VM Reads 140 273 8.8 4.9 8.8
VM Writes 0 0 0.0 0.0 0.0
Total VM : 31.7gb Sys Mem : 12.2gb User Mem: 19.4gb Phys Mem : 40.0gb
Active VM: 23.4gb Buf Cache: 1mb Free Mem: 2.7gb FileCache: 5.7gb
MemFS Blk Cnt: 0 MemFS Swp Cnt: 0
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
glance -B
Glance C.04.70.001 11:05:26 S02CJ091 ia64 Current Avg High
------------------------------------------------------------------------------------------------------------------------------------------------------
Cpu Util S SN NARU U | 90% 79% 91%
Disk Util F FV | 23% 24% 30%
Mem Util S SU U | 94% 93% 94%
Network Util U UR R | 88% 88% 88%
------------------------------------------------------------------------------------------------------------------------------------------------------
GLOBAL WAIT STATES Users= 9
Procs/ Procs/
Event % Time Threads Blocked On % Time Threads
--------------------------------------------------------------------------------
IPC 0.3 128.41 11.0 Cache 0.0 7.45 0.6
Job Control 0.0 0.37 0.0 CDROM IO 0.0 0.00 0.0
Message 0.0 0.00 0.0 Disk IO 0.0 0.00 0.0
Pipe 0.2 93.95 8.1 Graphics 0.0 0.00 0.0
RPC 0.0 0.00 0.0 Inode 0.0 0.98 0.1
Semaphore 0.0 0.00 0.0 IO 0.4 158.23 13.6
Sleep 27.7 12297.15 1054.6 LAN 0.0 0.00 0.0
Socket 6.4 2850.47 244.5 NFS 0.0 0.00 0.0
Stream 0.0 16.68 1.4 Priority 0.2 70.05 6.0
Terminal 0.0 0.00 0.0 System 1.2 527.80 45.3
Other 63.5 28217.66 2420.0 Virtual Mem 0.0 2.21 0.2
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
glance -N
Glance C.04.70.001 11:07:23 S02CJ091 ia64 Current Avg High
------------------------------------------------------------------------------------------------------------------------------------------------------
CPU Util S SN NARU U | 84% 84% 87%
Disk Util F F | 39% 42% 76%
Mem Util S SU U | 93% 93% 93%
Network Util U UR R | 86% 86% 87%
------------------------------------------------------------------------------------------------------------------------------------------------------
NFS GLOBAL ACTIVITY Users= 9
Server (inbound) Client (outbound)
Current Cum Current Cum
--------------------------------------------------------------------------------
Read Rate 635.2 684.4 0.0 0.0
Write Rate 110.2 126.1 0.0 0.0
Read Byte Rate 1244.1 1198.9 0.0 0.0
Write Byte Rate 650.7 830.8 0.0 0.0
NFS Call Count 9898 60879 0 0
Bad Call Count 0 0 0 0
Service Time 4.87 21.67 0.00 0.00
Network Time na na 0.00 0.00
Read/Write Qlen na na 0 0
Idle biods na na na na
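A couple of quick sanity checks on the figures pasted above (a sketch only: the numbers are copied from the "glance -w" and "glance -m" screens, and the formulas are assumptions about how glance derives its percentages):

```shell
# Swap: the 83% Util appears to track *reserved* space, not space
# actually written out -- consistent with pseudo-swap skewing the view.
avail=51104; used=30258; reserved=42450          # MB, from "glance -w"
echo "swap used%:     $(( used * 100 / avail ))"       # prints 59
echo "swap reserved%: $(( reserved * 100 / avail ))"   # prints 83

# Memory: the 93% Util matches (Phys - Free) / Phys from "glance -m".
awk 'BEGIN { phys=40.0; free=2.7
             printf "mem util%%: %.0f\n", (phys - free) / phys * 100 }'   # prints 93
```

So nothing has been pushed out to the device swap volumes yet; the pressure shown is reservation against pseudo-swap, which is why the raw percentages look worse than the paging activity (Page Out 0, Deactivations 0) suggests.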