Full Discussion: How to Cleanup Multipathing
Operating Systems > Linux > Red Hat
Post 302532905 by Tirmazi, Wednesday 22 June 2011, 10:48 AM
If I do the following, will it clean up the stale LUNs?

#echo 1 > /sys/block/sdc/device/delete
through
#echo 1 > /sys/block/sdr/device/delete
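A minimal sketch of the same idea as a loop, assuming the stale paths really are sdc through sdr and that nothing is still using them. It is usually safer to flush any multipath map built on those paths first; the map name below is a placeholder, not something from this thread:

# Flush the stale multipath map before removing its path devices.
# 'mpath_stale' is a hypothetical map name; substitute the name shown
# by 'multipath -ll' for the dead LUNs.
multipath -f mpath_stale

# Delete each stale SCSI device node (bash brace expansion: sdc..sdr).
for dev in sd{c..r}; do
    echo 1 > /sys/block/$dev/device/delete
done

Note that this only removes the devices from the host's view; if the LUNs are still mapped on the array, a rescan will bring them back.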
 

10 More Discussions You Might Find Interesting

1. Solaris

Solaris multipathing

I have Solaris 10 SPARC and installed a Qlogic HBA card. The card is connected to a Brocade switch, and the Brocade is connected to two different controllers on a Hitachi disk array. I formatted two LUNs, but on my Solaris system I see four disks. How do I configure Solaris 10 to fix the dual disk view? ... (4 Replies)
Discussion started by: simquest
4 Replies
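For reference, the usual way to collapse duplicate disk views on Solaris 10 is to enable MPxIO. A sketch, assuming an fp-attached fibre channel HBA; stmsboot prompts for the reboot it needs:

# Enable MPxIO for all supported fibre channel controllers.
stmsboot -e

# After the reboot, each LUN should appear once; verify with:
mpathadm list lu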

2. Solaris

Solaris IP Multipathing

Hi, I saw your post on the forums about how to set up IP multipathing and wanted your help with the situation below. I have two servers, A and B, which should be connected to two network switches, S1 and S2. Is it possible to have IP multipathing on each of the servers as follows? ... (0 Replies)
Discussion started by: maadhuu
0 Replies

3. IP Networking

Assigning a virtual IP and doing IP multipathing...

Hi everyone, I am working on a Sun server running Solaris 10. 1) I want to assign a virtual IP to an interface (ce1). 2) Using that virtual IP and one more interface (ce2), I want to set up IP multipathing. Kindly help me achieve the above tasks by describing... (2 Replies)
Discussion started by: prashantshukla
2 Replies
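A sketch of what link-based IPMP over ce1/ce2 could look like on Solaris 10; the address and group name below are assumptions for illustration, not values from the thread:

# /etc/hostname.ce1 -- data (virtual) address, placed in IPMP group ipmp0
192.168.1.10 netmask + broadcast + group ipmp0 up

# /etc/hostname.ce2 -- second interface in the same group, no data address
group ipmp0 up

The same configuration can be applied at runtime with ifconfig (for example, ifconfig ce2 group ipmp0); if ce1 fails, the data address moves to ce2.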

4. Solaris

Veritas Multipathing problem.

Hi, the original configuration on my Solaris 9 server was two LUNs, configured as Veritas file systems, that were connected to a NetApp filer (filer1). These two LUNs are still configured on the server but are not being used; they are there as a backup just in case the new... (0 Replies)
Discussion started by: sparcman
0 Replies

5. Solaris

Solaris multipathing

Hi, we are using EMC storage connected to an M5000 through a SAN switch. We assigned 13 LUNs, but the server shows 22 LUNs. I enabled Solaris multipathing (MPxIO); #more /kernel/drv/fp.conf shows mpxio-disable=no in that file, and #mpathadm list lu shows ... (2 Replies)
Discussion started by: joshmani
2 Replies
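One way to check whether MPxIO has actually consolidated the paths, assuming the standard Solaris 10 mpathadm output format (one /dev/rdsk line per logical unit, followed by its path counts):

# Each consolidated LUN prints one /dev/rdsk line plus path counts.
mpathadm list lu

# Quick count: should print 13 (LUNs), not 22 (paths), once MPxIO is on.
mpathadm list lu | grep -c /dev/rdsk

If some devices still show up twice in format, the relevant HBA ports may still have mpxio-disable=yes entries in /kernel/drv/fp.conf.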

6. Solaris

Multipathing - problem

Hello, I turned on multipathing on the server: # uname -a SunOS caiman 5.10 Generic_141444-09 sun4v sparc SUNW,T5140 stmsboot -D fp -e. But after rebooting the server, multipathing is not enabled: # stmsboot -L stmsboot: MPxIO is not enabled stmsboot: MPxIO disabled # ls /dev/dsk... (4 Replies)
Discussion started by: bieszczaders
4 Replies

7. AIX

Multipathing in AIX

Hi, I know the concept of multipathing, but I would like to know how to configure multipathing in AIX. Or is the software/driver present in AIX by default? How do I find out whether multipathing is configured in AIX? Regards, Manu (4 Replies)
Discussion started by: manoj.solaris
4 Replies
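For what it's worth, MPIO is built into AIX 5.2 and later, so no extra driver is needed for MPIO-capable storage (vendor path-control modules such as SDDPCM are a separate install). A quick check, with hdisk2 as an example disk name:

# List the disks, then the paths known for one of them.
lsdev -Cc disk
lspath -l hdisk2

# Path-selection algorithm for an MPIO disk (round_robin, fail_over, ...).
lsattr -El hdisk2 -a algorithm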

8. Linux

Linux multipathing issue...

Hi folks. When issuing multipath -ll on my server, I see something that is bugging me... # multipath -ll 2000b0803ce002582 dm-10 Pillar,Axiom 600 size=129G features='0' hwhandler='0' wp=rw |-+- policy='round-robin 0' prio=50 status=active | |- 0:0:0:4 sdd 8:48 active ready running... (0 Replies)
Discussion started by: Stephan
0 Replies

9. AIX

Problems setting up multipathing

What is the following output telling me? fget_config -Av ---dar0--- User array name = 'BSNorth-DS4300' dac3 ACTIVE dacNONE ACTIVE Disk DAC LUN Logical Drive hdisk4 dac3 0 TestDiskForAll ---dar1--- User array name = 'BSNorth-DS4300' dac2 ACTIVE dacNONE ACTIVE Disk DAC ... (0 Replies)
Discussion started by: petervg
0 Replies

10. Red Hat

Verify multipathing

I have a couple of questions regarding multipath. If I do vgdisplay vg01, I see it is using one PV: /dev/dm-13. If I type multipath -ll, I see dm-9, dm-10, dm-11, and dm-12, but I do not see dm-13. Is my vg01 multipathed? How can I actually know for sure? Secondly, let's say this time vg01 says... (1 Reply)
Discussion started by: keelba
1 Replies
NFSUSERD(8)						    BSD System Manager's Manual 					       NFSUSERD(8)

NAME
     nfsuserd -- load user and group information into the kernel for NFSv4 services

SYNOPSIS
     nfsuserd [-domain domain_name] [-usertimeout minutes] [-usermax max_cache_size] [-verbose] [-force] [num_servers]

DESCRIPTION
     nfsuserd loads user and group information into the kernel for NFSv4. It must be running for NFSv4 to function correctly, for either client or
     server. Upon startup, it loads the machine's DNS domain name, plus the timeout and cache size limit, into the kernel. It then preloads the
     cache with group and user information, up to the cache size limit, and forks off N children (default 4) that service requests from the kernel
     for cache misses. The master server exists for the sole purpose of killing off the slaves. To stop nfsuserd, send a SIGUSR1 to the master
     server.

     The following options are available:

     -domain domain_name
             Override the default DNS domain name, which is acquired by taking either the suffix of the machine's hostname or, if that name is not
             a fully qualified host name, the canonical name as reported by getaddrinfo(3).

     -usertimeout minutes
             Override the default timeout for cache entries, in minutes. If the timeout is specified as 0, cache entries never time out. The longer
             the timeout, the better the performance, but the longer it takes for replaced entries to be seen. If your user/group database
             management system almost never re-uses the same names or id numbers, a large timeout is recommended. The default is 1 minute.

     -usermax max_cache_size
             Override the default upper bound on the cache size. The larger the cache, the more kernel memory is used, but the better the
             performance. If your system can afford the memory use, make this the sum of the number of entries in your group and password
             databases. The default is 200 entries.

     -verbose
             When set, the server logs additional information to syslog.

     -force
             This flag must be set to restart the daemon after it has exited abnormally and refuses to start because it thinks nfsuserd is already
             running.

     num_servers
             Specifies how many servers to create (max 20). The default of 4 may be sufficient. Run enough servers that ps(1) shows almost no
             running time for one or two of the slaves after the system has been up for a long period. Running too few has a major performance
             impact, whereas running too many only ties up some resources, such as a process table entry and swap space.
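A hypothetical invocation, for illustration only (the domain name and values are made up, not defaults): override the DNS domain, raise the cache bound, and fork 8 slave servers:

     nfsuserd -domain example.com -usermax 5000 8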
SEE ALSO
     getgrent(3), getpwent(3), nfsv4(4), group(5), passwd(5), nfsd(8)

HISTORY
     The nfsuserd utility was introduced with the NFSv4 experimental subsystem in 2009.

BUGS
     The nfsuserd utility uses the getgrent(3) and getpwent(3) library calls to resolve requests and will hang if the servers handling those
     requests fail and the library functions don't return. See group(5) and passwd(5) for more information on how the databases are accessed.

BSD                                                          April 25, 2009                                                          BSD