06-22-2011
If I do the following, will it clean up the stale LUNs?
#echo 1 > /sys/block/sdc/device/delete
through
#echo 1 > /sys/block/sdr/device/delete
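A loop like the following does the same thing in one pass. This is only a sketch, and it assumes sdc through sdr really are the stale paths; confirm that with lsscsi or multipath -ll before deleting anything. The host rescan afterwards picks up the current LUNs:

```shell
#!/bin/bash
# Sketch: remove stale SCSI disks sdc..sdr, then rescan.
# ASSUMPTION: sdc-sdr are the stale paths -- verify with lsscsi first.
for dev in sd{c..r}; do
    if [ -e "/sys/block/$dev/device/delete" ]; then
        # Tell the kernel to drop this device
        echo 1 > "/sys/block/$dev/device/delete"
    fi
done

# Rescan every SCSI host so the current LUNs reappear
for scan in /sys/class/scsi_host/host*/scan; do
    [ -e "$scan" ] && echo "- - -" > "$scan"
done
```

Running this needs root, and deleting a device that is still in use (mounted, or part of an active multipath map) will cause I/O errors, so flush or remove any multipath maps over those paths first.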
10 More Discussions You Might Find Interesting
1. Solaris
I have Solaris 10 SPARC. I installed a QLogic HBA card.
This card is connected to a Brocade switch, and the Brocade is connected to 2 different controllers on a Hitachi disk bay.
I formatted 2 LUNs. On my Solaris system, I have 4 disks.
How do I configure Solaris 10 to fix the dual disk view?
... (4 Replies)
Discussion started by: simquest
2. Solaris
Hi,
I saw your post on the forums about how to set up IP multipathing. I wanted your help with the situation below.
I have 2 servers, A and B. They should be connected to 2 network switches, S1 and S2.
Is it possible to have IP multipathing on each of the servers as follows?
... (0 Replies)
Discussion started by: maadhuu
3. IP Networking
Hi everyone,
I am working on a Sun server installed with Solaris 10.
1) I want to assign a virtual IP to an interface (ce1).
2) With that virtual IP and one more interface (ce2), I want to do IP
multipathing.
So, kindly help me achieve the above tasks by describing... (2 Replies)
Discussion started by: prashantshukla
4. Solaris
Hi,
Basically the original configuration on my Solaris 9 server was two LUNs configured as Veritas file systems that were connected to a NetApp filer (filer1). These two LUNs are still configured on the server - but are not being used. They are there as a backup just in case the new... (0 Replies)
Discussion started by: sparcman
5. Solaris
Hi,
We are using EMC storage which is connected to an M5000 through a SAN switch.
We assigned 13 LUNs, but the server is showing 22 LUNs.
I enabled Solaris multipathing (MPxIO):
# more /kernel/drv/fp.conf
mpxio-disable=no
# mpathadm list lu
It shows ... (2 Replies)
Discussion started by: joshmani
6. Solaris
Hello,
I turned on multipathing on the server:
# uname -a
SunOS caiman 5.10 Generic_141444-09 sun4v sparc SUNW,T5140
# stmsboot -D fp -e
And after rebooting the server, multipathing is still not enabled:
# stmsboot -L
stmsboot: MPxIO is not enabled
stmsboot: MPxIO disabled
# ls /dev/dsk... (4 Replies)
Discussion started by: bieszczaders
7. AIX
Hi,
I know the concept of multipathing, but would like to know how to configure multipathing in AIX.
Or is the software/driver present in AIX by default?
How do I find out whether multipathing is configured in AIX?
Regards,
Manu (4 Replies)
Discussion started by: manoj.solaris
8. Linux
Hi folks.
When issuing a multipath -ll on my server, I see something that is bugging me...
# multipath -ll
2000b0803ce002582 dm-10 Pillar,Axiom 600
size=129G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 0:0:0:4 sdd 8:48 active ready running... (0 Replies)
Discussion started by: Stephan
9. AIX
What is the following output telling me?
fget_config -Av
---dar0---
User array name = 'BSNorth-DS4300'
dac3 ACTIVE dacNONE ACTIVE
Disk DAC LUN Logical Drive
hdisk4 dac3 0 TestDiskForAll
---dar1---
User array name = 'BSNorth-DS4300'
dac2 ACTIVE dacNONE ACTIVE
Disk DAC ... (0 Replies)
Discussion started by: petervg
10. Red Hat
I have a couple of questions regarding multipath.
If I do vgdisplay vg01, I see it is using 1 PV: /dev/dm-13
If I type multipath -ll I see dm-9, dm-10, dm-11, dm-12, but do not see dm-13. Is my vg01 multipathed? How can I actually know for sure?
Secondly, let's say this time vg01 says... (1 Reply)
Discussion started by: keelba
LEARN ABOUT DEBIAN
globus-job-clean
GLOBUS-JOB-CLEAN(1) GRAM5 Commands GLOBUS-JOB-CLEAN(1)
NAME
globus-job-clean - Cancel and clean up a GRAM batch job
SYNOPSIS
globus-job-clean [-r RESOURCE | -resource RESOURCE]
[-f | -force] [-q | -quiet] JOBID
globus-job-clean [-help] [-usage] [-version] [-versions]
DESCRIPTION
The globus-job-clean program cancels the job named by JOBID if it is still running, and then removes any cached files on the GRAM service
node related to that job. In order to do the file clean up, it submits a job which removes the cache files. By default this cleanup job is
submitted to the default GRAM resource running on the same host as the job. This behavior can be controlled by specifying a resource
manager contact string as the parameter to the -r or -resource option.
By default, globus-job-clean prompts the user prior to canceling the job. This behavior can be overridden by specifying the -f or -force
command-line options.
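For example, a quiet, non-interactive clean-up might look like the following. The job contact string shown is hypothetical; substitute the contact that was printed when the job was submitted:

```shell
# Hypothetical job contact -- use the one returned at submission time.
globus-job-clean -q "https://gram.example.org:2119/16584/1268916791/"
```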
OPTIONS
The full set of options to globus-job-clean are:
-help, -usage
Display a help message to standard error and exit.
-version
Display the software version of the globus-job-clean program to standard output.
           -versions
               Display the software version of the globus-job-clean program, including DiRT information, to standard output.
-resource RESOURCE, -r RESOURCE
Submit the clean-up job to the resource named by RESOURCE instead of the default GRAM service on the same host as the job contact.
-force, -f
Do not prompt to confirm job cancel and clean-up.
-quiet, -q
               Do not print diagnostics for successful clean-up. Implies -f.
ENVIRONMENT
       The following variables affect the execution of globus-job-clean:
X509_USER_PROXY
Path to proxy credential.
X509_CERT_DIR
Path to trusted certificate directory.
University of Chicago 03/18/2010 GLOBUS-JOB-CLEAN(1)