How to extend a disk in Veritas Volume Manager in a Veritas cluster?
Hi Experts,
I want to extend a Veritas file system that is running on a Veritas cluster and mounted on node2.
Node2:-
Node1:-
On node1, zone1 is in the configured state and /prod is not mounted.
New LUNs have now been added to the servers. If I add a disk and grow the volume on node2, will the file system expand?
Do I need to reboot the zone for the new size of the /prod file system to take effect?
Could someone please suggest the commands?
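A minimal sketch of the usual flow, assuming the disk group is named proddg, the volume prodvol, and the new LUN shows up as emc0_0123 (all hypothetical names); run it on the node that currently has the disk group imported (node2):

```shell
# Rescan so VxVM sees the new LUN:
vxdctl enable
vxdisk list                                # confirm the new device appears

# Initialize the LUN and add it to the disk group:
/etc/vx/bin/vxdisksetup -i emc0_0123
vxdg -g proddg adddisk proddisk02=emc0_0123

# vxresize grows the volume AND the mounted VxFS file system online,
# so no zone reboot should be needed. +10g is 20971520 512-byte sectors:
echo $(( 10 * 1024 * 1024 * 2 )) sectors
/etc/vx/bin/vxresize -g proddg prodvol +10g
```

Because the resize happens on the underlying volume and VxFS grows online, the zone that mounts /prod should see the new size immediately.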
Hi all,
I have a problem with a VxVM volume that is mirrored across two disks. When I try to grow the file system, it throws: ERROR: cannot allocate 5083938 blocks; ERROR: cannot run vxassist on this volume.
Please suggest a suitable solution.
Thanks and Regards
B. Nageswar... (0 Replies)
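For what it's worth, this error usually means the allocator cannot find the requested space on both sides of the mirror. A hedged sketch, assuming a disk group named datadg and a volume named datavol (names not given in the post):

```shell
# 5083938 blocks of 512 bytes is roughly 2482 MB:
echo $(( 5083938 / 2048 )) MB

# Check how much the volume can actually grow. A mirrored volume needs
# the requested space on BOTH plexes, not just somewhere in the group:
vxdg -g datadg free
vxassist -g datadg maxgrow datavol

# If maxgrow is too small, add a disk per mirror side, then retry:
vxassist -g datadg growby datavol 5083938
```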
Hi guys,
I am new to this forum. I have installed SF 5.0 and wanted to encapsulate the root disk, but when I get to the option to enter the private region length I get this error:
Enter desired private region length
(default: 65536) 512
VxVM ERROR V-5-2-338
The encapsulation operation failed with the... (2 Replies)
Hi all,
Does anybody know of URLs covering Veritas Volume Manager disk problems, volume problems, root disk problems, etc.?
Please share the URLs; I'd really appreciate your cooperation.
regards
krishna (4 Replies)
I am trying to build a Veritas volume similar to an existing volume on another server. The output on the source server is:
usbtor12# vxprint -hrtg appdg
v anvil_sqlVOL - ENABLED ACTIVE 629145600 SELECT - fsgen
pl anvil_sqlVOL-01 anvil_sqlVOL ENABLED ACTIVE 629145600... (3 Replies)
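One way to replicate it, assuming the appdg disk group already exists on the target server (the layout below is inferred from the truncated vxprint output, so verify against the full listing):

```shell
# 629145600 sectors at 512 bytes/sector is exactly 300 GB:
echo $(( 629145600 / 2097152 )) GB

# Dump the source volume's full configuration in vxmake-readable form,
# to use as a template on the target:
vxprint -g appdg -m anvil_sqlVOL

# Or simply create a volume of the same size and usage type (fsgen):
vxassist -g appdg make anvil_sqlVOL 629145600
```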
Hi,
Can anyone suggest best practices for resizing a Veritas volume with a VxFS file system?
I tried this:
vxassist -g stg shrinkto stgvol 209715200
VxVM vxassist ERROR V-5-1-7236 Shrinking a FSGEN or RAID5 usage type volume can result in loss of data. It is recommended... (1 Reply)
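The safer route for VxFS is vxresize, which shrinks the file system before shrinking the volume; the vxassist warning exists because vxassist alone does not know where the file system's data blocks live. A sketch using the names from the post:

```shell
# 209715200 sectors at 512 bytes is exactly 100 GB:
echo $(( 209715200 / 2097152 )) GB

# Shrink the file system and the volume together, online:
/etc/vx/bin/vxresize -g stg stgvol 209715200
```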
Hello there,
I'm going to describe a situation I've got here... feel free to ask questions and I'll provide whatever helps get this answered!
When I do a vxdisk list, I see a disk that VxVM calls "disk4" that is listed as "failed was: c1t9d0s2". When I do a format, I can go... (3 Replies)
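A hedged recovery sketch (the disk group name datadg is a placeholder; the post doesn't name it). Note that the "s2" suffix on c1t9d0s2 is the Solaris whole-disk slice, which VxVM device commands drop:

```shell
failed_dev=c1t9d0s2
dev=${failed_dev%s2}            # strip the slice suffix -> c1t9d0
echo "$dev"

# If the device simply went away and came back (path/cable issue),
# try reattaching it to its old disk media record:
/etc/vx/bin/vxreattach -c "$dev"

# If the disk is truly dead, initialize a replacement and attach it
# under the old media name so the volumes can resynchronize:
/etc/vx/bin/vxdisksetup -i "$dev"
vxdg -g datadg -k adddisk disk4="$dev"
vxrecover -g datadg -s
```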
I have a machine (5.10 Generic_142900-03 sun4u sparc SUNW,Sun-Fire-V210) whose storage we are upgrading, and my task is to mirror what is already on the machine to the new disk. I have the disk, and it is labeled and ready, but I am not sure of the next steps to mirror the existing disk group and... (1 Reply)
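In outline (hedged: the disk group name appdg, the device c2t0d0, and the volume/media names are placeholders; vxdiskadm's mirror option is an interactive alternative):

```shell
# Initialize the new disk and add it to the existing disk group:
/etc/vx/bin/vxdisksetup -i c2t0d0
vxdg -g appdg adddisk appdisk02=c2t0d0

# Mirror each volume in the group onto the new disk, e.g.:
vxassist -g appdg mirror appvol01 appdisk02

# Watch the resynchronization progress:
vxtask list
```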
Discussion started by: rookieuxixsa (1 Reply)
LEARN ABOUT DEBIAN
orte-clean
orte-clean(1)                          Open MPI                          orte-clean(1)
NAME
orte-clean - Cleans up any stale processes and files leftover from Open MPI jobs.
SYNOPSIS
orte-clean [--verbose]
mpirun --pernode [--host | --hostfile file] orte-clean [--verbose]
OPTIONS
[-v | --verbose] This argument will run the command in verbose mode and print out the universes that are getting cleaned up as well as the processes that are being killed.
DESCRIPTION
orte-clean attempts to clean up any processes and files left over from Open MPI jobs that were run in the past as well as any currently
running jobs. This includes OMPI infrastructure and helper commands, any processes that were spawned as part of the job, and any temporary
files. orte-clean will only act upon processes and files that belong to the user running the orte-clean command. If run as root, it will
kill off processes belonging to any users.
When run from the command line, orte-clean will attempt to clean up the local node it is run from. When launched via mpirun, it will clean
up the nodes selected by mpirun.
EXAMPLES
Example 1: Clean up local node only.
example% orte-clean
Example 2: To clean up on a specific set of nodes specified on command line, it is recommended to use the pernode option. This will run
one orte-clean for each node.
example% mpirun --pernode --host node1,node2,node3 orte-clean
To clean up on a specific set of nodes from a file.
example% mpirun --pernode --hostfile nodes_file orte-clean
Example 3: Within a resource managed environment like N1GE, SLURM, or Torque. The following example is from N1GE.
First, we see that we have two nodes with two CPUs each.
example% qsh -pe orte 4
example% mpirun -np 4 hostname
node1
node1
node2
node2
Clean up all the nodes in the cluster.
example% mpirun --pernode orte-clean
Clean up a subset of the nodes in the cluster.
example% mpirun --pernode --host node1 orte-clean
SEE ALSO
orterun(1), orte-ps(1)
1.4.5                                Feb 10, 2012                        orte-clean(1)