I wrote a script for importing a large number of volume groups on HP-UX. This relates to ServiceGuard: after a change on the primary node, the alternate nodes need the correct VG information. Everything works great except for one problem: the device paths do not match between the nodes, so c19t5d0 on the primary shows up as c36t5d0 on the alternate. I tried to solve this with the vgimport -s option, but that brought in all the PV links too. Because we use PowerPath we do not want the PV links.
This is the part I am stuck on. What I want is to build a disks file for use with vgimport -f, based on the lvmtab after doing the vgimport -s. I will then export and reimport again using the correct devices. The script I have is already big enough, so I'd like this part to be as efficient as possible.
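For reference, the export/reimport cycle could be sketched roughly as below. The VG name, file paths, and the group-file minor number are assumptions for illustration, so it is shown dry-run style: by default it only prints the commands it would execute.

```shell
# Sketch of the re-import cycle described above (HP-UX LVM commands).
# Run with DRY_RUN=1 (the default) to review the commands first.
VG=vgabc                       # assumed VG name
MAPFILE=/tmp/$VG.map           # vgexport map file
DISKFILE=/tmp/$VG.disks        # chosen device paths, one per line
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run vgchange -a n "/dev/$VG"               # deactivate before export
run vgexport -m "$MAPFILE" "/dev/$VG"      # export, saving the map file
run mkdir -p "/dev/$VG"
run mknod "/dev/$VG/group" c 64 0x010000   # minor number is an assumption
run vgimport -m "$MAPFILE" -f "$DISKFILE" "/dev/$VG"
run vgchange -a y "/dev/$VG"
```

The whole point is that the -f disks file contains only the PowerPath devices you chose, so no alternate links come back in.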
For example, I want to find all of the disks belonging to vgabc, but without the PV links.
Given the variable <vgabc>, the output should be only:
/dev/dsk/c8t9d1
/dev/dsk/c8t9d3
so that I can write this to the disks file. First I have to figure out which devices belong to <vgabc>. Then I have to figure out which of those devices are redundant paths and print each disk only once, without the alternates. Also, several VGs have more than one PV link, so I cannot assume I can just cut the number of devices in half.
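For the first step, one approach is to parse the output of `strings /etc/lvmtab`, which prints each VG name followed by its device entries. The exact lvmtab layout is an assumption worth verifying on your box, so the sketch below runs against captured sample output; on the real system you would pipe `strings /etc/lvmtab` into the same awk.

```shell
# List the devices recorded under one VG in `strings /etc/lvmtab` output.
VG=/dev/vgabc

lvmtab_strings() {       # stand-in for: strings /etc/lvmtab
cat <<'EOF'
/dev/vg00
/dev/dsk/c1t0d0
/dev/vgabc
/dev/dsk/c8t9d1
/dev/dsk/c9t9d1
/dev/dsk/c8t9d3
/dev/vgxyz
/dev/dsk/c10t2d0
EOF
}

vg_devices() {
    lvmtab_strings | awk -v vg="$1" '
        $0 !~ /^\/dev\/dsk\// { invg = ($0 == vg); next }  # VG header line
        invg                  { print }                    # device under our VG
    '
}

vg_devices "$VG"
# -> /dev/dsk/c8t9d1, /dev/dsk/c9t9d1, /dev/dsk/c8t9d3
```

Note this still includes the alternate paths (c9t9d1 here); weeding those out is the second step.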
I don't have access to an HP box anymore, but I think you just remove lvmtab and run vgscan. vgscan should see any attached disks, create the volume group device files, and rebuild lvmtab. It will find alternate links (I think), but if the alternate links are not connected, it can't find them.
Well, see, that's the problem. The alternate paths are connected, so when I do a vgscan I get all the devices. Because of EMC PowerPath we do not want the alternate paths in the lvmtab.
Thank you for the reply Joeyg but I think you might have missed something in my post.
In the lvmtab you will find a disk such as:
/dev/dsk/c8t13d0
If there is an alternate path to this same disk it may show as:
/dev/dsk/c9t13d0
This is the same disk but not the same device file. I am not looking for duplicate lines in this file; I need to figure out which device files refer to the same disk. Basically, every VG has a number of unique device files, but the device count is inflated because there are multiple paths to the same disk. I need to work one VG at a time, and lvmtab is not simply a text file I can edit. So I need to identify via a variable which VG I want to work on, figure out which devices belong to it based on this output, and then write only the unique disks, not the device files, to a file.
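Assuming you can map each device file to some unique per-disk identifier (for example the logical device ID that `powermt display dev=all` reports for each native path, or the PVID read from the disk header), collapsing alternate paths is then a one-pass awk: keep only the first device file seen for each identifier. The identifiers below are made-up sample data; how you obtain them is the system-specific part.

```shell
# Keep only the first device file seen for each physical disk.
# Input: "device-file disk-id" pairs, one per line.
first_per_disk() {
    awk '!seen[$2]++ { print $1 }'
}

first_per_disk <<'EOF'
/dev/dsk/c8t9d1 60000ABC0001
/dev/dsk/c9t9d1 60000ABC0001
/dev/dsk/c8t9d3 60000ABC0002
EOF
# -> /dev/dsk/c8t9d1
# -> /dev/dsk/c8t9d3
```

Because the dedupe keys on the disk identifier rather than on position, it does not matter that some VGs have two PV links and others have three; each disk comes out exactly once.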