Full Discussion: Re-cluster 2 HACMP 5.2 nodes
Operating Systems > AIX. Post 302829891 by bakunin on Saturday, 6 July 2013, 09:04:47 PM
Quote:
Originally Posted by elcounto
The problem is that there are now 3 VGs that have been set up outside of the cluster.
First off, I suggest updating the OS to 5.3 if at all possible. 5.2 is not only outdated and no longer supported, it is also a nightmare of bugs, most of which are corrected in 5.3.

Regarding your question: you will need downtime for this. Change the VGs to "enhanced concurrent", then vary them off on their original system and do a "learning import" on the other system. (Corollary: do yourself a favour and make sure you use the same major number for the VGs across all your systems; this makes administration easier.) A sketch of the commands follows below.
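
A minimal sketch of that sequence, assuming a VG named datavg on hdisk2 and a free major number of 60 (all of these names and numbers are hypothetical; check your disks with lspv and the free major numbers with lvlstmajor):

    # on the node that currently owns the VG
    chvg -C datavg                    # make the VG enhanced concurrent capable
    varyoffvg datavg                  # release it so the other node can import

    # on the other node: first-time import with a fixed major number
    lvlstmajor                        # list major numbers still free on this node
    importvg -V 60 -y datavg hdisk2   # import the VG under major number 60

    # if the other node already knows the VG, refresh its definition instead
    importvg -L datavg hdisk2         # "learning import" picks up the LVM changes

Note that enhanced concurrent mode needs the bos.clvm.enh fileset installed on both nodes, and that hdisk numbering may differ between nodes, so verify the disk with lspv before importing.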

Then define your new resource groups based on these VGs and run an "extended verification and synchronisation" via C-SPOC, as sketched below.
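
If you work through smit, the usual entry points are roughly these (menu wording differs slightly between HACMP releases, so take the paths as approximate):

    smitty hacmp      # Extended Configuration
                      #   -> Extended Resource Configuration  (define the resource groups)
                      #   -> Extended Verification and Synchronization
    smitty cl_admin   # C-SPOC menu for cluster-wide LVM and user administration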

I hope this helps.

bakunin

More Discussions You Might Find Interesting

1. Emergency UNIX and Linux Support

Rebooting 3 to 1 Cluster nodes.

Hello gurus, my current setup is a 3-to-1 cluster (Sun Cluster 3.2) running an Oracle database. The task is to reboot the servers, and my query is about the procedure for doing so. My understanding is: suspend the databases to avoid a switchover, then execute the scshutdown command to bring down the cluster... (4 Replies)
Discussion started by: EmbedUX

2. Shell Programming and Scripting

Run script on HACMP nodes?

Hello all, does anybody know how I can run a script on the offline node of an AIX HACMP cluster without logging on to that node? I would like to run a script on the online node and, at the same time or afterwards, on the offline node. Any ideas? (3 Replies)
Discussion started by: kalaso

3. AIX

Make system backup for 2 nodes HACMP cluster

Hi all, I was wondering if someone could direct me on how to make a system backup (system image) for a 2-node HACMP cluster, and what the considerations for this task are. (3 Replies)
Discussion started by: h@foorsa.biz

4. Solaris

What is the procedure to reboot cluster nodes

Hi, we have 2 Solaris 10 servers in a Veritas cluster, and we also have an Oracle cluster on the database end. We now have a requirement to reboot both servers, as they have been running for more than a year. Can anyone tell me the procedure to bring down the cluster services on both nodes... (7 Replies)
Discussion started by: newtoaixos

5. Red Hat

How to troubleshoot a 1000 nodes Apache cluster?

Hi all. May I get some expert advice on troubleshooting performance issues on a 1000-node Apache LB cluster? Users report slow loading/response of webpages. Different websites are hosted on this cluster for different clients, but all are reporting the same issue. Could you please let me know... (1 Reply)
Discussion started by: admin_xor

6. AIX

[Howto] Update AIX in HACMP cluster-nodes

As I have updated a lot of HACMP nodes lately, the question arises how to do it with minimal downtime. Of course it is easily possible to take a downtime and do the version update during it. In the best of worlds you always get the downtime you need - unfortunately we have yet to find this best of... (4 Replies)
Discussion started by: bakunin

7. UNIX for Advanced & Expert Users

Arbitrator for 2 nodes ocfs cluster

Is there any way to create an arbitrator node for ocfs2 on a virtual machine (the others are physical servers) so the cluster won't panic when one of the physical servers goes down? This is for load-balanced application servers. Any setting examples or tips? Thanks. (0 Replies)
Discussion started by: malayo


8. Red Hat

RedHat Cluster: Nodes won't see each other

Hi all; I am trying to build a Red Hat cluster (CentOS 6) on VMware, but each node sees the other as down:
# clustat
Cluster Status for mycluster @ Wed Apr 8 11:01:38 2015
Member Status: Quorate
 Member Name       ID    Status
 ------ ----       ...   (1 Reply)
Discussion started by: Meacham12

9. AIX

HACMP - two nodes - take too long to sync

Hi admin, I have a running 2-node HACMP cluster on AIX 6.1 that I just set up. It syncs completely without any errors, but the sync takes too long - more than 30 minutes. Any reasons? Where can I start looking? Same network, same subnet. (1 Reply)
Discussion started by: snchaudhari2
LVM(8)							      System Manager's Manual							    LVM(8)

NAME
lvm - Linux Logical Volume Manager

DESCRIPTION
lvm is a logical volume manager for Linux. It enables you to concatenate several physical volumes (hard disks etc.) into a so-called volume group (VG, see pvcreate(8) and vgcreate(8)), forming a storage pool, like a virtual disk. IDE and SCSI disks as well as multiple devices (MD) are supported.

The storage capacity of a volume group can be divided into logical volumes (LVs), like virtual disk partitions. The size of a logical volume is a multiple of physical extents (PEs, see lvcreate(8)). The size of the physical extents can be configured at volume group creation time. If a logical volume is too small or too large you can change its size at runtime (see lvextend(8) and lvreduce(8)).

lvcreate(8) can be used to create snapshots of existing logical volumes (so-called original logical volumes in this context) as well. Creating a snapshot logical volume grants access to the contents of the original logical volume it is associated with, exposing the read-only contents as of the creation time of the snapshot. This is useful for backups or for keeping several versions of filesystems online.

If you run out of space in a volume group it is possible to add one or more pvcreate'd disks to the system and put them into an existing volume group (see vgextend(8)). The space on these new physical volumes can be dynamically added to logical volumes in that volume group (see lvextend(8)). To remove a physical volume from the system you can move its allocated logical extents to different physical volumes (see pvmove(8)). After the pvmove the volume group can be reduced with the vgreduce(8) command. (A short illustration of this follows at the end of this section.)

Inactive volume groups must be activated with vgchange(8) before use. vgcreate(8) automatically activates a newly created volume group.

Abbreviations
PV for physical volume, PE for physical extent, VG for volume group, LV for logical volume, and LE for logical extent.

Command naming convention
All command names corresponding to physical volumes start with pv, all the ones concerned with volume groups start with vg, and all for logical volumes with lv. General purpose commands for the LVM as a whole start with lvm.
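As a quick illustration of the grow-and-shrink workflow described above (the device, VG and LV names /dev/sdc1, /dev/sdb1, vg00 and lvol1 are made-up examples):

    # grow: initialize a new partition as a PV, add it to the VG,
    # then give the new space to a logical volume
    pvcreate /dev/sdc1
    vgextend vg00 /dev/sdc1
    lvextend -L +500M /dev/vg00/lvol1   # the filesystem on it must be resized
                                        # separately, e.g. with e2fsadm(8) for ext2

    # shrink: migrate all extents off a disk, then drop it from the VG
    pvmove /dev/sdb1
    vgreduce vg00 /dev/sdb1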
VGDA
The volume group descriptor area (or VGDA for short) holds the metadata necessary for the LVM functionality. It is stored at the beginning of each pvcreate'd disk and contains four parts: one PV descriptor, one VG descriptor, the LV descriptors and several PE descriptors. LE descriptors are derived from the PE ones at vgchange(8) time. Automatic backups of the VGDA are stored in files in /etc/lvmconf/ (please see vgcfgbackup(8) and vgcfgrestore(8) too). Take care to include these files in your regular (tape) backups as well.

Limits
Currently up to 99 volume groups with a grand total of 256 logical volumes can be created. The limit on logical volumes is not caused by the LVM but by the Linux 8-bit device minor numbers. This means that you can have 99 volume groups with 1-3 logical volumes each, or 1 volume group with up to 256 logical volumes, or anything in between these extreme examples. Depending on the physical extent size specified at volume group creation time (see vgcreate(8)), logical volumes between a maximum of 512 Megabytes and 1 Petabyte can be created. Current Linux kernels on IA32 limit these possibilities to a maximum of 2 Terabytes per logical and per physical volume; this enables you to have as much as 256 Terabytes under LVM control with all possible 128 SCSI disk subsystems. You can have up to 65534 logical extents (on IA32) in a logical volume at the cost of 1 Megabyte of kernel memory. Physical volumes can have up to 65534 physical extents.

/proc filesystem support
The operational state of active volume groups with their physical and logical volumes can be found in the /proc/lvm/ directory. /proc/lvm/global contains a summary of all available information regarding all VGs, LVs and PVs. The two flags for PV status in brackets mean A/I for active/inactive and A/N for allocatable/non-allocatable. The four flags for LV status in brackets mean A/I for active/inactive, R/W for read-only or read/write, D/C for discontiguous or contiguous, and L/S for linear or striped; S can optionally be followed by the number of stripes in the set.

A subdirectory hierarchy starting at /proc/lvm/VGs/ contains information about every VG in a separate subdirectory named /proc/lvm/VGs/VolumeGroupName, where VolumeGroupName stands for an arbitrary VG name. /proc/lvm/VGs/VolumeGroupName/ in turn holds a file named group containing summary information for the VG as a whole. /proc/lvm/VGs/VolumeGroupName/LVs/LogicalVolumeName holds information for an arbitrary LV named LogicalVolumeName, and /proc/lvm/VGs/VolumeGroupName/PVs/PhysicalVolumeName contains information for an arbitrary PV named PhysicalVolumeName. All of the information in the files below /proc/lvm/VGs/ is presented in attribute/value pairs to be easily parsable.

Examples
We have disk partitions /dev/sda3, /dev/sdb1 and /dev/hda2 free for use and want to create a volume group named "test_vg". Steps required:

1. Change the partition type of these 3 partitions to 0x8e with fdisk (see pvcreate(8): 0x8e identifies LVM partitions).
2. pvcreate /dev/sda3 /dev/sdb1 /dev/hda2
3. vgcreate test_vg /dev/sda3 /dev/sdb1 /dev/hda2

With our volume group "test_vg" now online, we can create logical volumes. For example, a logical volume with a size of 100MB and a standard name (/dev/test_vg/lvol1), and another one named "my_test_lv" with a size of 200MB striped (RAID0) across all three physical volumes. Steps required:

1. lvcreate -L 100 test_vg
2. lvcreate -L 200 -n my_test_lv -i 3 test_vg

Now let's rock and roll.
For example, create a file system with "mkfs -t ext2 /dev/test_vg/my_test_lv" and mount it with "mount /dev/test_vg/my_test_lv /usr1".

SEE ALSO
e2fsadm(8), lvchange(8), lvcreate(8), lvdisplay(8), lvextend(8), lvmchange(8), lvmdiskscan(8), lvmcreate_initrd(8), lvmsadc(8), lvmsar(8), lvreduce(8), lvremove(8), lvrename(8), lvscan(8), pvchange(8), pvcreate(8), pvdata(8), pvdisplay(8), pvmove(8), pvscan(8), vgcfgbackup(8), vgcfgrestore(8), vgchange(8), vgck(8), vgcreate(8), vgdisplay(8), vgexport(8), vgextend(8), vgimport(8), vgmerge(8), vgmknodes(8), vgreduce(8), vgremove(8), vgrename(8), vgscan(8), vgsplit(8)

AUTHOR
Heinz Mauelshagen <Linux-LVM@Sistina.com>

Heinz Mauelshagen                  LVM TOOLS                            LVM(8)