Operating Systems > AIX
HACMP and Filesystems question
Post 302239023 by shockneck on Monday, 22 September 2008, 06:35 PM
Quote:
Originally Posted by mhenryj
[...]I have HACMP running on 2 nodes. I wish to add a filesystem on the primary node to an existing shared VG. Not sure how to do this in a cluster environment.
[...]
A detailed answer would depend on the FS and VG type and the HACMP version in use. Blind guess: try to create the FS on existing disks in an existing shared standard VG with CSPOC:
# smitty cl_lvm
-> Shared File Systems
-> Add a Journaled File System...
CSPOC will (or at least should...) propagate the changes to all nodes in the resource group that holds the shared VG.
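Afterwards a quick check on both nodes does not hurt. This is only a rough sketch with standard AIX commands; <sharedvg> and <mountpoint> are placeholders, not names taken from your setup:
# lsvg -o
(the shared VG should be active only on the node that currently owns the resource group)
# lsvg -l <sharedvg>
(the new LV and its mount point should be listed)
# lsfs <mountpoint>
(the /etc/filesystems stanza that CSPOC propagates should exist on both nodes)
# df -g <mountpoint>
(the filesystem should be mounted only where the resource group is online)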
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Filesystems

My partner changed the server's IP address and now I can't mount the Oracle filesystem. What do I do? I don't want to reinstall Unix. My Unix is SCO UNIX 5.0.5. (9 Replies)
Discussion started by: marun

2. UNIX for Advanced & Expert Users

filesystems resizing

I want to resize my filesystem partitions. The reason is that I have 11GB of disk space unused by Unix, which divvy reveals. Is there a way I could resize my filesystems without doing a reinstallation? The secondary problem is that the boot image is too large for a diskette (5MB). I'm running SCO... (10 Replies)
Discussion started by: sshokunbi

3. Shell Programming and Scripting

Filesystems GT 95%

Hi. How can I print only the file systems that are more than 95% full? I used the df -k output and tried to check each file system and then print only the ones that meet the criteria... but my solution seems kludgy. (3 Replies)
Discussion started by: YS2002

4. AIX

AIX HACMP cluster question (Oracle & SAP)

Hello, I have 3 nodes (A, B, C) all configured to start up with HACMP, but I would like to configure HACMP in such a way that: 1) Node B starts up first; after the cluster successfully starts up and mounts all the filesystems, then 2) Node A and Node C start up. ... (4 Replies)
Discussion started by: filosophizer

5. AIX

HACMP and disaster recovery question

Hi guys, is it possible to fail over an HACMP cluster in one data centre via SRDF to a single node in another data centre, or do I need a cluster there in any case? This is only meant as a worst-case scenario and my company doesn't want to spend more money than absolutely necessary. I know the... (3 Replies)
Discussion started by: zxmaus

6. AIX

Question about HACMP for active-active mode

Hi all, I am new to HACMP, so sorry for the newbie question, but I did search the forum and it seems that no one has asked this before. If a 2-node cluster runs in active-active mode (with the same application), what is the benefit of using HACMP? If it runs in active-standby, it is easy to... (9 Replies)
Discussion started by: qiulang

7. AIX

HACMP does not start db2 after failover (db2nodes not getting modified by hacmp)

Hi, when I do a failover, HACMP always starts DB2, but recently it has failed to start it. I noticed the issue is that db2nodes.cfg is not modified by HACMP and still shows the primary node. I manually changed the node name to the secondary, after which DB2 started immediately. I am unable to figure out why HACMP is... (4 Replies)
Discussion started by: gkr747

8. Shell Programming and Scripting

filesystems > 70%

I need a script that will show me the filesystems that are greater than 70% full... but I am not sure how to filter using df -h | grep. Thank you for your help! (6 Replies)
Discussion started by: eponcedeleonc

9. AIX

HACMP 4.3 Question

A quick question on HACMP 4.3. We have two old F80 systems running HACMP 4.3; currently Box (A) is running one side of a warehouse and Box (B) is running the other side. They both have the ability to fail over to each other. Currently both Box A and Box B are running on Box A. In between is a Hitachi disk... (2 Replies)
Discussion started by: jaudis

10. AIX

FileSystems under HACMP

Dear fellows, I'm now working on an HACMP cluster (version 7.1) with 2 nodes (Node1 active / Node2 passive) and 1 resource group on the active node (Node1), which is UNMANAGED for both nodes. So all the data VGs are on Node1. Then I had a JFS2 filesystem fill up (located on one of these data VGs... (8 Replies)
Discussion started by: LoLo92
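(Discussions 3 and 8 above both ask how to list only the filesystems over a usage threshold. For reference, one common approach is sketched here; df -kP and awk are standard commands, and the threshold is just an example. Replace 95 with 70 for the second question.)
# df -kP | awk 'NR>1 && $5+0 > 95 {print $6, $5}'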
HASTD(8)						    BSD System Manager's Manual 						  HASTD(8)

NAME
     hastd -- Highly Available Storage daemon

SYNOPSIS
     hastd [-dFh] [-c config] [-P pidfile]

DESCRIPTION
     The hastd daemon is responsible for managing highly available GEOM providers. It allows data to be stored transparently on two physically separated machines connected over a TCP/IP network. Only one machine (cluster node) can actively use storage provided by hastd; this machine is called primary. The hastd daemon operates on the block level, which makes it transparent to file systems and applications.

     There is one main hastd daemon, which starts a new worker process as soon as the role for a given resource is changed to primary, or as soon as the role for a given resource is changed to secondary and the remote (primary) node successfully connects to it. Every worker process gets a new process title (see setproctitle(3)) which describes its role and the resource it controls. The exact format is:

           hastd: <resource name> (<role>)

     If (and only if) hastd operates in the primary role for a given resource, a corresponding /dev/hast/<name> disk-like device (GEOM provider) is created. File systems and applications can use this provider to send I/O requests to. Every write, delete and flush operation (BIO_WRITE, BIO_DELETE, BIO_FLUSH) is sent to the local component and replicated on the remote (secondary) node if it is available. Read operations (BIO_READ) are handled locally unless an I/O error occurs or the local version of the data is not yet up to date (synchronization is in progress).

     The hastd daemon uses the GEOM Gate class to receive I/O requests from the in-kernel GEOM infrastructure. The geom_gate.ko module is loaded automatically if the kernel was not compiled with the following option:

           options GEOM_GATE

     The connection between two hastd daemons is always initiated from the one running as primary to the one running as secondary. When the primary hastd is unable to connect or the connection fails, it will try to re-establish the connection every few seconds. Once the connection is established, the primary hastd will synchronize every extent that was modified during the connection outage to the secondary hastd.

     It is possible that, in the case of a connection outage between the nodes, the hastd primary role for a given resource will be configured on both nodes. This in turn leads to incompatible data modifications. Such a condition is called a split-brain and cannot be resolved automatically by the hastd daemon, as that would most likely lead to data corruption or the loss of important changes. Even though it cannot be fixed by hastd itself, it will be detected, and a further connection between independently modified nodes will not be possible. Once this situation is resolved manually by an administrator, the resource on one of the nodes can be initialized (erasing local data), which makes a connection to the remote node possible again. Connection of the freshly initialized component will trigger full resource synchronization.

     A hastd daemon never picks its role automatically. The role has to be configured with the hastctl(8) control utility by additional software like ucarp or heartbeat that can reliably manage role separation and switch the secondary node to the primary role in case of the primary's failure.

     The hastd daemon can be started with the following command line arguments:

     -c config
             Specify an alternative location of the configuration file. The default location is /etc/hast.conf.

     -d      Print or log debugging information. This option can be specified multiple times to raise the verbosity level.

     -F      Start the hastd daemon in the foreground. By default hastd starts in the background.

     -h      Print the hastd usage message.

     -P pidfile
             Specify an alternative location of the file where the main process PID will be stored. The default location is /var/run/hastd.pid.

FILES
     /etc/hast.conf       The configuration file for hastd and hastctl(8).
     /var/run/hastctl     Control socket used by the hastctl(8) control utility to communicate with hastd.
     /var/run/hastd.pid   The default location of the hastd PID file.
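     (The configuration file itself is not reproduced on this page. As a rough illustration only, with made-up device path and addresses and not taken from this manual, a minimal /etc/hast.conf for the resource "shared" used in the EXAMPLES section could look roughly like this; see hast.conf(5) for the authoritative syntax:)

           resource shared {
                   on nodeA {
                           local /dev/da1
                           remote 192.168.0.2
                   }
                   on nodeB {
                           local /dev/da1
                           remote 192.168.0.1
                   }
           }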
EXIT STATUS
     Exit status is 0 on success, or one of the values described in sysexits(3) on failure.

EXAMPLES
     Launch hastd on both nodes. Set role for resource shared to primary on nodeA and to secondary on nodeB. Create file system on /dev/hast/shared provider and mount it.

           nodeB# hastd
           nodeB# hastctl role secondary shared

           nodeA# hastd
           nodeA# hastctl role primary shared
           nodeA# newfs -U /dev/hast/shared
           nodeA# mount -o noatime /dev/hast/shared /shared
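     (Two further sequences, sketched here for completeness and not part of this manual page; they assume the resource is named "shared" as above and should be checked against hastctl(8) before use. Taking over manually on nodeB after nodeA fails:)

           nodeB# hastctl role primary shared
           nodeB# fsck -y -t ufs /dev/hast/shared
           nodeB# mount -o noatime /dev/hast/shared /shared

     (Discarding nodeA's local changes after a detected split-brain, so that it can rejoin as secondary and trigger a full resynchronization:)

           nodeA# hastctl role init shared
           nodeA# hastctl create shared
           nodeA# hastctl role secondary shared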
SEE ALSO
     sysexits(3), geom(4), hast.conf(5), ggatec(8), ggated(8), ggatel(8), hastctl(8), mount(8), newfs(8), g_bio(9)

AUTHORS
     The hastd was developed by Pawel Jakub Dawidek <pjd@FreeBSD.org> under sponsorship of the FreeBSD Foundation.

BSD                              February 1, 2010                              BSD