Operating Systems > AIX
GPFS initial setup, but perhaps this is a SAN/VIO question
Post 302739459 by kneemoe on Tuesday 4th of December 2012, 08:35:58 AM
I'll have to get some info from the SAN guys. All I know about their end of things is that they're handing off the disk to the HBAs assigned to the 795 (the VIO servers). What should I be asking them about the zoning?
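(A rough sketch of what can be gathered from the VIOS side in the meantime - the main thing the SAN team usually wants is the WWPN of each FC port so they can zone the LUNs to it. The fcs0/hdisk names below are just example devices; commands run from the padmin shell:)

Code:
$ lsdev -type adapter | grep fcs                   # FC adapters owned by this VIOS
$ lsdev -dev fcs0 -vpd | grep "Network Address"    # WWPN to hand to the SAN team for zoning
$ cfgdev                                           # rescan once the LUNs are zoned and mapped
$ lsdev -type disk                                 # the new LUNs should show up as hdisks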

Edit: I'm obviously still learning quite a bit - I wasn't aware that I should (or could) make a second vdev using the same backing device to share the LUN out through the VIOs to more than one LPAR. I've now gone ahead and done just that; both LPARs can see the disks I plan to use for GPFS, and I think I'm ready to start building the cluster. Thanks all the same!
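(For anyone who finds this later, the mapping on the VIO servers ended up looking roughly like the sketch below, run as padmin. hdisk4, the vhost numbers, and the vtd names are placeholders for my environment, and the exact reserve attribute name can differ by disk driver - the point is just that the SCSI reserve has to be released before the same backing device can be mapped to more than one client.)

Code:
$ chdev -dev hdisk4 -attr reserve_policy=no_reserve     # release the reserve so both clients can open the LUN
$ mkvdev -vdev hdisk4 -vadapter vhost0 -dev gpfs_lpar1  # map the LUN to the first LPAR's vhost adapter
$ mkvdev -vdev hdisk4 -vadapter vhost1 -dev gpfs_lpar2  # same backing device, second LPAR's vhost adapter
$ lsmap -all                                            # confirm both VTDs point at hdisk4

On the client LPARs the LUN then just shows up as another vscsi hdisk after a cfgmgr.

And since the next step is building the cluster itself, the bare-bones GPFS sequence I'm working from looks something like this - two nodes, one shared disk. Node names, the filesystem name, and the NSD file format are placeholders and vary by GPFS release, so treat it as a rough outline rather than a recipe:

Code:
# /tmp/nsd.txt - GPFS 3.5-style NSD stanza (older releases use colon-separated disk descriptors);
# no server list needed since both LPARs see the LUN directly over vscsi
#   %nsd: device=/dev/hdisk2 nsd=gpfs1nsd usage=dataAndMetadata

$ mmcrcluster -N lpar1:quorum-manager,lpar2:quorum-manager -p lpar1 -s lpar2 \
    -r /usr/bin/ssh -R /usr/bin/scp -C gpfscl1          # create the two-node cluster
$ mmchlicense server --accept -N lpar1,lpar2            # accept the server licenses
$ mmcrnsd -F /tmp/nsd.txt                               # turn the shared hdisk into an NSD
$ mmstartup -a                                          # start GPFS on all nodes
$ mmcrfs gpfs0 -F /tmp/nsd.txt -T /gpfs                 # create the filesystem on the NSD
$ mmmount gpfs0 -a                                      # mount it cluster-wide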

Last edited by kneemoe; 12-04-2012 at 02:04 PM..
 

9 More Discussions You Might Find Interesting

1. AIX

AIX virtualization: IVM initial network setup

I have a System p model 505 server. I am trying to virtualize my server. My network admin has given me 10 IP addresses (with the DNS already updated with names that map onto each IP). E.g.: IP1 maps to m1.lab, IP2 maps to m2.lab, etc. I have been following the redpaper "Integrated Virtualization... (0 Replies)
Discussion started by: apsaix

2. AIX

VIO SAN STORAGE LPAR ( Dynamically increasing size )

Hi, Is there a way to dynamically increase the size of a virtual disk on the LPAR? The virtual disk is coming from my VIO Server. From my SAN I have allocated a disk to the VIO Server and from the VIO Server to my LPAR.... If I increase the space of the logical SAN DISK (DS 4700 using IBM TotalStorage... (0 Replies)
Discussion started by: filosophizer

3. AIX

JS23 VIO setup.

I hope there is someone out there that understands VIO settings on a JS23. The problem I have is that the VIO server already exists, but I need to increase the number of VSCSI adapters (VHOST). If this were a normal LPAR on a p series using an HMC, I would simply create a virtual SCSI adapter and... (3 Replies)
Discussion started by: johnf

4. AIX

AIX GPFS Setup

Hello, can someone guide me on how to create a GPFS filesystem? I've read a couple of Redbooks, but certain things are still unclear, like whether you need to download files or licenses... any help would be appreciated! (2 Replies)
Discussion started by: ollie01

5. AIX

How to configure new hdisk such that it has a gpfs fs on it and is added to a running gpfs cluster?

Hi, I have a running GPFS cluster. For every mountpoint that I have created, I have one disk assigned to it. That disk is converted to an NSD and is a part of the GPFS cluster. Now I have a new disk, and there is a requirement to add it to the GPFS cluster such that it becomes an NSD.... (1 Reply)
Discussion started by: aixromeo

6. Shell Programming and Scripting

initial setup for iconv

Hi, I am trying iconv on my Linux machine for conversion of Russian to English, but I am not able to get the exact result. I want to know what initial setup we need to do on the Linux machine to get the desired output. I created a sample Russian file using Google Translate in CP866 encoding and full... (5 Replies)
Discussion started by: peeyushgehlot

7. IP Networking

DNS question about initial Master/Slave setup

Hey everyone. I'm creating a DNS master/slave server setup. I have the configurations all done, I believe: the master has the required zone file, and the named.conf file has the allow-transfer and allow-query options set. The slave has its own configs set. My question is that when initially... (1 Reply)
Discussion started by: Lost in Cyberia

8. AIX

New IBM Power8 (S822) and StorWiz V3700 SAN, best practices for production setup/config?

Hello, Got an IBM Power8 box (S822) that I am configuring to replace our existing IBM machine. Wanted to touch base with the expert community here to ensure I don't miss anything critical in my setup/config of AIX. Did a fresh AIX 7.1 install on the internal SCSI hdisk, mirror'ed... (3 Replies)
Discussion started by: c3rb3rus

9. HP-UX

SAN Migration question

Hi, I am very new to HP-UX, and we're going to be doing a SAN migration. We're going to take down the machine, and zone it to the new SAN. My question is, will the device names change and will that interfere with the LVM? If the new disks come in with different device names, how would I... (3 Replies)
Discussion started by: BG_JrAdmin