03-15-2014
That is correct: all ports in an aggregation using LACP need to terminate on the same switch, or at least the same logical switch (e.g. a stack or MC-LAG pair), but that is something to confirm with your network team.
You can also create an IPMP group over two aggregated interfaces, using an active-passive configuration.
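As a rough sketch of that setup on Solaris 11 (link names net0-net3, the aggregate names, and the address are placeholders, not taken from your environment):

```shell
# Create two LACP aggregates, each from two physical links
# (net0..net3 are assumed link names; check yours with `dladm show-phys`).
dladm create-aggr -L active -l net0 -l net1 aggr1
dladm create-aggr -L active -l net2 -l net3 aggr2

# Plumb IP interfaces on both aggregates and group them with IPMP;
# aggr2 is marked standby for an active-passive configuration.
ipadm create-ip aggr1
ipadm create-ip aggr2
ipadm create-ipmp -i aggr1 -i aggr2 ipmp0
ipadm set-ifprop -p standby=on -m ip aggr2

# The data address lives on the IPMP group, not on the aggregates.
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4
```

On Solaris 10 the equivalent is done through /etc/hostname.* files rather than ipadm, but the same layering applies: aggregates at the link layer, IPMP on top.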
Aggregation is not only for bandwidth. With multiple nodes in a cluster you can place one node on one switch and the second node on another, sharing the LDom configuration between them, or run clusters between LDoms for high availability.
Hope that helps.
Regards
Peasant.
LEARN ABOUT HPUX
cmrunnode(1m)
NAME
cmrunnode - run a node in a high availability cluster
SYNOPSIS
cmrunnode [-v] [node_name...] [-t | -w none]
DESCRIPTION
cmrunnode causes a node to start its cluster daemon to join the existing cluster. This command verifies the network configuration before
causing the node to start its cluster daemon.
To start a cluster on one of its nodes, a user must either be superuser (UID=0), or have an access policy of FULL_ADMIN allowed in the cluster
configuration file. See access policy in cmquerycl(1m).
Starting a node will not cause any active packages to be moved to the new node. However, if a package is DOWN, has its switching enabled,
and is able to run on the new node, that package will automatically run there.
If node_name is not specified, the cluster daemon will be started on the local node and will join the existing cluster.
Options
cmrunnode supports the following options:
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages. The -w
option is not required with the -t option as -t does not validate network connectivity, but assumes that all the nodes can meet
any external dependencies such as EMS resources, package subnets, and storage.
node_name...
Start the cluster daemon on the specified node(s).
-w none
By default network probing is performed to check that the network connectivity is the same as when the cluster was configured.
Any anomalies are reported before the cluster daemons are started. The -w none option disables this probing. The option should
only be used if this network configuration is known to be correct from a recent check.
RETURN VALUE
cmrunnode returns the following value:
0 Successful completion.
1 Command failed.
EXAMPLES
Run the cluster daemon on the current node:
cmrunnode
Run the cluster daemons on node1 and node2:
cmrunnode node1 node2
AUTHOR
cmrunnode was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmhaltnode(1m), cmruncl(1m), cmviewcl(1m), cmeval(1m).
Requires Optional Serviceguard Software