01-31-2020
Have you considered using SR-IOV or tagged virtual interfaces?
Are you running virtual machines on that host, or is it a bare-metal configuration?
Regards
Peasant.
10 More Discussions You Might Find Interesting
1. News, Links, Events and Announcements
(0 Replies)
Discussion started by: Driver
2. Linux
This is actually my second attempt at installing Linux on my machine. Instead of wiping my HD (big mistake), I thought I would give the LiveCD option a try.
I was able to boot Virtual Linux under option 2 (the "happy" mode that doesn't freeze while loading) normally... until I got to the login... (2 Replies)
Discussion started by: invot
3. Red Hat
Hi everyone,
Can you please tell me the procedure to configure a virtual IP (CARP) mechanism on Red Hat Linux?
Thanks in advance.
Regards,
Jagdish Machhi (1 Reply)
Discussion started by: jagdish.machhi@
4. High Performance Computing
Hi Guys,
I'm busy building an LVS-NAT cluster on Red Hat server 5.1 and I need a kernel that has LVS capabilities for Red Hat server 5.1. Is there anyone who can advise me where I can get this kernel? I have already visited the following site, Ultra Monkey: and this has old kernels e.g. 2.4.20... (2 Replies)
Discussion started by: Linux Duke
5. Solaris
Hello All,
I have a requirement to add multiple virtual interfaces on a non-global zone (Solaris 10). The global zone is a 2-node Veritas Cluster Server. So, my question is: do we have to make any modifications to the cluster config (which I think should not be the case)? Can anyone help me... (11 Replies)
Discussion started by: mahive
6. Linux
My setup consists of a hardware node, which hosts several virtual machines (OpenVZ, to be precise). The hardware node has two network interfaces (<ifA>, <ifB>) connected to different subnets (<networkA>, <networkB>). I want to route the traffic of certain VEs over <ifB> while routing the other VEs... (0 Replies)
Discussion started by: bakunin
7. UNIX for Dummies Questions & Answers
After installing PV (paravirtual) drivers I am not able to check the network speed of my Ethernet port.
Please check the output of mii-tool and ethtool:
# mii-tool eth0
SIOCGMIIPHY on 'eth0' failed: Operation not supported
# ethtool eth0
Settings for eth0:
Link... (2 Replies)
Discussion started by: pinga123
8. UNIX for Advanced & Expert Users
Hello,
I am trying to understand the VIRT field that shows in the TOP command output. I have a users application that appears to be leaking memory. I see that the field VIRT in the top output is showing 55.8g.
The question is where is that getting stored? The disk does not appear to have... (7 Replies)
Discussion started by: jaysunn
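On the VIRT question above: VIRT counts address space the process has reserved, not memory that is resident or written to disk, which is why a huge VIRT need not show up anywhere on the filesystem. A minimal Linux-only sketch (reading the VmSize field from /proc is a platform assumption):

```python
import mmap

def vsize_kib():
    # Virtual size (VmSize) of the current process, in KiB, from Linux /proc.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmSize:"):
                return int(line.split()[1])

before = vsize_kib()
region = mmap.mmap(-1, 1 << 30)   # reserve 1 GiB of anonymous memory
after = vsize_kib()

# VIRT (VmSize) grows by roughly 1 GiB even though no pages have been
# touched yet, so RSS and disk usage stay essentially unchanged.
print(after - before)
region.close()
```

This is why a leaking application can report tens of gigabytes in VIRT while the machine still has free RAM and disk: the leak may be in reserved-but-untouched address space.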
9. Solaris
Hi All,
In the course of understanding networking in Solaris, I have these doubts about interfaces. Please clarify them for me. I have done fair research on this site and others but could not find clarification.
1. In the "ifconfig -a" command, I see many interfaces and their configurations. But I see many... (1 Reply)
Discussion started by: satish51392111
10. UNIX for Dummies Questions & Answers
So after getting a Nagios plugin up and running that checks certain things including network interfaces, I get an error off the one box I built (as opposed to all of the others that were built by a former employee). The error complains of the "NIC logical group" failing.
All the boxes are HP... (7 Replies)
Discussion started by: xdawg
LEARN ABOUT DEBIAN
fence_xvmd
fence_xvmd(8) System Manager's Manual fence_xvmd(8)
NAME
fence_xvmd - Libvirt-based, general purpose fencing host for virtual machines.
SYNOPSIS
fence_xvmd [OPTION]...
DESCRIPTION
fence_xvmd is an I/O Fencing host which resides on bare metal machines and is used in conjunction with the fence_xvm fencing agent.
Together, these two programs can be used to fence machines which are part of a cluster.
If the virtual machines are backed by clustered storage, or if the virtual machines may be migrated to other physical machines, all physical
machines in question must also be part of their own CMAN/OpenAIS-based cluster. Furthermore, the bare-metal cluster is required to have
fencing configured if virtual machine recovery is expected to be automatic.
fence_xvmd accepts options on the command line and from cluster.conf.
OPTIONS
-f Foreground mode (do not fork)
-d Enable debugging output. The more times you specify this parameter, the more debugging output you will receive.
-i family
IP family to use (auto, ipv4, or ipv6; default = auto)
-a address
Multicast address to listen on (default=225.0.0.12 for ipv4, ff02::3:1 for ipv6)
-p port
Port to use (default=1229)
-I interface
Network interface to listen on, e.g. eth0.
-C auth
Authentication type (none, sha1, sha256, sha512; default=sha256). This controls the authentication mechanism used to authenticate
clients. The three SHA hashes use a key which must be shared between both the virtual machines and the host machine or cluster.
The three SHA authentication mechanisms use a simple bidirectional challenge-response based on pseudo-random number generation and
a shared private key.
-c hash
Packet hash type (none, sha1, sha256, sha512; default=sha256). This controls the hashing mechanism used to authenticate fencing
requests. The three SHA hashes use a key which must be shared between both the virtual machines and the host machine or cluster.
-k key_file
Use the specified key file for packet hashing / SHA authentication. When both the hash type and the authentication type are set to
"none", this parameter is ignored.
-u Fence by UUID instead of virtual machine name.
-? Print out a help message describing available options, then exit.
-h Print out a help message describing available options, then exit.
-X Do not connect to CCS for configuration; only use command line parameters. CCS configuration parameters override command line
parameters (because they are cluster-wide), so if you need to override a configuration option contained in CCS, you must specify
this parameter.
-L Local-only / non-cluster mode. When used with -X, this option prevents fence_xvmd from operating as a clustered service, obviating
the need to configure/run CMAN on the host domain.
-U uri Force use of the specified URI for connecting to the hypervisor.
-V Print out a version message, then exit.
CCS PARAMETERS
CCS options are simply attributes of the <fence_xvmd> tag, a child of the <cluster> tag in /etc/cluster/cluster.conf.
debug="1"
Same as the -d option. Specify numbers >1 for more debugging information.
family="param"
Same as the -i option.
multicast_address="param"
Same as the -a option.
port="param"
Same as the -p option.
auth="param"
Same as the -C option.
hash="param"
Same as the -c option.
key_file="param"
Same as the -k option.
use_uuid="1"
Same as the -u option.
SEE ALSO
fence(8), fence_node(8), fence_xvm(8)