Is Virtualisation Right for Colo? Post 302204411 by mark54g on Wednesday 11th of June 2008 01:05:55 PM
You could in theory use Xen, or also look at VMware Server and see if that works for you. Xen will likely give you less overhead, but possibly worse guest performance depending on the type of workload and on whether you are able to use the paravirtualization drivers.

VMware Server is free now. You may also want to look at QEMU, but VMware will likely solve your problem for you. Figure on losing somewhere under 9% of your overall horsepower in exchange for the flexibility.
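As a rough way to gauge which route fits (this is an added sketch, not part of the original post), check whether the host CPU exposes hardware virtualization extensions; without them, Xen guests need to run paravirtualized to perform well:

      # Minimal sketch, assuming a Linux host: look for Intel VT-x (vmx) or
      # AMD-V (svm) flags in /proc/cpuinfo.
      grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u
      # Any output means fully virtualized (HVM) guests are an option; no
      # output means plan on paravirtualized guests and their drivers under Xen.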
 

4 More Discussions You Might Find Interesting

1. UNIX and Linux Applications

Virtualisation of Linux/Solaris

Hello, I need to build a server with both Solaris and Linux guest VMs using full virtualisation on x64. The guest OSes are Solaris 10 (not OpenSolaris) and Linux: Red Hat (CentOS) and SuSE. I really can't get a Solaris 10 domU to run on Linux (it hangs after the GRUB loader ...), nor on an OpenSolaris 11 xVM Dom0... (0 Replies)
Discussion started by: manifesto

2. Solaris

Virtualisation in Solaris

Is there any other virtualisation available in Solaris 10 other than zones and LDOMs? (3 Replies)
Discussion started by: priky

3. Solaris

Problem with virtualisation

Hi, I am using a Compaq t5207tu with a 60 GB hard disk and 2.5 GB RAM. It dual-boots Windows XP and Solaris. I tried installing Sun xVM on Solaris to build a client-server setup (32-bit processor and 32-bit OS): a guest OS (Solaris 10) on a host OS (Solaris 10). But whenever I try to install... (14 Replies)
Discussion started by: shruti_gupta

4. UNIX for Dummies Questions & Answers

Server with OpenVZ virtualisation is not responding but VMs are OK

The server is accessible only via IPMI; SSH and the web control panel time out. This goes on for several hours. The server does not have high load or any suspicious processes. I checked /etc/hosts.deny and restarted ssh, but nothing changed. (0 Replies)
Discussion started by: postcd
VMX(4)							   BSD Kernel Interfaces Manual 						    VMX(4)

NAME
     vmx -- VMware VMXNET3 Virtual Interface Controller device
SYNOPSIS
     To compile this driver into the kernel, place the following line in your
     kernel configuration file:

           device vmx

     Alternatively, to load the driver as a module at boot time, place the
     following line in loader.conf(5):

           if_vmx_load="YES"
DESCRIPTION
     The vmx driver provides support for the VMXNET3 virtual NIC available in
     virtual machines by VMware.  It appears as a simple Ethernet device but
     is actually a virtual network interface to the underlying host operating
     system.

     This driver supports the VMXNET3 driver protocol, as an alternative to
     the emulated pcn(4) and em(4) interfaces also available in the VMware
     environment.

     The vmx driver is optimized for the virtual machine; it can provide
     advanced capabilities depending on the underlying host operating system
     and the physical network interface controller of the host.  The vmx
     driver supports features like multiqueue support, IPv6 checksum
     offloading, MSI/MSI-X support and hardware VLAN tagging in VMware's VLAN
     Guest Tagging (VGT) mode.

     The vmx driver supports VMXNET3 VMware virtual NICs provided by the
     virtual machine hardware version 7 or newer, as provided by the
     following products:

     o   VMware ESX/ESXi 4.0 and newer
     o   VMware Server 2.0 and newer
     o   VMware Workstation 6.5 and newer
     o   VMware Fusion 2.0 and newer

     For more information on configuring this device, see ifconfig(8).
MULTIPLE QUEUES
     The vmx driver supports multiple transmit and receive queues.  Multiple
     queues are only supported by certain VMware products, such as ESXi.  The
     number of queues allocated depends on the presence of MSI-X, the number
     of configured CPUs, and the tunables listed below.

     FreeBSD does not enable MSI-X support on VMware by default.  The
     hw.pci.honor_msi_blacklist tunable must be disabled to enable MSI-X
     support.
LOADER TUNABLES
     Tunables can be set at the loader(8) prompt before booting the kernel or
     stored in loader.conf(5).

     hw.vmx.txnqueue
     hw.vmx.X.txnqueue
             Maximum number of transmit queues allocated by default by the
             driver.  The default value is 8.  The maximum supported by the
             VMXNET3 virtual NIC is 8.

     hw.vmx.rxnqueue
     hw.vmx.X.rxnqueue
             Maximum number of receive queues allocated by default by the
             driver.  The default value is 8.  The maximum supported by the
             VMXNET3 virtual NIC is 16.

     hw.vmx.txndesc
     hw.vmx.X.txndesc
             Number of transmit descriptors allocated by the driver.  The
             default value is 512.  The value must be a multiple of 32, and
             the maximum is 4096.

     hw.vmx.rxndesc
     hw.vmx.X.rxndesc
             Number of receive descriptors per ring allocated by the driver.
             The default value is 256.  The value must be a multiple of 32,
             and the maximum is 2048.  There are two rings so the actual
             usage is doubled.
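     As an illustration (not part of the original manual page), a
     loader.conf(5) fragment combining these tunables might look like the
     following; the values are hypothetical and chosen only to respect the
     documented limits:

           # Illustrative loader.conf(5) entries; values are examples, not defaults.
           if_vmx_load="YES"                # load the vmx(4) driver at boot
           hw.pci.honor_msi_blacklist="0"   # allow MSI-X under VMware (needed for multiqueue)
           hw.vmx.txnqueue="4"              # cap transmit queues at 4 (hardware maximum is 8)
           hw.vmx.rxnqueue="4"              # cap receive queues at 4 (hardware maximum is 16)
           hw.vmx.txndesc="1024"            # transmit descriptors, multiple of 32 (maximum 4096)
           hw.vmx.rxndesc="512"             # receive descriptors per ring, multiple of 32 (maximum 2048)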
EXAMPLES
     The following entry must be added to the VMware configuration file to
     provide the vmx device:

           ethernet0.virtualDev = "vmxnet3"
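     As a further illustration (not part of the original manual page), the
     resulting interface can be configured with ifconfig(8); the interface
     name and address below are hypothetical:

           ifconfig vmx0 inet 192.0.2.10 netmask 255.255.255.0 up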
SEE ALSO
     altq(4), arp(4), em(4), netintro(4), ng_ether(4), pcn(4), vlan(4),
     ifconfig(8)
AUTHORS
     The vmx driver was ported from OpenBSD and significantly rewritten by
     Bryan Venteicher <bryanv@freebsd.org>.  The OpenBSD driver was written
     by Tsubai Masanari.
BSD                              March 17, 2014                              BSD