Barracuda CloudGen Firewall

Best Practice - Performance Tuning on VMware Hypervisors


VMware offers several virtualization technologies to improve the performance of virtual networks and hosts. You can increase the throughput of your virtual Barracuda CloudGen Firewall by improving the virtual machine's performance and optimizing the virtual and physical network infrastructure surrounding your Barracuda CloudGen Firewall Vx.

You can tune the following settings:

Manual MAC Addresses

Use manually assigned MAC addresses for your virtual network adapters. The license of the Barracuda CloudGen Firewall Vx is bound to the MAC address of the first network interface. If you do not reconfigure the virtual network adapter to use a manually assigned MAC address before the first start, the VMware hypervisor automatically assigns a MAC address. These automatically generated MAC addresses can change when you move your configuration file to another directory on the same host or move the VM to another host machine, rendering your license invalid. The MAC address can also change if you edit or remove the following options from the VM's vmx configuration file (where n is the network interface number):
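The options in question are VMware's standard per-adapter configuration keys; the address value shown is a placeholder, and n must be replaced by the adapter number:

```
ethernetn.addressType = "generated"
ethernetn.generatedAddress = "00:0c:29:xx:xx:xx"
ethernetn.generatedAddressOffset = "0"
```

Editing or deleting any of these lines causes the hypervisor to regenerate the MAC address the next time the VM starts.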


VMware reserves a dedicated range of MAC addresses (beginning with 00:50:56) for manual assignment. The hypervisor refuses to start any VM using a network adapter with a manually assigned MAC address outside this reserved range. If an automatic MAC address has already been assigned and you have not licensed the Barracuda CloudGen Firewall Vx yet, remove the NIC from the VM's configuration and add a new Ethernet adapter. Make sure that you use the same virtual network adapter type (e1000, VMXNET2, or VMXNET3).
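A minimal sketch of a manually assigned MAC address in the VM's .vmx file, assuming the first adapter (ethernet0) and a sample address from the manual range:

```
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:01:02:03"
```

Edit these lines while the VM is powered off, before the first start of the Barracuda CloudGen Firewall Vx, so that the license binds to the static address.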

Virtual Network Adapters and DirectPath I/O

The performance of the virtual networking infrastructure is generally dependent on CPU resource availability. You can significantly improve network performance by increasing the available CPU resources.

If CPU resources are not the limiting factor, network performance can benefit from VMware's DirectPath I/O. DirectPath I/O uses the Intel VT-d and AMD-Vi CPU extensions to grant the guest operating system direct access to the host's hardware components. With DirectPath I/O, networking performance generally benefits more from increasing the available RAM than from additional CPU resources. The following virtualization features are not available when using DirectPath I/O: physical NIC sharing, memory overcommit, vMotion, and Network I/O Control.

TCP Offload Engine (TOE)

TCP offload engines cannot be used with routing devices such as the Barracuda CloudGen Firewall. A TOE moves TCP processing onto the NIC, but a firewall must inspect and forward every packet in its own network stack, so this processing cannot be delegated to the hardware.
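On a generic Linux guest, offload settings are typically inspected and disabled with ethtool; this is a sketch only, the interface name eth0 is an assumption, and the available feature flags depend on the NIC driver:

```shell
# Show the current offload settings for the interface
ethtool -k eth0

# Disable TCP segmentation, generic segmentation, and generic receive offload
ethtool -K eth0 tso off gso off gro off
```

Both commands require root privileges on the guest.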

Channel Bonding (Interface Bundles, Etherchannel)

Channel bonding is useful for scenarios in which networking hardware is the limiting factor. Generally, all connections between the VMware hypervisor and the physical network can benefit from channel bonding. This includes the connection from the vSwitch to the physical network.

Do not use channel bonding with virtualized network interfaces; it increases the hypervisor's management overhead and decreases overall network performance.

Disk I/O

You can increase the performance of your virtual disk by:

  • Using eager zeroed thick provisioning when creating the virtual disk. If you are already using thin-provisioned disks, you can convert them to thick provisioning on the VMware host.
  • Increasing the timeout for the disks in the Barracuda CloudGen Firewall Vx to at least 180 seconds.
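Both steps above can be sketched as commands; the datastore path and guest device name below are illustrative placeholders, not values from your environment:

```shell
# On the ESXi host: inflate a thin-provisioned disk to eager zeroed thick
# (run while the VM is powered off; the path is an example)
vmkfstools --inflatedisk /vmfs/volumes/datastore1/CGF-Vx/CGF-Vx.vmdk

# In the guest: raise the SCSI disk timeout to 180 seconds
# (sda is an example device; requires root)
echo 180 > /sys/block/sda/device/timeout
```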

If you have changed your MAC address and invalidated your license, contact Barracuda Networks Technical Support.