I worked with a customer the other day to migrate his direct connect SimpliVity deployment to a 10 GbE switch topology so he could add a couple of compute nodes and expand the CPU and memory resources available to the cluster. He went through the migration but could not get things working correctly. The issue turned out to be the MTU configuration on the physical switch.
Misconfiguration of the physical switch is probably the most common issue I come across when configuring virtual networking to use Jumbo Frames. The configuration varies from switch to switch and vendor to vendor. On some switches the MTU is set globally, on others it is configured on individual ports, and on some it must be enabled globally and then configured per port.
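As a rough illustration of how much the syntax varies, here is what per-port and global jumbo frame configuration might look like on an IOS-style switch. Treat this as a hypothetical sketch; the exact commands and maximum MTU values differ by vendor, platform, and software version, so always check your switch's documentation:

```
! Per-port MTU on an IOS-style switch (hypothetical example)
interface TenGigabitEthernet1/0/1
 mtu 9216

! Some platforms instead use a global setting, e.g.
system mtu jumbo 9198
```

Note that switch-side MTU values like 9216 are frame sizes that include headers, which is why they are larger than the 9000 you configure on the vSphere side.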
Jumbo Frames must be configured correctly end-to-end. This means the vSwitch, the vmkernel interfaces, the physical switches, and the endpoints must all be configured to support the larger frames. Using larger frames provides more efficient processing of network traffic (in many cases), but it does add a bit of complexity. If any part of the path is misconfigured, frames larger than 1500 bytes may not pass at all, or they may be fragmented along the way, which defeats the purpose of using Jumbo Frames.
To configure Jumbo Frames on vSphere, set the MTU to 9000 on the vSwitch and on each vmkernel interface that will carry jumbo traffic.
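From the ESXi command line this can be done with esxcli. The vSwitch name vSwitch1 here is just a placeholder; vmk2 matches the interface tested later in this post:

```shell
# Set MTU 9000 on a standard vSwitch (vSwitch1 is a placeholder name)
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Set MTU 9000 on the vmkernel interface
esxcli network ip interface set -i vmk2 -m 9000
```

For a vSphere Distributed Switch the MTU is set on the switch object in vCenter rather than per host.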
On the ESXi host, the vmkping command can be used to test connectivity through a specific vmkernel interface, which makes it useful for verifying proper configuration of Jumbo Frames.
The following options are used to test Jumbo Frames configuration using vmkping:
-I Specifies the vmkernel interface to use
-d Sets the do not fragment (DF) bit on IPv4 packets – this ensures the packet will not be fragmented
-s Sets the size of the packet – to test Jumbo Frames use 8192
The full command to test vmk2 for correct Jumbo Frames configuration:
vmkping -I vmk2 -d -s 8192 IP.ADD.RES.SS
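Why 8192 rather than 9000? The -s flag sets the ICMP payload size, and on IPv4 the IP and ICMP headers add another 28 bytes on top of it. A payload of 8192 is comfortably above the standard 1500-byte MTU, so it proves jumbo frames are passing; the largest payload that still fits a 9000-byte MTU in a single unfragmented packet is 8972. A quick sanity check of the arithmetic:

```python
# Jumbo Frame payload math for vmkping -s on IPv4.
MTU = 9000
IPV4_HEADER = 20   # bytes, assuming no IP options
ICMP_HEADER = 8    # bytes

# Largest ICMP payload that fits in one 9000-byte packet without fragmenting:
max_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(max_payload)  # 8972

# The test value of 8192 exceeds a standard 1500-byte MTU but stays
# within the jumbo MTU, so with -d set it can only succeed end-to-end
# if every hop supports jumbo frames.
assert 1500 < 8192 + IPV4_HEADER + ICMP_HEADER <= MTU
```

So vmkping -I vmk2 -d -s 8972 would exercise the full 9000-byte MTU, while -s 8192 is a slightly more conservative check.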
Jumbo Frames can improve network efficiency and increase network performance, but they must be configured correctly end-to-end. Hope these tips help.