
vSphere Software iSCSI Basics

Shared storage provides the foundation of a vSphere deployment and is required when leveraging many of the advanced features available in vSphere, including vSphere High Availability (HA) and the vSphere Distributed Resource Scheduler (DRS). Using the software iSCSI initiator in ESXi is one of the supported options available to provide shared storage connectivity to a vSphere environment. If iSCSI connectivity is not configured correctly, there can be a significant impact on both performance and availability.

When doing discovery and assessments of customer environments, I frequently run across issues with the configuration of iSCSI connectivity to ESXi hosts. The most common iSCSI configuration issues I encounter are:

  • Improper configuration of the VMkernel interfaces used for iSCSI traffic, such as using multiple active vmnics or having standby vmnics configured for the VMkernel.
  • VMkernel interfaces not bound to the software iSCSI adapter, either because the interface is not compliant due to the issue above or because this part of the configuration was never completed.
  • Single points of failure in the iSCSI network, the most common being storage paths that are not separated across physical switches.

In this post I am going to take a look at two common network configurations used when deploying iSCSI connectivity in a vSphere environment, along with how to correctly configure vSphere networking and the software iSCSI adapter when using iSCSI storage.

Requirements and constraints determine the network configuration used, and there are several supported configurations. For this post I want to keep things fairly simple and focus on the two configurations I commonly see.

Regardless of the physical network configuration, the same rules apply when configuring vSphere for iSCSI connectivity using the software iSCSI adapter:

  • Physically or logically separate iSCSI traffic from other network traffic.
  • A VMkernel interface is required for iSCSI traffic.
  • VMkernels used for iSCSI traffic must be configured with a single active vmnic. Do not use standby vmnics on iSCSI VMkernels.
  • Bind VMkernel interfaces used for iSCSI traffic to the software iSCSI adapter.
  • Minimize single points of failure for iSCSI connectivity.

The recommended configuration for iSCSI is to physically separate iSCSI networks. This includes physically separating iSCSI connectivity across multiple NICs and separating iSCSI storage paths across switches dedicated to carrying only iSCSI traffic. This configuration provides the best performance, availability, and security for iSCSI storage connectivity.

[Image: Physically separated iSCSI network]

Another common configuration is to provide multiple paths for connectivity leveraging existing network switches. Paths are physically separated across multiple NICs and network switches, then logically separated from other network traffic by using separate subnets and VLANs to carry iSCSI storage traffic. This is a common configuration I see in, and use for, smaller deployments where iSCSI connectivity is configured to use the same physical switch infrastructure used for management, vMotion, and production virtual machine network traffic.

[Image: Logically separated iSCSI network]
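
Logical separation is typically achieved by tagging the iSCSI port groups with a dedicated VLAN. As a minimal sketch from the ESXi Shell, assuming an example port group name (iSCSI1) and VLAN ID (100):

  # Tag the iSCSI port group with a dedicated VLAN
  # (port group name and VLAN ID are examples only)
  esxcli network vswitch standard portgroup set --portgroup-name=iSCSI1 --vlan-id=100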

A VMkernel interface is required for iSCSI traffic. When the VMkernel interface is created, an IP address and subnet mask are configured. If the vSwitch the VMkernel interface is created on has more than one vmnic associated with it, the Teaming and Failover policy must be configured so the VMkernel interface has a single vmnic set as the active adapter. There should be no standby adapters, and all other vmnics available on the vSwitch should be configured as unused adapters for the VMkernel interface.

[Image: iSCSI1 VMkernel interface Teaming and Failover configuration]
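
As a rough sketch of the same configuration from the ESXi Shell, the commands below create a port group and VMkernel interface for iSCSI and override the port group teaming policy with a single active vmnic. The vSwitch, port group, vmk, vmnic, and IP address names are examples only and will differ in your environment:

  # Create a port group and VMkernel interface for iSCSI traffic
  # (vSwitch, port group, vmk, and IP address are examples only)
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI1
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static

  # Override the vSwitch teaming policy so the port group uses a single active vmnic;
  # vmnics not listed as active or standby are treated as unused
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic2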

If the VMkernel interface has multiple vmnics set as active adapters, or has vmnics set as standby adapters, the VMkernel configuration will not be compliant and the interface will not be available when configuring the network port binding for the software iSCSI adapter.

Once the VMkernel interfaces have been created and configured correctly, the software iSCSI adapter is enabled and configured. Configuring the software iSCSI adapter includes binding the VMkernel interfaces to the software iSCSI adapter and adding the dynamic discovery (SendTargets) targets to the adapter.

The software iSCSI network port binding configuration was commonly overlooked in earlier versions of vSphere because the binding had to be done from the CLI. In current versions the binding configuration can be accomplished using the vSphere Clients.
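
The binding can still be done from the ESXi Shell, and a minimal sketch of enabling the adapter, binding the VMkernel interfaces, and adding the dynamic discovery address looks roughly like the following. The adapter name (vmhba33), VMkernel interfaces, and target address are examples; check the actual adapter name with esxcli iscsi adapter list:

  # Enable the software iSCSI adapter
  esxcli iscsi software set --enabled=true

  # Bind the compliant VMkernel interfaces to the software iSCSI adapter
  # (adapter and vmk names are examples only)
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

  # Add the dynamic discovery (SendTargets) address and rescan for devices
  # (target address is an example only)
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.50:3260
  esxcli storage core adapter rescan --adapter=vmhba33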

The following image shows the software iSCSI adapter with the compliant VMkernel interfaces bound:
[Image: Software iSCSI adapter Network Port Binding]
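
The same binding can be verified from the ESXi Shell. The adapter name below is an example:

  # List the VMkernel interfaces bound to the software iSCSI adapter and their compliance status
  esxcli iscsi networkportal list --adapter=vmhba33

  # Verify that multiple paths are available to the iSCSI devices
  esxcli storage core path list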

These are the basics of configuring connectivity when using software iSCSI in a vSphere environment. For more details, including step-by-step configuration instructions, check out the vSphere 5.5 Storage Guide. The iSCSI Design Considerations and Deployment Guide technical whitepaper is another great resource.

If you are looking to dig even deeper into storage, check out the book Storage Implementation in vSphere 5.0 by @MostafaVMW. The book is based on vSphere 5.0, but the concepts (and most of the configurations) can be applied to current versions of vSphere. Definitely worth reading.

Hopefully you found the information in this post helpful. Questions, constructive criticisms, and thoughts are always welcome in the comments below or you can hit me up @herseyc.
