This post covers Section 3, Configure and Administer Advanced vSphere Storage, Objective 3.4, Perform Advanced VMFS and NFS Configurations and Upgrades.
The vSphere Knowledge covered in this objective:
- Identify VMFS and NFS Datastore properties
- Identify VMFS5 capabilities
- Create/Rename/Delete/Unmount a VMFS Datastore
- Mount/Unmount an NFS Datastore
- Extend/Expand VMFS Datastores
- Place a VMFS Datastore in Maintenance Mode
- Identify available Raw Device Mapping (RDM) solutions
- Select the Preferred Path for a VMFS Datastore
- Enable/Disable vStorage API for Array Integration (VAAI)
- Disable a path to a VMFS Datastore
- Determine use case for multiple VMFS/NFS Datastores
Objective 3.4 VMware Resources:
- vSphere Installation and Setup Guide
- vSphere Storage Guide
- Multipathing Configuration for Software iSCSI Using Port Binding
- VMware vSphere® Storage APIs – Array Integration (VAAI)
- vSphere Resource Management Guide
- vSphere Client / vSphere Web Client
– Identify VMFS and NFS Datastore properties
A datastore is a logical container which hides specifics of physical storage from virtual machines and provides a uniform model for storing virtual machine files.
Understanding VMFS Datastores in the vSphere Storage Guide on page 146.
VMFS is a special high-performance file system format optimized for storing virtual machines.
Datastores backed by block storage devices are formatted with the VMFS file system.
Understanding Network File System Datastores in the vSphere Storage Guide on page 152.
NFS clients for NFS 3 and NFS 4.1 are built into ESXi.
NFS can be used as a datastore to store ISO images, virtual machines and templates.
An NFS datastore can be mounted as read-only to serve as a repository for ISO images or virtual machine templates.
NFS supports vMotion and Storage vMotion, HA and DRS, Fault Tolerance (FT not supported on NFS 4.1 datastores), virtual machine snapshots, and large capacity virtual disks.
Virtual disks created on NFS datastores are thin-provisioned by default, unless the array provides hardware acceleration that supports the Reserve Space operation.
NFS v4.1 does not support hardware acceleration.
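From the ESXi shell, the mounted NFS datastores can be listed per protocol version; the two versions use separate esxcli namespaces:

```shell
# List NFS 3 mounts on this host
esxcli storage nfs list

# List NFS 4.1 mounts (separate namespace from NFS 3)
esxcli storage nfs41 list
```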
– Identify VMFS5 capabilities
Understanding VMFS Datastores in the vSphere Storage Guide on page 146.
Supports storage devices greater than 2 TB.
Supports virtual machine disks greater than 2 TB.
Standard 1 MB file system block size.
Ability to reclaim physical storage space on thin provisioned storage devices.
The online upgrade process allows VMFS datastores to be upgraded without disrupting hosts or virtual machines.
New VMFS datastores are created with the GPT format.
An upgraded VMFS datastore continues to use the MBR format until it is expanded beyond 2 TB, at which point the MBR format is converted to GPT.
Supports up to 256 VMFS datastores per host.
Upgrading VMFS Datastores in the vSphere Storage Guide on page 164.
Before upgrading a datastore to VMFS5 ensure all hosts support VMFS5 (running ESXi 5.0 or newer).
The VMFS5 upgrade is a one-way process. Once upgraded to VMFS5, the datastore cannot be reverted to the previous VMFS format.
VMFS3 to VMFS5 datastore upgrade can be performed while the datastore is in use.
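From the ESXi shell, the upgrade can be performed with vmkfstools; a sketch, assuming a hypothetical datastore named Datastore01:

```shell
# Check the current VMFS version and layout of the volume
vmkfstools -P /vmfs/volumes/Datastore01

# Upgrade the VMFS3 volume in place to VMFS5 (one-way operation)
vmkfstools -T /vmfs/volumes/Datastore01
```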
– Create/Rename/Delete/Unmount a VMFS Datastore
Create a VMFS Datastore in the vSphere Storage Guide on page 160.
Change Datastore Name in the vSphere Storage Guide on page 167.
Rename a VMFS Datastore using the Web Client -> Storage -> Datastore
Right click the datastore and select Rename.
Unmount Datastores in the vSphere Storage Guide on page 168.
Unmount VMFS Datastore using the Web Client -> Storage -> Datastore
Right click the datastore and select Unmount Datastore.
Before unmounting the VMFS datastore:
- No virtual machines in the hosts’ inventory reside on the datastore
- The datastore is not used for vSphere HA heartbeating
- The datastore is not managed by Storage DRS
- Storage I/O Control is disabled on the datastore
Delete a VMFS Datastore using the Web Client -> Storage -> Datastore
Right click the datastore and select Delete Datastore.
Remove VMFS Datastores in the vSphere Storage Guide on page 169.
When you delete a datastore, it is destroyed and disappears from all hosts that have access to the datastore.
A VMFS datastore can be deleted while still mounted, but it is preferable to unmount it from all hosts before deleting it.
Before removing a VMFS Datastore:
- Remove or migrate all virtual machines from the datastore
- Ensure no other hosts are accessing the datastore
- Disable Storage DRS for the datastore
- Disable Storage I/O Control on the datastore
- Make sure the datastore is not used for HA heartbeating
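The unmount and remount operations can also be done from the ESXi shell; a sketch, using a hypothetical volume label Datastore01:

```shell
# List mounted filesystems with their UUIDs and labels
esxcli storage filesystem list

# Unmount a VMFS datastore by volume label
esxcli storage filesystem unmount -l Datastore01

# Remount it later by the same label
esxcli storage filesystem mount -l Datastore01
```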
– Mount/Unmount an NFS Datastore
Create a NFS Datastore in the vSphere Storage Guide on page 161.
New NFS Datastore
Select the NFS Version – NFS 3 or NFS 4.1 – Use only one NFS version to access a given datastore.
Provide the Datastore Name, Folder, and Server (IP or FQDN).
NFS datastores can be mounted as read-only.
When an NFS datastore has been unmounted from all hosts it disappears from inventory.
Use the New Datastore wizard to mount an NFS datastore which has been removed from inventory.
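NFS datastores can also be mounted and unmounted from the ESXi shell; a sketch, with a hypothetical server, export path, and datastore name:

```shell
# Mount an NFS 3 export as a datastore
esxcli storage nfs add -H nfs01.lab.local -s /exports/iso -v ISO_Library

# Mount the export read-only (e.g. an ISO/template repository)
esxcli storage nfs add -H nfs01.lab.local -s /exports/iso -v ISO_Library_RO --readonly

# Unmount an NFS datastore by name
esxcli storage nfs remove -v ISO_Library
```

NFS 4.1 exports use the equivalent `esxcli storage nfs41` namespace.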
– Extend/Expand VMFS Datastores
Increasing VMFS Datastore Capacity in the vSphere Storage Guide on page 165.
The following methods are used to increase the capacity of a VMFS datastore:
- Grow any expandable datastore extent so that it fills the available adjacent capacity.
- Add a new extent. A datastore can span over up to 32 extents and appear as a single volume.
If the device backing the datastore has adjacent free capacity, the Expandable column shows Yes and the existing extent can be grown into that space. Otherwise, an available device can be added as a new extent, and the datastore then extends onto it.
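The extent layout can be inspected, and an extent grown, from the ESXi shell; a sketch, assuming a hypothetical NAA device ID and partition number:

```shell
# List the extents backing each VMFS datastore
esxcli storage vmfs extent list

# Grow an existing extent into adjacent free space on the same device
# (the same device:partition is given as both source and target)
vmkfstools --growfs /vmfs/devices/disks/naa.60060160c8301c00:1 /vmfs/devices/disks/naa.60060160c8301c00:1
```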
– Place a VMFS Datastore in Maintenance Mode
Using Storage DRS Maintenance Mode in the vSphere Resource Management Guide on page 93.
Maintenance mode is available to datastores within a Storage DRS-enabled datastore cluster.
Standalone datastores cannot be placed in maintenance mode.
– Identify available Raw Device Mapping (RDM) solutions
Raw Device Mapping in the vSphere Storage Guide on page 203.
Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem.
An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device.
RDM is used in the following situations:
- When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
- In any MSCS clustering scenario that spans physical hosts — virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as virtual disks on a shared VMFS.
Physical Compatibility Mode
In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning virtual machine. All physical characteristics of the underlying hardware are exposed.
Allows the guest operating system to access the hardware directly.
A virtual machine with a physical compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
Virtual Compatibility Mode
In virtual mode, the VMkernel sends only READ and WRITE to the mapped device. The mapped device appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden.
Allows the RDM to behave as if it were a virtual disk, so you can use such features as taking snapshots, cloning, and so on.
When an RDM disk in virtual compatibility mode is cloned or a template is created out of it, the contents of the LUN are copied into a .vmdk virtual disk file.
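Both RDM types are created as mapping files with vmkfstools; a sketch, using a hypothetical device ID, datastore, and VM folder:

```shell
# Create a virtual compatibility mode RDM mapping file (-r)
vmkfstools -r /vmfs/devices/disks/naa.60060160c8301c00 /vmfs/volumes/Datastore01/vm01/vm01_rdm.vmdk

# Create a physical compatibility mode RDM from the same device (-z)
vmkfstools -z /vmfs/devices/disks/naa.60060160c8301c00 /vmfs/volumes/Datastore01/vm01/vm01_rdmp.vmdk
```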
– Select the Preferred Path for a VMFS Datastore
Change the Path Selection Policy in the vSphere Storage Guide on page 191.
When using the Fixed path policy, the preferred path is marked with an asterisk (*) in the Preferred column.
With the Fixed policy, the host uses the designated preferred path, or the first working path discovered if no preferred path has been set. Fixed is the default policy for most active-active storage devices.
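The same settings are available from the ESXi shell via the NMP namespace; a sketch, with hypothetical device and runtime path names:

```shell
# Show the path selection policy currently applied to a device
esxcli storage nmp device list -d naa.60060160c8301c00

# Set the Fixed path selection policy on the device
esxcli storage nmp device set -d naa.60060160c8301c00 -P VMW_PSP_FIXED

# Designate the preferred path for the Fixed policy
esxcli storage nmp psp fixed deviceconfig set -d naa.60060160c8301c00 -p vmhba2:C0:T1:L3
```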
– Enable/Disable vStorage API for Array Integration (VAAI)
Storage Hardware Acceleration in the vSphere Storage Guide on page 243.
Offloads virtual machine and storage management operations to the storage hardware.
- Storage vMotion migrations
- Deploying VMs from Templates
- Cloning VMs
- VMFS locking and metadata operations
- Provisioning thick virtual disks
- Creating FT VMs
To enable or disable hardware acceleration in the Web Client -> Host and Clusters -> Host -> Manage -> Settings -> Advanced System Settings
Change the following options to 0 (disabled) or 1 (enabled):
- DataMover.HardwareAcceleratedMove
- DataMover.HardwareAcceleratedInit
- VMFS3.HardwareAcceleratedLocking
Determine hardware acceleration support for block devices using esxcli storage core device vaai status get
Determine hardware acceleration support for NAS devices using esxcli storage nfs list
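The advanced options above can also be toggled from the ESXi shell; a sketch showing the XCOPY (hardware-accelerated move) primitive, named per the options listed earlier:

```shell
# Check VAAI support status for block devices
esxcli storage core device vaai status get

# Disable the hardware-accelerated copy (XCOPY) primitive host-wide
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0

# Re-enable it
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
```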
– Disable a path to a VMFS Datastore
Disable Storage Paths in the vSphere Storage Guide on page 192.
Paths can be temporarily disabled for maintenance. Paths can be disabled in the Web Client from the datastore, storage device, or adapter view.
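A path can also be taken offline from the ESXi shell; a sketch, with hypothetical device and runtime path names:

```shell
# List paths to a device to find the runtime path name
esxcli storage core path list -d naa.60060160c8301c00

# Disable a path for maintenance
esxcli storage core path set -p vmhba2:C0:T1:L3 --state=off

# Bring the path back online afterwards
esxcli storage core path set -p vmhba2:C0:T1:L3 --state=active
```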
– Determine use case for multiple VMFS/NFS Datastores
ESXi and SAN Use Cases in the vSphere Storage Guide on page 28.
Making LUN Decisions in the vSphere Storage Guide on page 29.
NFS Storage Guidelines and Requirements in the vSphere Storage Guide on page 153.
Use cases for multiple VMFS/NFS datastores include:
- Performance – Datastores provisioned based on performance characteristics (RAID level, type of disks) of the LUNs/Disks to support the virtual machines using the datastores.
- Data Protection – Datastores protected by array features – replication, snapshots, etc
- Recovery/Fault Domains – How many VMs will be impacted by a datastore outage? How long will be required to restore?
- Availability – Separation of virtual machines and virtual machine disks to reduce the impact of a failed datastore.
- Load Balancing – Balancing performance/capacity across multiple datastores
More Section Objectives in the VCP6-DCV Delta Exam Study Guide Index
I hope you found this helpful. Feel free to add anything associated with this section using the comments below. Happy studying.