vSphere 6.0: Understanding Virtual Storage (Part 1)

Storage is one of the most important aspects of configuring and managing your virtual environment. Storage options give you the flexibility to set up your storage based on your cost, performance, and manageability requirements. Shared storage is useful for disaster recovery, high availability, and moving VMs from one host to another.

In this post, we’ll discuss some basic concepts of shared storage and storage protocols, datastores, and the supported file systems and storage types: VMFS, NFS, vSAN, and VVols.

Storage Overview

Datastores are the shared storage containers used to store VMs that are hosted on different ESXi hosts. ESXi hosts should be configured so that they can all access the shared storage.

Several storage technologies can be used with ESXi hosts in a vSphere environment:

  • Direct Attached Storage: storage disks that are directly connected to the ESXi host, either internally or externally.
  • Fibre Channel (FC): a high-speed transport protocol used in storage area networks (SANs).
  • Fibre Channel over Ethernet (FCoE): FC traffic is encapsulated into FCoE frames and carried over Ethernet. FCoE also reduces network port and cabling requirements.
  • iSCSI: a SCSI transport protocol that enables storage devices to transmit their data over a TCP/IP network.
  • Network Attached Storage (NAS): storage shared at the file-system level over standard TCP/IP networks.

                                                        Figure: Thanks to VMware


Storage shared among ESXi hosts is presented as a datastore. A datastore is a logical storage unit that can use disk space on a single physical device or span several physical devices. Datastores are used to store VMs, templates, and ISO images.

vSphere supports the following types of datastores:

  • VMFS
  • NFS
  • Virtual SAN (vSAN)
  • Virtual Volumes (VVol)

A VM is stored as a set of files in its own directory in a datastore. Datastores can also be used to store ISO images and VM templates.
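To make the "set of files" concrete, the sketch below maps the typical file suffixes found in a VM's datastore directory to their roles. This is an illustrative helper, not part of any VMware API; the suffixes themselves (.vmx, .vmdk, and so on) are the standard ones VMware uses.

```python
# Illustrative only: typical files in a VM's directory on a datastore,
# keyed by filename suffix. Not a VMware API.
VM_FILE_ROLES = {
    ".vmx": "VM configuration file",
    "-flat.vmdk": "virtual disk data (the actual blocks)",
    ".vmdk": "virtual disk descriptor",
    ".nvram": "VM BIOS/EFI settings",
    ".vmsd": "snapshot metadata",
    ".vmsn": "snapshot state",
    ".log": "VM log file",
}

def describe_vm_file(filename: str) -> str:
    """Return the role of a file found in a VM's datastore directory."""
    # Check longer suffixes first so "-flat.vmdk" wins over ".vmdk".
    for suffix, role in sorted(VM_FILE_ROLES.items(),
                               key=lambda kv: -len(kv[0])):
        if filename.endswith(suffix):
            return role
    return "unknown file type"

print(describe_vm_file("web01.vmx"))        # VM configuration file
print(describe_vm_file("web01-flat.vmdk"))  # virtual disk data (the actual blocks)
```

The `web01` VM name is just an example; every VM's directory contains its own copies of these files.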

            Figure: Thanks to VMware


VMFS

Virtual Machine File System (VMFS) version 5 is a clustered file system that stores VM files and allows multiple ESXi hosts concurrent access to shared storage. It enables the following unique services:

  • Migration of VMs from one host to another without downtime
  • A failed VM will be automatically restarted on a different host.
  • Clustering of VMs across several physical servers

A VMFS datastore can be grown up to 62 TB while the VMs residing on it are powered on and running. VMFS5 uses a 1 MB block size for storing large virtual disk files and an 8 KB sub-block size for storing small files.
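To make those block sizes concrete, here is a minimal sketch of how much on-disk space a file consumes under a two-tier block scheme. This is illustrative only, not VMware's actual allocator (real VMFS5 has additional thresholds, such as storing very small files directly in metadata):

```python
# Simplified VMFS5-style allocation (illustrative, not VMware's code).
BLOCK = 1 * 1024 * 1024   # 1 MB file block for large files
SUB_BLOCK = 8 * 1024      # 8 KB sub-block for small files

def allocated_bytes(file_size: int) -> int:
    """Approximate space a file consumes under the two-tier scheme."""
    if file_size == 0:
        return 0
    if file_size <= SUB_BLOCK:
        # Small files fit in a single 8 KB sub-block.
        return SUB_BLOCK
    # Larger files are rounded up to whole 1 MB blocks.
    blocks = -(-file_size // BLOCK)  # ceiling division
    return blocks * BLOCK

print(allocated_bytes(3 * 1024))      # 8192     -> one 8 KB sub-block
print(allocated_bytes(10 * 1024**2))  # 10485760 -> ten 1 MB blocks
```

The point of the sub-block tier is visible in the first call: a 3 KB file costs 8 KB instead of a full 1 MB block.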

               Figure: Thanks to VMware


NFS

Network File System (NFS) is a file-sharing protocol that ESXi hosts use to communicate with a NAS device. NAS is a specialized storage device that connects to a network and provides file access services to ESXi hosts.

NFS datastores are treated like VMFS datastores in that they can be used to store VM files, templates, and ISO images. vMotion is also supported for VMs whose files are stored on NFS datastores. The NFS client built into the ESXi host communicates with the NFS server using NFS version 3 (vSphere 6.0 also adds support for NFS 4.1).

              Figure: Thanks to VMware


Virtual SAN (vSAN)

Virtual SAN is a hypervisor-converged, software-defined storage platform that is fully integrated with VMware vSphere. Virtual SAN aggregates the locally attached disks of hosts that are members of a vSphere cluster (3–64 hosts per cluster) to create a distributed shared storage solution. Virtual SAN enables the rapid provisioning of storage within VMware vCenter as part of virtual machine creation and deployment operations.

Virtual SAN is the first policy-driven storage product designed for vSphere environments that simplifies and streamlines storage provisioning and management. Using VM-level storage policies, Virtual SAN automatically and dynamically matches requirements with the underlying storage resources. With Virtual SAN, many manual storage tasks are automated, delivering a more efficient and cost-effective operational model.

Virtual SAN 6.0 provides two different configuration options: a hybrid configuration that leverages both flash-based devices and magnetic disks, and an all-flash configuration. The hybrid configuration uses server-based flash devices to provide a cache layer for optimal performance while using magnetic disks to provide capacity and persistent data storage. This delivers enterprise performance and a resilient storage platform. The all-flash configuration uses flash for both the caching layer and the capacity layer.
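In either configuration, how much raw capacity a VM consumes follows from its storage policy. As a rough sketch of the default RAID-1 mirroring behavior (illustrative only; witness components and metadata overhead are ignored here):

```python
# Illustrative vSAN capacity math for RAID-1 mirroring.
# With NumberOfFailuresToTolerate = ftt, vSAN keeps ftt + 1 full
# replicas of each object (witness/metadata overhead ignored).
def vsan_mirror_consumption_gb(vmdk_gb: float, ftt: int = 1) -> float:
    """Raw capacity consumed by a virtual disk under RAID-1 mirroring."""
    if ftt < 0:
        raise ValueError("ftt must be >= 0")
    return vmdk_gb * (ftt + 1)

print(vsan_mirror_consumption_gb(100))         # 200.0 -> FTT=1 doubles consumption
print(vsan_mirror_consumption_gb(100, ftt=2))  # 300.0 -> three replicas
```

This is why a cluster's raw capacity is not its usable capacity: at the default FTT=1, each 100 GB virtual disk consumes roughly 200 GB of raw capacity across the cluster.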

VMware recently released the latest version, vSAN 6.6, with new features and enhancements.

                                              Figure: Thanks to VMware


Virtual Volumes (VVols)

VMware Virtual Volumes (VVols) are encapsulations of VMs and their virtual disks, stored natively inside a storage system that is connected via SAN or Ethernet. VVols are created automatically when virtual machine management operations are performed.

VVols provide:

  • Reduced storage management overhead
  • Greater scalability
  • Lower cost of storage
  • Better responsiveness to data-access and analytics requirements

You can find a detailed overview written by Mohammed Raffic here.

                       Figure: Thanks to VMware

Raw Device Mapping

Raw Device Mapping (RDM) is a file stored in a VMFS volume that acts as a proxy for a raw physical device. RDM enables you to store virtual machine data directly on a LUN. RDM is recommended when a VM must interact with a real disk on the SAN. This condition exists when you take disk array snapshots or have a large amount of data that you don’t want to move onto a virtual disk as part of a physical-to-virtual conversion.


                                     Figure: Thanks to VMware

I’ve tried to cover most of the concepts needed to understand virtual storage. I hope you enjoyed reading this post; if you have any queries, feel free to comment below. Thanks for reading, and share this post on social media if you found it worth sharing. Be friendly and sociable.
