

All You Need to Know About VMware VMFS
VMware is one of the leaders in the virtualization software market. VMware vSphere is the main virtualization platform for data centers and provides a wide range of enterprise features to run virtual machines (VMs). To provide reliable, effective storage that is compatible with VMware vSphere features, VMware has created its own file system called VMFS. This blog post explains VMware VMFS features, how these features work with other vSphere features, and the advantages of VMFS to store VM files and run VMs.
What is VMFS
Virtual Machine File System (VMFS) is a cluster file system that is optimized to store virtual machine files, including virtual disks in VMware vSphere, for the most effective storage virtualization. VMFS is a high-performance reliable proprietary file system that is designed to run virtual machines (VMs) in a scalable environment - from small to large and extra-large datacenters. VMware vSphere VMFS functions as a volume manager and allows you to store VM files in logical containers called VMFS datastores.
The VMFS file system can be created on SCSI-based disks (directly attached SCSI and SAS disks) and on block storage accessed via iSCSI, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE). VMFS operates on disks attached to ESXi servers but not on computers running VMware Workstation or VMware Player.
VMFS Versions
VMware VMFS has evolved significantly since the release of the first version. Here is a short overview of VMFS versions to track the main changes and features.
VMFS 1 was used for ESX Server 1.x. This version of VMware VMFS didn't support clustering features and could be used by only one server at a time; concurrent access by multiple servers was not supported.
VMFS 2 was used on ESX Server 2.x and sometimes on ESX 3.x. VMFS 2 didn't have a directory structure.
VMFS 3 was used on ESXi Server 3.x and ESXi Server 4.x in vSphere. Support for directory structure was added in this version. The maximum file system size is 50 TB. The maximum logical unit number (LUN) size is 2 TB. ESXi 7.0 doesn’t support VMFS 3.
VMFS 5 is used starting from VMware vSphere 5.x. The maximum volume (file system) size was increased to 64 TB, and the maximum VMDK file size was increased to 62 TB for VMFS 5. However, ESXi versions earlier than 5.5 support a maximum VMDK virtual disk size of 2 TB. Support for the GPT partition layout was added, and both GPT and MBR are supported (previous VMFS versions support only MBR).
VMFS 6 was released in vSphere 6.5 and is used in vSphere 6.7, vSphere 7.0, and newer versions such as vSphere 7.0 Update 3.
VMFS 5 vs VMFS 6
Let’s compare the most recent versions of VMware VMFS – VMFS 5 and VMFS 6. These two VMFS versions are widely used in organizations with VMware vSphere environments. VMFS 6 was redesigned significantly to meet the most modern requirements of virtualization. Before we look at the comparison table, let me explain some acronyms and terms used in the table.
A Logical Unit Number (LUN) is used to identify a logical unit on a SCSI-based disk by using an addressing scheme: Bus > Address (ID) > LUN. A LUN is a defined amount of storage space on a disk or disk array, presented as a block storage device accessed over SCSI. LUNs are logical devices created on the storage system side and allow you to address multiple devices behind a single address. A LUN can occupy an entire SCSI-based storage array or a physical disk drive, and a disk drive can contain multiple LUNs. Partitions and VMFS datastores (VMFS volumes) are created on a LUN to store files. Creating more than one VMFS datastore on a single LUN is not recommended and cannot be done in the web interface of VMware vSphere Client and VMware Host Client. The term LUN is often used interchangeably with the terms disk or drive.
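As a quick illustration, you can list the storage devices (LUNs) that an ESXi host sees and inspect their partition tables from the ESXi shell. This is a minimal sketch; the naa device identifier below is a placeholder, not a device from this article.
# List all block storage devices (LUNs) visible to the ESXi host
esxcli storage core device list
# Show the partition table of a specific device (the naa identifier is a placeholder)
partedUtil getptbl /vmfs/devices/disks/naa.60003ff44dc75adc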
512n. Traditionally, hard disk drives (HDDs) used 512-byte physical sectors. When physical and logical sectors are aligned, no additional manipulations are required. The 512-byte sector size is referred to as the legacy sector size.
512e. Later, storage vendors introduced the Advanced Format for the disks that they produce and increased the sector size to 4 KB. Increasing the sector size lets vendors use less space to store service information for each sector (Gap, Error Correction Code (ECC), Sync, Address Mark) and, as a result, improves efficiency for large disk drives (4 TB and more). This design accounts for the shrinking physical size of sectors on magnetic platters and the need to preserve efficient error correction.
To preserve compatibility with existing hardware and software (including an operating system), 512-byte sectors are emulated by physical 4-KB sectors on hard disk drives and solid-state drives (SSD). The disadvantage is that older operating systems don’t support sector alignment for disks with Advanced Format.
4Kn. 4K-native disk drives don’t emulate 512-byte sectors. The size of both physical and logical sectors is 4096 bytes. Hardware (including storage controllers, like a RAID controller) and software (an operating system or hypervisor, device drivers, and a file system) working with 4Kn disk drives must support native 4K sectors. This rule is also true for VMware ESXi and VMFS. Starting from v6.7, VMware vSphere supports 4Kn disk drives. The advantage of using 4Kn disk drives is that there is no overhead to emulate 512-Byte sectors, and, as a result, performance is slightly increased.
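If you want to check which sector format (512n, 512e, or 4Kn) your devices use, recent ESXi versions can report the logical and physical block sizes per device. This is a hedged sketch; the exact availability of the command depends on the ESXi version.
# Show logical and physical block sizes and the resulting format type for each device (ESXi 6.5 and later)
esxcli storage core device capacity list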
Master Boot Record (MBR) is a partition table format used for disk drives that are no larger than 2.2 TB. MBR supports up to four primary partitions on a disk.
GUID Partition Table (GPT) is a new partition table format that supports creating partitions larger than 2 TB and allows you to create more than four primary partitions.
Raw Device Mapping (RDM) is a feature that allows you to attach a physical storage device or LUN to a VM directly.
Comparison of features in VMFS 5 and VMFS 6:
- Access for ESXi 6.0 and 5.x: VMFS 5 – Yes; VMFS 6 – No
- Access for ESXi 6.5 and later: VMFS 5 – Yes; VMFS 6 – Yes
- 512e storage devices: VMFS 5 – Yes, but not supported on local disks; VMFS 6 – Yes, by default
- 512n storage devices: VMFS 5 – Yes; VMFS 6 – Yes, by default
- 4Kn storage devices: VMFS 5 – No; VMFS 6 – Yes
- Datastores per ESXi host: VMFS 5 – 512; VMFS 6 – 512
- MBR partitioning scheme: VMFS 5 – Yes; VMFS 6 – No
- GPT partitioning scheme: VMFS 5 – Yes; VMFS 6 – Yes
- Manual space reclamation in ESXCLI: VMFS 5 – Yes; VMFS 6 – Yes
- Automatic space reclamation: VMFS 5 – No; VMFS 6 – Yes
- Space reclamation from a guest OS: VMFS 5 – Limited; VMFS 6 – Yes
- Snapshot mechanisms: VMFS 5 – VMFSsparse for virtual disks smaller than 2 TB, SEsparse for virtual disks larger than 2 TB; VMFS 6 – SEsparse
- Block size: VMFS 5 – 1 MB; VMFS 6 – 1 MB
- Virtual disk emulation type: VMFS 5 – 512n; VMFS 6 – 512n
- RDM: VMFS 5 – Yes (max 62 TB); VMFS 6 – Yes (max 62 TB)
A detailed explanation of the features used in VMFS 5 and VMFS 6 is provided below.
VMFS Features
VMware VMFS is optimized to store big files because VMDK virtual disks typically consume a large amount of storage space. A VMFS datastore is a logical container using the VMFS file system to store files on a block-based storage device or LUN. A datastore runs on top of a volume. A VMFS volume can be created by using one or multiple extents. Extents rely on the underlying partitions.
VMware VMFS block size
VMFS 5 and VMFS 6 use a 1-MB block size. The block size has an impact on the maximum file size and defines how much space the file occupies. You cannot change the block size for VMFS 5 and VMFS 6.
VMware utilizes sub-block allocation for small directories and files with VMFS 6 and VMFS 5. Sub-blocks help save storage space when files smaller than 1 MB are stored so that there is no need to occupy the entire 1-MB block. The size of a sub-block is 64 KB for VMFS 6 and 8 KB for VMFS 5.
VMFS 6 introduces a new concept of small file blocks and large file blocks. Don't confuse small file blocks with the default 1-MB blocks. The size of small file blocks (SFB) in VMFS 6 is 1 MB. VMFS 6 can also use large file blocks (LFB), which are 512 MB in size, to improve performance when creating large files. LFBs are primarily used to create thick provisioned disks and swap files. The portions of a thick provisioned disk that don't fill an entire LFB are placed on SFBs. SFBs are used for thin provisioned disks.
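To see the file block size and related VMFS attributes of an existing datastore, you can query it with vmkfstools from the ESXi shell (the datastore name below is just an example):
# Print VMFS attributes, including the file block size and maximum supported file size
vmkfstools -Ph /vmfs/volumes/datastore10a/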
File fragmentation
Fragmentation is when the blocks of one file are scattered across the volume, and there are gaps between them. The gaps can be empty or occupied by blocks that belong to other files. Fragmented files slow read and write disk performance. Restoring performance requires defragmentation, which is the process of reorganizing pieces of data stored on a disk to locate them together (put the blocks used by a file continuously one after another). This allows the heads of an HDD to read and write the blocks without extra movements.
VMware VMFS is not prone to significant file fragmentation. Fragmentation is not relevant to the performance of VMFS because large blocks are used. The VMware VMFS block size is 1 MB, as mentioned above. For example, Windows uses 4-KB blocks for the NTFS file system, which should be defragmented periodically when located on hard disk drives. Most of the files stored on a VMFS volume, though, are large files – virtual disk files, swap files, installation image files. If there is a gap between files, the gap is also large, and when a hard disk drive seeks multiple blocks used to store a file, this impact is negligible. In fact, a VMFS volume cannot be defragmented and there is no need for that.
Don’t run defragmentation in a guest operating system (OS) for disks used by the guest OS. Defragmentation from a guest OS doesn’t help because storage performance for a VM depends on the input/output (I/O) intensity on the physical storage array where multiple VMs (including virtual disks that are VMDK files) are stored and can utilize this storage array with different I/O loads. Moreover, if you start to defragment partitions located on thin provisioned disks from a guest OS, blocks are moved around, storage I/O load increases, and the size of these thin disks increases. Defragmentation for linked clone VMs and VMs that have snapshots causes an increase of redo logs, which occupy more storage space as a result. If you back up VMware VMs with a solution that relies on Changed Block Tracking, defragmentation increases the number of changed blocks as well, and backup time is increased because more data needs to be backed up. Defragmentation from a guest OS has a negative impact when running Storage vMotion to move a VM between datastores.
Datastore extents
A VMFS volume resides on one or more extents. Each extent occupies a partition, and the partition in turn is located on the underlying LUN. Extents provide additional scalability for VMFS volumes. When you create a VMFS volume, you use at least one extent. You can add more extents to an existing VMFS volume to expand the volume. Extents are different from RAID 0 striping: added extents are concatenated to the volume rather than striped.
If you suspect that one of the attached extents has gone offline, you can identify which extent of a volume is affected by running the following command:
vmkfstools -Ph /vmfs/volumes/iscsi_datastore/
The result displays the SCSI identifier (NAA id) of the problematic LUN.
If one of the extents fails, the VMFS volume can continue to stay online. But if a virtual disk of a VM has at least one block on the failed extent, the VM virtual disk becomes inaccessible.
If the first extent used by a VMFS volume goes offline, the entire VMFS datastore becomes inactive because address resolution resources are located on the first extent. Therefore, use VMFS extents to create and expand VMFS volumes only if there is no other way to expand a volume.
Regularly back up VMware vSphere VMs to protect VM data and avoid possible issues caused by storing VM files on VMFS volumes with multiple extents.
Journal logging
VMFS uses an on-disk distributed journal to update metadata on a file system. After creating a VMFS file system, VMware VMFS allocates storage space to store journal data. Journaling is used to track changes that have not been committed to the file system yet.
Journaling the changes written to file system metadata makes it more likely that you can recover the latest version of a file after an unexpected shutdown or crash. The journal helps replay changes made since the last successful commit to reconstruct VMFS file system data. A journaling file system doesn't require running a full file system check after a failure to verify data consistency because the journal can be checked instead. There are .sf files in the root of a VMFS volume that store VMFS file system metadata. Each ESXi host connected to the VMFS datastore can access this metadata to know the status of each object on the datastore.
VMFS metadata contains file system descriptors: block size, volume capacity, number of extents, volume label, VMFS version, and VMFS UUID. VMFS metadata can be helpful for VMFS recovery.
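You can see these metadata (system) files from the ESXi shell; a minimal example, with the datastore name as a placeholder:
# List the VMFS system files (for example .fbb.sf, .fdc.sf, .pbc.sf, .sbc.sf, .vh.sf) in the datastore root
ls -lh /vmfs/volumes/datastore10a/*.sf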
Directory structure
When a VM is created, all VM files, including VMDK virtual disk files, are located in a single directory on a datastore. The directory name is identical to the VM name. If you need to store a particular VMDK file in another location (for example, on another VMFS datastore), you can copy the VMDK file manually and then attach the virtual disk in the VM settings. This structured layout simplifies backup and disaster recovery: copying the contents of a VM's directory is enough to back up the VM and recover it if data on the original VM is lost.
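For example, listing a VM's directory on a datastore shows the files that would need to be copied for a manual backup (the datastore and VM names below are the examples used later in this article):
# Typical contents: .vmx configuration, .vmdk descriptors and -flat.vmdk data files, .nvram, .vmsd, and .log files
ls -lh /vmfs/volumes/datastore10a/Win-VM/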
Thin provisioning
Thin provisioning is a VMFS feature that optimizes storage utilization and helps save storage space. You can set thin provisioning at the virtual disk level (for a particular virtual disk of a VM). The size of a thin provisioned virtual disk grows dynamically as data is written to it. The advantage of thin disks is that they consume only as much storage space as they actually need at any moment in time. For example, you create a thin provisioned virtual disk whose maximum size is 50 GB, but only 10 GB of storage space is used on this virtual disk. The size of the virtual disk data file (*-flat.vmdk) is 10 GB in this case. The guest OS detects that the maximum size of the disk is 50 GB and displays the used space as 10 GB.
You can see that thin provisioning relies on the VMFS file system by copying a thin provisioned virtual disk (the .vmdk descriptor and -flat.vmdk data file) to a local disk formatted with the NTFS or ext4 file system. After copying, the virtual disk file takes up the full provisioned size rather than the actual space the thin provisioned disk occupied on the VMFS datastore.
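As a hedged illustration, you can create a thin provisioned virtual disk from the ESXi shell and compare its provisioned size with the space actually consumed on the VMFS datastore; all paths and sizes below are examples only:
# Create a 50-GB thin provisioned virtual disk (descriptor + -flat.vmdk data file)
vmkfstools -c 50g -d thin /vmfs/volumes/datastore10a/Win-VM/data-disk.vmdk
# The apparent file size equals the provisioned size...
ls -lh /vmfs/volumes/datastore10a/Win-VM/data-disk-flat.vmdk
# ...while the space actually allocated on the datastore is much smaller
du -h /vmfs/volumes/datastore10a/Win-VM/data-disk-flat.vmdk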
Note: VMware vSphere also supports creating datastores, including shared datastores on the NFS file system with support for thin provisioning.
Free space reclamation
Automatic space reclamation (automatic SCSI UNMAP) from VMFS 6 and guest operating systems allows storage arrays to reclaim unmapped or deleted disk blocks from a VMFS datastore. In VMware vSphere 6.0 and VMFS 5, space reclamation was done manually with the esxcli storage vmfs unmap command.
Space reclamation fixes the issue where a file is deleted in the file system but the underlying storage doesn't know that the file was deleted, so the corresponding physical storage space (blocks on a disk) is never freed up. This feature is especially useful for thin provisioned disks. When a guest OS deletes files inside a thin virtual disk, the amount of used space on this disk is reduced and the file system no longer uses the corresponding blocks. In this case, the file system tells the storage array that these blocks are now free, the storage array deallocates the selected blocks, and these blocks can be used to write other data.
Let’s have a closer look at how data is deleted in storage when using virtualization and virtual machines. Imagine that there is a VM that has a guest OS using a virtual disk with a file system such as NTFS, ext4, or another file system. The thin provisioned virtual disk is stored on a datastore that has a VMFS file system. The VMFS file system is using the underlying partition and LUN located on a storage array.
- A file is deleted in the guest OS that operates with a file system (NTFS, for instance) on a virtual disk.
- The guest OS initiates UNMAP.
- The virtual disk on the VMFS datastore shrinks (the size of the virtual disk is reduced).
- ESXi initiates UNMAP to the physical storage array.
UNMAP is issued by ESXi with an attached VMFS datastore when a file is deleted or moved from the VMFS datastore (VMDK files, snapshot files, swap files, ISO images, etc.), when a partition is shrunk from a guest OS, and when a file size inside a virtual disk is reduced.
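Below is a minimal, hedged sketch of the related commands run from the ESXi shell. The datastore names are the examples used in this article, and the exact options can vary between ESXi versions:
# Manually reclaim free space on a VMFS 5 datastore (one reclaim pass)
esxcli storage vmfs unmap --volume-label=datastore11
# Check the automatic space reclamation settings of a VMFS 6 datastore
esxcli storage vmfs reclaim config get --volume-label=datastore10a
# Change the reclaim priority (for example, none disables automatic reclamation, low enables it)
esxcli storage vmfs reclaim config set --volume-label=datastore10a --reclaim-priority=low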
Automatic UNMAP for VMware VMFS 6 starting from ESXi 6.5 is asynchronous. Free space reclamation doesn’t happen immediately, but the space is eventually reclaimed without user interaction. Asynchronous UNMAP has some advantages:
- Avoiding the instant overloading of a hardware storage array because UNMAP requests are sent at a constant rate.
- Regions that must be freed up are batched and unmapped together.
- There is no negative impact on input/output performance and other operations.
How did UNMAP work in previous ESXi versions?
- ESXi 5.0 – UNMAP is automatic and synchronous
- ESXi 5.0 Update 1 – UNMAP is performed with vmkfstools in the command line interface (CLI)
- ESXi 5.5 and ESXi 6.0 – Manual UNMAP was improved and is run in ESXCLI
- ESXi 6.0 – The EnableBlockDelete setting allows VMFS to issue UNMAP automatically when VMDK virtual disk files are shrunk as a result of in-guest UNMAP.
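In-guest reclamation requires the guest file system and the virtual hardware to support UNMAP/TRIM. As a hedged example, on a Linux guest you can trigger it manually; the mount point below is a placeholder:
# Inside a Linux guest OS: discard unused blocks on the root file system so ESXi can shrink the thin disk
fstrim -v /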
Snapshots and sparse virtual disks
You can make VM snapshots in VMware vSphere to save the current VM state and the state of virtual disks. When you create a VM snapshot, a virtual disk snapshot file is created on the VMFS datastore (a -delta.vmdk file). The snapshot file is called a delta-disk or child disk, which represents the difference between the current state and the previous state when you took the snapshot. On the VMFS datastore, the delta disk is the sparse disk that uses the copy-on-write mechanism to save storage space when writing new data after creating a snapshot. There are two types of sparse format depending on the configuration of the underlying VMFS datastore: VMFSsparse and SEsparse.
VMFSsparse is used for VMFS 5 and virtual disks smaller than 2 TB. This snapshot technique works on top of VMFS: the redo log is empty when it is created and grows as data is written after the snapshot is taken.
SEsparse is used for virtual disks larger than 2 TB on VMFS 5 and for all virtual disks on VMFS 6. This format is based on the VMFSsparse format but has a set of enhancements, such as support for space reclamation, which allows the ESXi hypervisor to UNMAP unused blocks after a guest OS deletes data or after a snapshot file is deleted.
Note: In ESXi 6.7 with VMFS 6, UNMAP for SEsparse disks (snapshot disks for thin provisioned disks) is started automatically once there is 2 GB of dead space (data is deleted but not reclaimed) on the VMFS file system. If you delete multiple files from the guest OS, for example, four 512-MB files, the asynchronous UNMAP is started. You can see live UNMAP statistics in esxtop by pressing v to enable the VM view, then pressing f to select the field order, and pressing L to display UNMAP stats. The default value is 2 GB, though you can change it in the CLI. In ESXi 7.0 U3, the maximum granularity reported by VMFS is 2 GB.
Raw Device Mapping
The integration of Raw Device Mapping (RDM) disks into the VMware VMFS structure gives you more flexibility when working with storage for VMs. There are two RDM compatibility modes in VMware vSphere.
RDM disks in virtual compatibility mode. A VMDK mapping file is created on a VMFS datastore (*-rdm.vmdk) to map a physical LUN on the storage array to a virtual machine. Mapping physical storage to a VM with this method has the following characteristics.
Primary storage management operations, such as Open and other SCSI commands, are passed through the virtualization layer of the ESXi hypervisor, but Read and Write commands are sent directly to the storage device, bypassing the virtualization layer.
This means that the VM works with the mapped RDM SCSI disk much like with a regular virtual disk, and most vSphere features, such as snapshots, are available.
RDM disks in physical compatibility mode. An ESXi host creates a mapping file on a VMFS datastore (*-rdmp.vmdk), but SCSI commands are passed to the LUN device directly, bypassing the hypervisor's virtualization layer (except the REPORT LUNs command). This is a less virtualized disk type, and VMware snapshots are not supported.
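RDM mapping files can also be created manually with vmkfstools on the ESXi host. This is a minimal sketch; the LUN device path, datastore, and file names below are placeholders:
# Create an RDM mapping file in virtual compatibility mode (-r)
vmkfstools -r /vmfs/devices/disks/naa.60003ff44dc75adc /vmfs/volumes/datastore10a/Win-VM/rdm-disk.vmdk
# Create an RDM mapping file in physical compatibility mode (-z)
vmkfstools -z /vmfs/devices/disks/naa.60003ff44dc75adc /vmfs/volumes/datastore10a/Win-VM/rdmp-disk.vmdk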
Clustering features
Clustering and concurrent access to the files on a datastore are other great features of VMware VMFS. Unlike conventional file systems, VMware VMFS allows multiple servers to read and write data to files at the same time. A locking mechanism allows multiple ESXi hosts to access VM files concurrently without data corruption: a lock is placed on each VMDK file to prevent two VMs or two ESXi hosts from writing data to the same opened VMDK file simultaneously. VMware supports two file locking mechanisms in VMFS for shared storage.
Atomic test and set (ATS) only is used for storage devices that support the T10 standard vStorage APIs for Array Integration (VAAI) specifications. This locking mechanism is also called hardware-assisted locking. The algorithm uses discrete locking per disk sector. By default, all new datastores formatted with VMFS 5 and VMFS 6 use the ATS-only mechanism if the underlying storage supports it, and don't use SCSI reservations. For datastores created on multiple extents, only ATS is used, and vCenter Server filters out non-ATS storage devices.
ATS + SCSI reservations. If ATS fails, SCSI reservations are used. Unlike ATS, SCSI reservations lock the entire storage device whenever an operation that modifies metadata requires protection. After this operation is finished, VMFS releases the reservation so that other operations can continue. Datastores that were upgraded from VMFS 3 continue to use the ATS+SCSI mechanism.
VMware VMFS 6 supports sharing a VM virtual disk file (VMDK) with up to 32 ESXi hosts in vSphere.
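You can check whether a storage device supports ATS (hardware-assisted locking) and inspect the lock state of a file from the ESXi shell. A hedged sketch with placeholder device and file names:
# Show VAAI primitive support for a device; the ATS Status field indicates hardware-assisted locking support
esxcli storage core device vaai status get -d naa.60003ff44dc75adc
# Dump lock and ownership information for a virtual disk file
vmkfstools -D /vmfs/volumes/datastore10a/Win-VM/Win-VM-flat.vmdk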
Support for vMotion and Storage vMotion
VMware vMotion is a feature used for the live migration of VMs between ESXi hosts (CPU, RAM, and network components of VMs are migrated) without interrupting their operation. Storage vMotion is a feature to migrate VM files, including virtual disks, from one datastore to another without downtime even if the VM is in the running state. The VMFS file system is one of the main elements enabling live migration to work because more than one ESXi host reads/writes data from/to the files of the VM that is being migrated.
Support for HA and DRS
Distributed Resource Scheduler (DRS), High Availability (HA), and Fault Tolerance work on the basis of the VMFS file locking mechanism, live migration, and clustering features. When HA is enabled, a failed VM is automatically restarted on another ESXi host, and when DRS is used, VM live migration is initiated to balance the cluster. You can use HA and DRS together.
Support for Storage DRS. VMFS 5 and VMFS 6 datastores can be used in the same datastore cluster to migrate VM files between datastores, but use homogeneous storage devices for VMware vSphere Storage DRS.
Increasing VMFS volumes
You can increase the size of a VMFS datastore while VMs are running and using VM files located on that datastore. The first method is to increase the size of the LUN used by your existing datastore. The LUN is grown on the storage system side (not in vSphere). Then you can extend the partition and increase the VMFS volume.
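This is typically done in VMware vSphere Client or VMware Host Client. For reference, a hedged CLI sketch of the same workflow on the ESXi host is shown below; the device identifier and partition number are placeholders, and the partition must be extended first (for example, with partedUtil) before the file system is grown:
# Rescan the storage adapters so ESXi detects the new LUN size
esxcli storage core adapter rescan --all
# Check the current partition table of the device that backs the datastore
partedUtil getptbl /vmfs/devices/disks/naa.60003ff44dc75adc
# After the partition has been extended, grow the VMFS file system to use the added space
vmkfstools --growfs /vmfs/devices/disks/naa.60003ff44dc75adc:1 /vmfs/devices/disks/naa.60003ff44dc75adc:1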
You can also increase a VMFS volume by aggregating multiple disks or LUNs together. In this case, VMFS extents are added to increase the VMFS volume. Extended datastores that use multiple disks are also called spanned datastores. Homogeneous storage devices must be used. For example, if the first storage device used by a datastore is 512n, then newly added storage devices must also be 512n block devices. This feature can help bypass the maximum LUN limit when the maximum supported datastore size is higher than the maximum LUN size.
Example: there is a 2-TB limit for a LUN, and you need to create a VM with a 3-TB virtual disk on a single datastore. Using two extents, each 2 TB in size, allows you to resolve this issue. You must use the GPT partitioning scheme to create a partition and datastore larger than 2 TB.
Decreasing VMFS volumes
Reducing a VMFS volume is not supported. If you want to reduce the VMFS volume size, you need to migrate all files from the VMFS volume you want to reduce to a different VMFS datastore. Then you need to delete the datastore you want to reduce and create a new VMFS volume with a smaller size. When a new smaller datastore is ready on the created volume, migrate VM files to this new datastore.
VMFS Datastore upgrade
You can upgrade VMFS 3 to VMFS 5 directly, without migrating VM files and creating a new VMFS 5 datastore. The VMFS 3 to VMFS 5 upgrade can be performed on the fly while VMs are running, without the need to power off or migrate the VMs. After the upgrade, the datastore retains the characteristics of the original VMFS 3 datastore. For example, the original file block size and the 64-KB sub-block size are kept instead of the unified 1-MB block size and 8-KB sub-blocks of newly created VMFS 5 datastores, and MBR partitioning is preserved for partitions not larger than 2 TB.
However, upgrading VMFS 5 and older VMFS datastores to VMFS 6 directly is not supported. You need to migrate files from the datastore that you are going to upgrade to a safe location, delete the VMFS 5 datastore, create a new VMFS 6 datastore, and then copy the files back to the new VMFS 6 datastore.
If you upgrade ESXi to ESXi 6.5 or later, you can continue to use VMFS 3 and VMFS 5 datastores created before the ESXi upgrade. You cannot create VMFS 3 datastores on ESXi 6.5 and later ESXi versions.
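To check which VMFS version each datastore uses before planning an upgrade or migration, you can list the mounted file systems on the ESXi host:
# List mounted file systems with their type (VMFS-5 or VMFS-6), UUID, and capacity
esxcli storage filesystem list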
How to Mount VMFS in Linux
If a hardware failure occurs, you may need to mount a VMware VMFS file system on a Linux machine to copy VM data for disaster recovery if you don’t have the ability to mount disks with VMFS datastores to another ESXi server. Examples of hardware failures are a broken motherboard on an ESXi server or a damaged storage controller such as a RAID controller. If you use separate SCSI or SAS (Serial Attached SCSI) disks or RAID 1 as directly attached storage, you can attach disks to another machine that has a SAS controller installed without additional manipulations. If you use RAID 10, RAID 0, or other array types, you need to use an identical RAID controller and install drivers on a Linux machine to detect the RAID volume with the attached disks.
Note: RAID 1 and RAID 10 are the most reliable RAID options, but RAID 1 is the easiest to recover. Using RAID 5 and RAID 6 has many disadvantages and low reliability. Using non-RAID disks in production environments is not recommended.
In my example, I have an ESXi host with three datastores, each located on a separate disk for demonstration purposes.
Datastore000 is empty. The disk on which this datastore is located is a system disk that contains ESXi system partitions. ESXi is installed on this disk.
Datastore10a is located on a VMFS 6 volume and contains a Windows VM.
Datastore11 is located on a VMFS 5 volume and contains a copy of the Windows VM that is called Win-VM.
Ubuntu 20.04.3 is the Linux machine on which I am going to mount the VMware VMFS file systems. I attach the two disks on which datastore11 and datastore10a are located to the Linux machine. Linux distributions don't include the driver required to work with VMFS. For this reason, you need to install vmfs-tools, a free package; after that, VMFS can be mounted in read-only mode.
How to mount VMFS 5 in Ubuntu
Run commands as root. Use sudo -i to get the root privileges that are required to install VMFS tools.
Install vmfs-tools from Ubuntu package repositories:
apt-get install vmfs-tools
The installed version of vmfs-tools is 0.2.5-1build1 in my case.
Create a directory that will be used as a mount point:
mkdir /mnt/vmfs
Check the names of disks and partitions with VMFS:
fdisk -l
My disk with the VMFS 5 partition is /dev/sdb and the needed partition is /dev/sdb1
The VMFS 6 partition is /dev/sdc1
As you can see on the screenshot, the partition type is VMware VMFS. The unique disk identifier is displayed.
You can use parted to view GPT partitions that are bigger than 2 TB:
parted -l
Let’s mount our VMFS 5 partition to the /mnt/vmfs/ directory:
vmfs-fuse /dev/sdb1 /mnt/vmfs
How to mount VMFS 6 in Ubuntu
Create a directory to be used as a mount point to mount VMFS 6 in Linux:
mkdir /mnt/vmfs6
If you try to mount the VMFS 6 file system in Linux with vmfs-fuse, you get an error because vmfs-fuse supports VMFS 3 and VMFS 5 but doesn’t support VMFS 6. In this case, the following message appears:
VMFS: Unsupported version 6
Unable to open filesystem
You need to install vmfs6-tools, which contains vmfs6-fuse, the tool used to mount VMFS 6 in Linux. You can find vmfs6-tools on a website with deb packages: https://packages.debian.org/sid/vmfs6-tools
Download the current version of vmfs6-tools:
wget http://http.us.debian.org/debian/pool/main/v/vmfs6-tools/vmfs6-tools_0.1.0-3_amd64.deb
Install the downloaded deb package:
dpkg -i vmfs6-tools_0.1.0-3_amd64.deb
Be aware that libc6 >= 2.28 is required to install vmfs6-tools. If you use Ubuntu 18.04, you may encounter errors during installation for this reason.
Now you can mount VMFS in Ubuntu 20 to /mnt/vmfs6 with the command:
vmfs6-fuse /dev/sdc1 /mnt/vmfs6
The VMFS 6 file system has been successfully mounted in Ubuntu 20 in read-only mode. Now you can copy VM files to the needed location. You can temporarily run the copied VMs on a Linux machine with VMware Workstation installed or on a Windows machine with VMware Workstation or Hyper-V until your ESXi server hardware is repaired or a new server is delivered (if you don't have another ESXi host to run the VMs).
Read more about cross-platform recovery and backup export.
Remember that when you copy thin provisioned disks from a VMFS file system to ext4, NTFS, or other conventional file systems, the virtual disk files can take up as much space as if they were thick provisioned. So make sure you have enough free disk space in the target location.
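If the target file system supports sparse files (ext4 does), you can partially mitigate this by telling the copy tool to store runs of zero bytes as holes. A hedged example with placeholder paths; this does not change the provisioned size of the disk, only the space consumed on the target:
# Copy a flat virtual disk file and store zero-filled regions as holes on the target file system
cp --sparse=always /mnt/vmfs6/Win-VM/Win-VM-flat.vmdk /backup/Win-VM/
# rsync can also create sparse files on the destination (-S / --sparse)
rsync -aS /mnt/vmfs6/Win-VM/ /backup/Win-VM/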
How to mount VMFS with multiple extents in Linux
Let’s look at a more complex example of mounting VMFS in Linux when a VMFS volume consists of two extents. We have two disks of the same size that are combined into a single VMFS volume (datastore12).
Check the names of partitions:
fdisk -l
parted -l
My two VMFS extents are located on /dev/sdd1 and /dev/sde1 partitions.
When mounting a VMFS file system that consists of multiple extents, use vmfs6-fuse with the command of the following format:
vmfs6-fuse extent_1 extent_2 extent_n mount_point
In my case the command is:
vmfs6-fuse /dev/sdd1 /dev/sde1 /mnt/vmfs6
As you can see on the screenshot below, the VMFS 6 file system that consists of multiple extents has been successfully mounted in Ubuntu.
Manual VM recovery by copying files from a VMFS file system mounted to a healthy computer after a failure of an ESXi host can be time-consuming. You can protect data in a more efficient way if you use a professional backup solution that supports VM backup on the host level, thin provisioned disks, instant VM recovery, and granular recovery. NAKIVO Backup & Replication supports all these features and can help you protect your data reliably. Learn the best practices for disaster recovery in virtualized environments. Restoring VM data from a backup can be more effective than manual VMFS recovery.
Conclusion
This blog post covered the VMFS file system and explained the features of this cluster file system. VMware VMFS is a reliable, scalable, and optimized file system to store VM files. VMFS supports concurrent access by multiple ESXi hosts, thin provisioning, Raw Device Mapping, VM live migration, journaling, physical disks with Advanced Format including 512e and 4Kn, the GPT partitioning scheme, VM snapshots, free space reclamation, and other useful features. Due to the 1-MB block size, the latest VMFS versions are not prone to performance degradation caused by file fragmentation. Storing virtual machine files on VMFS datastores is the recommended way to store VMs in VMware vSphere.