Wednesday 30 October 2013

VMFS vs. RDM


I was researching which storage option works best when you build a clustered server on virtual machines, so I decided to dig in and pick the better of the two choices: Virtual Machine File System (VMFS) or Raw Device Mapping (RDM).

Raw Device Mapping 

With RDM, the VMkernel doesn't format the LUN; instead, the VM guest OS formats the LUN. Each RDM is a single VM hard disk and is usually attached to a single VM. An RDM takes the place of a VMDK file for a VM. This is where the VM's disk contents are stored. But this is not where the files that make up the VM are stored: These files need to be stored on a data store separate from the RDM.
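
To make the difference concrete, here is a minimal pyVmomi sketch that walks every VM's virtual disks and reports whether each one is an RDM or a plain VMDK, based on its backing type. The vCenter hostname, credentials and certificate handling below are placeholders for illustration, not anything from a real environment:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skips cert checks
si = SmartConnect(host="vcenter.example.com",     # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        b = dev.backing
        if isinstance(b, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            # RDM: the data lives on the raw LUN; only a small pointer
            # file (b.fileName) sits on a data store
            print(vm.name, dev.deviceInfo.label, "RDM",
                  b.compatibilityMode, b.fileName)
        elif isinstance(b, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
            # Ordinary VMDK stored on a VMFS (or NFS) data store
            print(vm.name, dev.deviceInfo.label, "VMDK", b.fileName)
Disconnect(si)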

RDMs are sometimes deployed based on the belief that they offer better performance, since there is less file-system overhead than with VMDK files on VMFS. But in some workloads, RDMs are actually a little slower than VMFS. If a VM needs top disk performance, dedicate an entire data store to that VM's VMDK file instead.

The biggest limitation of RDMs is that one LUN carries only one VM disk. With a data store, the same LUN could hold 20 VM disks as VMDK files. RDMs can therefore be very limiting, since an ESXi server can only handle 255 LUNs, and every host in a DRS and HA cluster must see the same LUNs.

Virtual Machine File System (VMFS)

A VMFS data store is the default way for the VMkernel to handle disks: the disk is partitioned and formatted by the VMkernel, and nothing but the VMkernel can read the disk, now called a data store. The advantage of VMFS is that a single disk -- a logical unit number (LUN) in storage-area network (SAN) terms -- can hold multiple virtual machines.
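
As a quick illustration, this short pyVmomi sketch (reusing the content connection object from the earlier example) lists each data store with its type, its capacity and how many VMs share it; the names it prints depend entirely on your environment:

ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in ds_view.view:
    s = ds.summary
    print(f"{s.name}: type={s.type} "
          f"capacity={s.capacity / 2**30:.0f} GiB "
          f"free={s.freeSpace / 2**30:.0f} GiB "
          f"vms={len(ds.vm)}")      # how many VMs share this LUN/data store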

How many virtual machines (VMs) to assign per LUN is an age-old debate, but an average number would be a dozen VMs sharing one data store. Essentially, a data store can hold multiple VMs and can hold all of the files that make up each VM. These files include the VMX file that lists the VM hardware configuration, the VMDK files that are the VM's hard disks and the other sundry files that make up the VM.
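
If you want to see those files for yourself, a VM's layout is exposed through the vSphere API. In this sketch the VM name "db01" is made up, and content again comes from the earlier connection:

def find_vm(content, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(v for v in view.view if v.name == name)

vm = find_vm(content, "db01")        # hypothetical VM name
for f in vm.layoutEx.file:
    # f.type is e.g. "config" (the .vmx), "diskDescriptor" and
    # "diskExtent" (the .vmdk pieces), "log", "nvram", "snapshotData"
    print(f"{f.type:<18} {f.size:>14} {f.name}")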

How to choose between VMFS and RDM

There are a few things that require RDMs in vSphere:

1.      Microsoft Failover Cluster Services. MSCS uses shared disks to build a cluster out of VMs on different ESXi hosts. The shared disks cannot be VMDK files; RDMs are required if your storage is Fibre Channel (a sketch of attaching one this way follows this list). Check VMware's guidance on MSCS in VMs, since it can be tricky to configure. Also, be sure you really need MSCS and that vSphere HA isn't enough.

2.      Storage-area network Quality of Service. For the SAN fabric to apply QoS to traffic from one VM -- not the whole ESXi server -- the VM must use a unique Fibre Channel ID via a feature called N_Port ID Virtualization (NPIV). NPIV only applies when the VM disk is an RDM.

3.      Managing some Fibre Channel storage from a VM. Some storage arrays are controlled using LUNs over the Fibre Channel network. To run the configuration software inside a VM, these control LUNs must be presented to the VM as RDMs. (This is not common; I've seen it only on high-end EMC storage.)

4.      Big VM disks. The largest VMDK file you can create is 2TB, but a single RDM (in physical compatibility mode) can be up to 64TB. You need to decide whether a VM with a huge disk is a good choice once you factor in the backup size and how long a restore would take.
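
For the MSCS case above, here is a hedged pyVmomi sketch of how a physical-mode RDM could be attached to a VM. The NAA device path is a placeholder, and it assumes the VM already has a SCSI controller (for MSCS across hosts that controller must also be set to physical bus sharing); treat it as an outline, not tested production code:

from pyVim.task import WaitForTask

def add_physical_rdm(vm, device_name):
    # Find an existing SCSI controller on the VM to hang the disk off
    ctrl = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualSCSIController))

    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.compatibilityMode = "physicalMode"   # required for MSCS across hosts
    backing.diskMode = ""                        # ignored in physical mode
    backing.deviceName = device_name             # the raw LUN itself

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = ctrl.key
    disk.unitNumber = 1                          # assumes SCSI 0:1 is free

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.fileOperation = "create"            # creates the RDM pointer file
    dev_spec.device = disk

    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec])))

# Hypothetical NAA identifier -- substitute the LUN you present to both nodes
add_physical_rdm(vm, "/vmfs/devices/disks/naa.600508b1001c3a1f")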
     
Using all RDMs means there is only room for 254 RDM VM disks, plus one data store for the VM files. With VMFS data stores, the 255 LUNs could hold thousands of VM disks.
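
The arithmetic behind that claim is simple enough to spell out (255 is the classic per-host LUN ceiling; a dozen VMDKs per data store is just the average assumed earlier):

MAX_LUNS = 255                      # per-ESXi-host LUN ceiling
all_rdm  = MAX_LUNS - 1             # 254 RDM disks + 1 data store for VM files
all_vmfs = MAX_LUNS * 12            # ~a dozen VMDKs per data store

print(all_rdm)                      # 254 VM disks in total
print(all_vmfs)                     # 3060 VM disks -- "thousands"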

The option to use an RDM may be necessary in some situations, but your default choice when possible should be to use VMFS and store VM disks in VMDK files.


Thanks for reading.

Comments:

  1. Performance is relative. I think administrators need to look at the bigger picture.
    For example, in the course of taking snapshots and clones in your virtual datacenter, what happens to the performance of the virtual environment?
    From what VMware tells me, I can get about the same performance whether I use an RDM or a VMDK. However, did you know that if I start using snapshots, I/O drops by about 50 percent with just 5 snapshots? This is really bad, and it has to do with how snapshots are architected. It should be no surprise, though: just run IOMeter against your cluster while taking snapshots and you will see your performance tank like the Titanic.
    There is actually a good video demo of this degradation. You can skip ahead to 23:30 to see the demo itself.
    https://www.youtube.com/watch?v=V6sLBTIiHv4&feature=youtu.be
    Rather than buy a physical SAN or NAS, you can just install Maxta MxSP as a virtual appliance. It aggregates the storage from each of your virtual hosts and presents it as a single NFS data store, eliminating the need for a physical SAN or NAS. The technology outperforms VSAN by over 50 percent. It is a hundred light years ahead of VSAN with respect to features and performance, plus it will support Hyper-V and KVM in the near future.
    Maxta is well worth a look; they are backed by Intel.
    Your friendly storage adviser,

    Virtual Ray
