Intro
Okay, you see the title, and I know what you’re thinking: ‘This guy’s about to defend a technology that belongs in a museum next to a dial-up modem.’ And you’re not wrong to be skeptical. For years, the industry has treated Raw Device Mappings (RDMs) like a fossil from the Jurassic period, a relic best left buried. But a funny thing happened on the way to the future. With the recent bombshell that vVols are being deprecated in VMware Cloud Foundation 9.0, the architectural landscape just got a major shake-up. As many of us prepare for a return trip to the land of VMFS, it’s the perfect time to dust off the RDM playbook and ask: in this new world, are they really that bad?
Let’s first attack the history of RDMs and why they were created in the first place. Their existence revolved around three major use cases. We’ll tackle the first two, which are no longer relevant, before we spend more time on the tricky subject that still makes them relevant today.
Reason 1: The 2TB Capacity Limit
One use case for RDMs was for capacity-bound workloads. There was a time in the vSphere 5.0 days when a VMDK could only reach 2TB in size. So if you had a large Windows File Server or were hosting an NFS share within a VM, you needed a way to expand capacity beyond 2TB. This was addressed in vSphere 5.5, which increased the VMDK maximum to its current 62TB limit. Because of this, using RDMs for capacity no longer makes sense, as the current maximum is more than enough for the vast majority of virtual machines.
Reason 2: The Performance Myth
There was a time, way back in the vSphere 4.0 days, when VMFS needed major tweaks for VMs that required significant storage I/O. The perception that RDMs are faster still exists today, but it’s based on ancient history. Numerous reports and tests have proven that modern and properly configured VMFS versions (like VMFS 5 & 6) perform on par with RDMs. Some VMware performance nerds will even tell you that VMFS can exceed RDM performance due to hypervisor-level features like intelligent I/O path management and caching. Because of this, the “performance” debate for RDMs is no longer relevant and is not a core reason why you would use them today.
Reason 3: The Sticky Situation with Clustered Databases
So that brings us to the third and most important reason RDMs are still around, and why they will likely stay in play for the time being. The third use case is for applications that cluster using a shared disk, which mainly applies to databases in the modern era.
There are databases that cluster using a shared-nothing approach, like Microsoft Always On Availability Groups or Cassandra. In these setups, data is replicated between node instances, but most importantly, they are independent copies.
Then there are databases that use a clustered disk, like Microsoft SQL on Failover Cluster Instances (FCIs) or Oracle RAC. In these configurations, a disk has to be provisioned with a cluster file system to manage locks between nodes, ensuring data consistency and availability. Historically, the only way to support these clustered databases was to use an RDM. That is, until the Shared VMDK became a feature.
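Before we get to that, it helps to make the clustered-disk setup concrete. Below is a rough pyVmomi sketch of what “use an RDM” means operationally: mapping a SAN LUN into a cluster node as a physical-mode RDM. The vCenter address, VM name, LUN path, and controller key are all placeholders I made up for illustration, and this is a sketch of the public vSphere API flow, not a production script.

```python
# Minimal sketch: attach a physical-mode RDM to a cluster node with pyVmomi.
# All names, paths, and keys below are placeholders/assumptions.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret")
vm = si.content.searchIndex.FindByDnsName(datacenter=None,
                                          dnsName="sql-fci-node1",
                                          vmSearch=True)

# Physical compatibility mode passes SCSI commands straight through to the
# LUN, which is what traditional shared-disk clustering (e.g. WSFC FCI) expects.
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.60000000000000000000000000000001"
backing.compatibilityMode = "physicalMode"
backing.diskMode = "independent_persistent"
backing.fileName = ""  # let vSphere create the RDM mapping file in the VM folder

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1000  # assumed key of an existing SCSI controller
disk.unitNumber = 1

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
change.device = disk

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
```

Note that the same mapping has to be presented to every node in the cluster, which is exactly where the LUN sprawl and “snowflake” management pain comes from.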
Shared Virtual Disks (Shared VMDKs)
The modern Shared VMDK feature was rolled out iteratively over the last 7 years. It first appeared in VMware Cloud on AWS and for vVols in vSphere 6.7, then made its way to vSAN in version 6.7 Update 3, and finally opened to the broader market for VMFS in vSphere 7.0.
This feature provided a native way for multiple virtual machines to access the same virtual disk file simultaneously. Operationally, this meant database cluster nodes could finally share a drive in a virtual environment without the operational headaches of an RDM. While the initial release in vSphere 7.x had some significant limitations, vSphere 8.x delivered key improvements, such as increasing cluster density and adding support for modern storage protocols beyond just Fibre Channel.
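For a picture of what that looks like under the covers, here is a rough pyVmomi sketch under a few assumptions: node 1 already owns an eager-zeroed thick disk on a datastore that supports clustered VMDKs, and we attach node 2 to that same .vmdk file through a SCSI controller with physical bus sharing. All names, paths, and keys are placeholders, so treat this as a sketch of the idea rather than a complete build script.

```python
# Rough sketch: attach node 1's existing shared VMDK to node 2 over a
# physically shared SCSI controller. Names/paths below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret")
node2 = si.content.searchIndex.FindByDnsName(datacenter=None,
                                             dnsName="sql-fci-node2",
                                             vmSearch=True)

# A dedicated SCSI controller for the clustered disk, bus sharing = physical.
ctrl = vim.vm.device.ParaVirtualSCSIController()
ctrl.busNumber = 1
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing
ctrl.key = -101  # temporary negative key so the disk below can reference it

# The same eager-zeroed thick .vmdk that node 1 already has attached.
backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.fileName = "[Datastore01] sql-fci-node1/sql-fci-data.vmdk"
backing.diskMode = "independent_persistent"
backing.eagerlyScrub = True
backing.thinProvisioned = False

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = -101
disk.unitNumber = 0

add = vim.vm.device.VirtualDeviceSpec.Operation.add
spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation=add, device=ctrl),
    vim.vm.device.VirtualDeviceSpec(operation=add, device=disk),
])
node2.ReconfigVM_Task(spec=spec)
Disconnect(si)
```

The key difference from the RDM example is that everything here lives inside vCenter and the datastore; no per-node LUN presentation is involved.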
This brings us to the most important question: Does the Shared VMDK feature make RDMs a legacy technology?
This is the question that I’ve been chewing on. The reality is, as IT operators, especially VMware operators, we would prefer RDMs to go away. They introduce major hurdles in backup and portability that make them snowflakes we need to take care of in the datacenter. So if we review the functionality of Shared VMDKs, do they solve this problem? I felt the only way to make this evaluation was with a simple chart, so let’s give it a try:
| Feature | Raw Device Mapping | Shared VMDK on VMFS | Winner |
| --- | --- | --- | --- |
| Performance (I/O & Latency) | A debate that I… | …don’t really want to type out | Research and draw your own conclusions |
| VMware Snapshot Support* | No | No | N/A; limitation on both |
| Live Storage vMotion Support | No, but SAN tools could keep this to a limited outage | No; the cluster must be fully shut down to move | Slight RDM advantage |
| Ease of Management | Simpler for the DBA, who interfaces with the VM and the SAN directly | Simpler for the VM admin who lives in vCenter | Depends on where responsibilities lie |
| Automation | Simpler for the DBA; uses DB platform and SAN APIs depending on the need | Depending on the automation, may need additional code to work with vSphere | Same as above: where are the lines of ownership? |
| Disk Extension | Can often be performed online by extending the volume and rescanning/extending in the guest OS** | Requires a full cluster shutdown** | RDM |
| Backup* | Volumes need to be backed up outside of VADP | Same; Shared VMDKs are recommended to be set as independent-persistent disks* (see the sketch after the table notes) | N/A; same outcome |
| Scale | Limited by the maximum number of LUNs per host (512 in vSphere 6.5+). Each RDM consumes one LUN, which can quickly exhaust host limits and hinder consolidation | More scalable, as multiple VMDKs can reside on a single LUN. Specific limit of 192 clustered VMDKs per host in vSphere 8.0. vSphere 8 also increased supported WSFC clusters per host from 3 to 16 | Shared VMDK |
| NVMe-oF Support | No | Yes | Shared VMDK |
* Independent-persistent disk operations: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/vsphere-virtual-machine-administration-guide-7-0/managing-virtual-machinesvm-admin/using-snapshots-to-manage-virtual-machinesvm-admin/take-snapshots-of-a-virtual-machinevm-admin/change-disk-mode-to-exclude-virtual-disks-from-snapshotsvm-admin.html
** There is a carve-out for Shared VMDKs on vVols (but vVols are being deprecated): https://knowledge.broadcom.com/external/article/342629/extension-of-a-clustered-shared-vmdk.html https://knowledge.broadcom.com/external/article/313472/microsoft-windows-server-failover-cluste.html
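To put the Backup row into practice, here is a small pyVmomi sketch of flipping a disk to independent-persistent so it is excluded from VM snapshots (and therefore from VADP-style backups), per the Broadcom doc linked above. The vCenter address, VM name, and disk label are placeholders, so treat it as a starting point rather than a drop-in script.

```python
# Minimal sketch: mark a clustered data disk as independent-persistent so it
# is skipped by VM snapshots. VM name and disk label are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret")
vm = si.content.searchIndex.FindByDnsName(datacenter=None,
                                          dnsName="sql-fci-node1",
                                          vmSearch=True)

changes = []
for dev in vm.config.hardware.device:
    # Placeholder match: the clustered data disk we want excluded from snapshots.
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == "Hard disk 2":
        dev.backing.diskMode = "independent_persistent"
        edit = vim.vm.device.VirtualDeviceSpec()
        edit.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
        edit.device = dev
        changes.append(edit)

if changes:
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))
Disconnect(si)
```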
So, after all that, what’s the final verdict? I have to admit, I was surprised. There’s a lot more gray area here than the “VMDK-or-bust” crowd would have you believe. I initially went into this research thinking that RDMs have no use anymore, but certain limitations made me realize that we are just not there yet. For me, one of the biggest limitations is disk extension on Shared VMDKs. The thought of scheduling a full cluster outage just to add a bit more space is a massive operational headache, and until that’s addressed in a future version, it remains a potential deal-breaker. There are also deeper portability nuances around how and when you can move a Shared VMDK; with good SAN tools, RDM portability can often be kept to minimal downtime. Still, neither option is a live migration. And having worked with a number of people who do extensive automation for cloning volumes for dev or business intelligence processes, I can understand the simplicity of coding Python or PowerShell when an RDM is in play.
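On that extension point, the “online” RDM grow path I’m describing looks roughly like this from the vSphere side: grow the LUN on the array, rescan the hosts’ HBAs so vSphere sees the new size, then rescan and extend inside the guest. Here is a tiny pyVmomi sketch of the host-side rescan step; the cluster inventory path is a placeholder I made up, and the in-guest rescan/extend still happens inside the VM.

```python
# Minimal sketch: after the array-side LUN grow, rescan HBAs on every host
# in the cluster so the larger RDM device is visible. Path is a placeholder.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret")
cluster = si.content.searchIndex.FindByInventoryPath("DC1/host/SQL-Cluster")

for host in cluster.host:
    host.configManager.storageSystem.RescanAllHba()

Disconnect(si)
```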
Of course, this isn’t a total knockout for Shared VMDKs. If you need NVMe-oF for your database volumes or require huge scale, there are clearly advantages. At the end of the day, my primary point is that reports of the RDM’s death have been greatly exaggerated. It still has a pulse and a purpose in today’s datacenters for very specific, niche scenarios.
But let’s be real, in the fast-moving world of IT, this advice will likely age like milk. See something you disagree with? Find me on LinkedIn and let’s talk; I’d love to hear your thoughts.