Migrating from vVols to VMFS/NFS: Your Post-Deprecation Roadmap

Intro

It’s not common that tech companies “un-invent” the future, but that’s sure what this feels like. The tech world just got a major curveball, and we all need to decide how to swing.

The big question now is: What does this mean for customers, and what are the options? After dozens of customer conversations and a lot of thought on the issue, I wanted to break down the choices I see and how you can best approach the vVols deprecation news from Broadcom.

Let’s start with the facts. Broadcom has officially announced the deprecation of vVols, beginning with VCF and VVF 9.0 (you can see the details in their official KB article). While the KB states vVols will be “fully discontinued in a future release,” this isn’t an overnight emergency.

The most important date to circle is October 11, 2027—the end of general support for vSphere 8. This gives IT organizations a clear window to plan their next steps. I’ve also heard that if you reach out to your VMware account team, they can approve vVol support on VCF 9. Just treat it like a free vacation voucher… you’re going to have to sit through the vSAN timeshare presentation to get it.

The migration mechanism, if you are staying on VMware, is also straightforward: it’s a Storage vMotion.
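
If you prefer to script the move rather than click through the migration wizard, a Storage vMotion is a one-liner in PowerCLI. Here’s a minimal sketch; the vCenter address, VM name, and target datastore name are placeholders for illustration:

    # Connect to vCenter (prompts for credentials)
    Connect-VIServer -Server 'vcenter.example.com'

    # Storage vMotion: relocate the VM and all of its disks to the new datastore
    Move-VM -VM (Get-VM -Name 'sql-prod-01') -Datastore (Get-Datastore -Name 'vmfs-prod-01')

The same cmdlet works whether the destination is VMFS or NFS, and it can be wrapped in a loop for batch migrations.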

This simplifies the move, leaving us with two critical questions, both of which come down to planning:

  1. What do you move to?
  2. If you relied on specific vVols functionality, what are the best alternatives today?

In this blog, we will cover the following:

  • Review why you moved to vVols in the first place.
  • Understand the datastore types you can select.
  • Explore other FlashArray features that may provide vVol-like functionality.

Why Did We Choose vVols in the First Place?

Several key capabilities inspired the switch to vVols. It’s important to think through the functionality that attracted you in the first place when planning your move.

Let’s cover the three most common reasons I’ve seen:

  • Storage Policy Based Management (SPBM): This capability allowed vCenter policies to be attached to VMs or objects, which in turn enabled services on the backend array. This granularity allowed for flexible, simplified management of storage features. Features like QoS, snapshots, and replication could be configured in the policy and applied to specific VMs based on their needs.
  • Moving Away from Clustered File Systems: A major architectural difference was the move away from VMFS, allowing the hypervisor and storage to work as one cohesive unit via the VASA framework. Instead of a collection of guest VMs sitting on a single datastore, each VM’s components became independently managed volumes. This was especially beneficial for database refresh workflows, allowing users to snap production databases and use them to overwrite dev/test systems.
  • Enhanced Monitoring: Since each VM and vDisk became an individual object on the array, statistics could be bubbled up on a per-object basis. This made troubleshooting storage less complex, making it easier to pinpoint not just which VMs were “screaming” from an I/O or bandwidth perspective, but which specific vDisk.

The Path Forward: A High-Level Summary

The good news is that most of these outcomes can still be achieved without vVols.

While we will be diving deeper, it’s helpful to summarize the alternatives up-front.

  1. You will need to align storage policies with datastores. Primarily, this means maintaining consistency for local snapshots, remote replication, and QoS at the datastore level.
  2. Unfortunately, we are looking at clustered file systems again. VMFS is the obvious choice, but as we will discuss, NFS is a potential alternative. For databases, especially clustered ones, RDMs (Raw Device Mappings) remain a solid option for users who rely on database refresh workflows that leverage SAN tools.
  3. Per-VM analytics can be accomplished in one of two ways on Pure. You can use the VMware plug-in with VM analytics in Pure1 (for VMFS datastores), or you can use FlashArray on NFS with the VAAI plug-in to get per-VM statistics.

For some, this short summary may be enough to get started. But for those who love the details, let’s “double-click” on each of these points.

Storage Policies and Maintaining Consistency

This is arguably the most important part of preparing for a move. Storage Policy Based Management (SPBM) was a concept for a different era. It’s over a decade old, so it’s not surprising that many customers aren’t leveraging it extensively.

SPBM allows vCenter policies to be applied to individual virtual machines and their objects. These policies define the specific storage services a compatible SAN can expose, and the actual features exposed vary by array manufacturer. Ten to fifteen years ago, this was extensive. In the days of hybrid arrays with both flash and spinning disk, features included things like tier configuration, RAID type, cache allocation, snapshots, replication, and QoS. Some arrays even had you choose the parity profile laid down on the drives. Storage was fraught with complexity for good reason: the performance delta between flash and disk was huge.

Fast-forward to today, and SPBM makes less sense because modern arrays give us the following:

  • Modern arrays favor simplicity; features like Data Reduction and Ransomware Protection (SafeMode for Pure users) are enabled by default.
  • Modern arrays handle parity and dynamically balance performance and availability automatically.
  • Modern arrays for virtualization workloads run on all-flash media.

Because of this, the only real, configurable features in a storage policy for FlashArray users are snapshots, replication, and QoS. You can easily see which VMs have which policies under vCenter’s “Policies and Profiles” section.
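
If you’d rather pull that inventory with a script than browse the UI, the SPBM cmdlets in PowerCLI can produce the same view. A rough sketch, assuming an active Connect-VIServer session (policy names will obviously differ in your environment):

    # List every storage policy defined in vCenter
    Get-SpbmStoragePolicy | Select-Object Name, Description

    # Show which policy each VM is using, along with its compliance status
    Get-SpbmEntityConfiguration -VM (Get-VM) |
        Select-Object Entity, StoragePolicy, ComplianceStatus |
        Sort-Object StoragePolicy

Exporting that output to a CSV gives you a simple worksheet for mapping each policy to a target datastore.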

Let’s put this into action by reviewing a couple of profiles and what they mean for migration.

A Deeper Look at Policies

First, let’s look at a custom policy. This policy uses local snapshots, as you can see in its definition:

The next view shows all the VMs currently using this policy:

Next, let’s look at the “default” policy. This policy is devoid of storage services and simply allows a VM to run on a vVol container:

And here are the VMs on that default policy:


The Takeaway: Matching Policy to Datastore

Using these policies as examples, you can get a feel for how to review your current environment to plan a move. Let’s pretend these policies are the two that make up your environment:

  1. All VMs on the first custom policy must be moved to a VMFS or NFS datastore that has the exact same snapshot profile as the policy. This ensures the same level of protection for your VMs (see the scripted sketch after this list).
  2. All VMs on the default policy can be moved to a plain datastore with no special services enabled (no snapshots, replication, etc.).
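
Here is what that first bucket could look like as a script: find every VM attached to the custom policy and Storage vMotion it to a datastore whose array-side protection matches. This is only a sketch; the policy name and datastore name below are placeholders for illustration:

    # Placeholder names -- substitute your own policy and target datastore
    $policyName = 'FlashArray-Snap-Local'
    $targetDS   = Get-Datastore -Name 'vmfs-snap-local'

    # Find every VM currently attached to that policy
    $vms = Get-SpbmEntityConfiguration -VM (Get-VM) |
        Where-Object { $_.StoragePolicy.Name -eq $policyName } |
        Select-Object -ExpandProperty Entity

    # Storage vMotion each one to the datastore whose snapshot schedule matches the policy
    foreach ($vm in $vms) {
        Move-VM -VM $vm -Datastore $targetDS
    }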

Accounting for Database Refresh Operations

Moving back to a clustered datastore model is probably the most disappointing part of this change. The idea of an object-oriented, integrated hypervisor and storage stack was one of the most compelling directions VMware was heading.

The good news is that most administrators know VMFS and the balancing act that comes with it. Plus, the maturity of VMFS is not to be understated; it was never a bad file system.

The primary issue to address is how some shops used vVols for efficient database refresh operations. Because vVols made each vDisk an independent volume, it was easy to retrieve a volume’s ID, snapshot it, then copy and overwrite it onto another system. This was an operation usually handled through the Purity API, offloading what would be a network-intensive operation onto the backend array.

You can continue doing these types of DB refresh operations, but you will need to review the existing scripts you leverage. There are two primary options moving forward:

  • All on VMFS: You can accomplish the same flow, but it requires more steps. Specifically: mounting a VMFS datastore from a snapshot, Storage vMotioning the VM/disk to the original datastore, and then dismounting the snapshot datastore (a rough outline of this flow follows the list). (Note: My colleague Andy Yun has been working on a PowerShell script for this. Keep an eye out on his blog or on the community GitHub repository.)
  • Using Raw Device Mappings (RDMs): As I’ve written, RDMs offer a simpler approach to this specific database refresh operation. This may be one of the few compelling reasons left to leverage them, especially if you are using clustered databases like Oracle RAC or Windows FCIs.
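
To make the first option a bit more concrete, here is a very rough outline of the VMFS flow in PowerCLI and esxcli. The array-side steps (snapshotting the production volume and presenting a copy to the hosts) still happen through your array tooling and are only noted as comments, and every name below is a placeholder. Andy’s script will be the more complete reference; treat this as a sketch of the moving parts, not something to run as-is:

    # 1. On the array: snapshot the production volume, copy the snapshot to a new
    #    volume, and present that copy to the ESXi hosts (array-side tooling).

    # 2. Resignature the copied VMFS volume so it can be mounted alongside the original
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi-01.example.com') -V2
    $esxcli.storage.vmfs.snapshot.list.Invoke()          # confirm the unresolved copy is visible
    $esxcli.storage.vmfs.snapshot.resignature.Invoke(@{volumelabel = 'vmfs-sql-prod'})

    # 3. Resignatured datastores come back with a "snap-xxxxxxxx-" prefix
    $snapDS = Get-Datastore | Where-Object { $_.Name -like 'snap-*vmfs-sql-prod' }

    # 4. Attach the copied data disk to the dev VM (removing the stale dev disk and any
    #    in-guest steps are omitted here), then Storage vMotion it onto the dev datastore
    $devVM = Get-VM -Name 'sql-dev-01'
    $disk  = New-HardDisk -VM $devVM -DiskPath "[$($snapDS.Name)] sql-prod-01/sql-prod-01_1.vmdk"
    Move-HardDisk -HardDisk $disk -Datastore (Get-Datastore 'vmfs-sql-dev') -Confirm:$false

    # 5. Unmount and detach the snapshot datastore, then clean up the copied volume on the array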

If you’re on the VMware or storage team and are unsure whether your company performs operations like this… it likely doesn’t, and this consideration is probably not relevant to you.

What Datastore Do I Choose?

With vVols deprecated, the choice for a new datastore primarily comes down to VMFS (block) and NFS (file). The good news is that the “VMFS vs. NFS” debate is largely a relic of the past. On a modern, unified storage platform, both are highly performant and scalable. The VAAI advancements for both block (UNMAP, XCOPY) and NFS (full file clone, space reservation) mean that the technical gaps between them have narrowed significantly.
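
If you want to verify those offloads are actually active in your environment, esxcli reports the hardware-acceleration (VAAI) status per block device. A quick check through PowerCLI, with the host name as a placeholder:

    # VAAI / hardware-acceleration status for every block device on a host.
    # Review the ATS, Clone (XCOPY), Delete (UNMAP), and Zero statuses per device.
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi-01.example.com') -V2
    $esxcli.storage.core.device.vaai.status.get.Invoke()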

The choice today is less about performance and more about your operational preferences and specific application needs:

  • VMFS (Block): This is the classic, familiar choice for most VMware admins. It’s robust, mature, and ideal for nearly all workloads. If you have specific applications that still require shared block devices (like traditional Windows FCI or Oracle RAC clusters), you will likely lean toward VMFS and, as discussed previously, use RDMs where they make sense.
  • NFS (File): Historically, NFS was favored for its simplicity in management and scalability—no LUNs to manage, just a single, large datastore. Modern NFS has closed the performance gap with block significantly and is an excellent choice, especially if your team likes the idea of fewer datastores to manage. (A short example of presenting each datastore type follows this list.)
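
Whichever direction you go, presenting the new datastore is a quick PowerCLI call once the volume or file system has been created on the array. A minimal sketch; the device ID, export path, and names are placeholders:

    $vmhost = Get-VMHost 'esxi-01.example.com'

    # VMFS: create a datastore on a newly presented block volume (placeholder NAA ID)
    New-Datastore -VMHost $vmhost -Name 'vmfs-prod-01' -Vmfs `
        -Path 'naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

    # NFS: mount an export from the array's file interface (placeholder host and path)
    New-Datastore -VMHost $vmhost -Name 'nfs-prod-01' -Nfs `
        -NfsHost 'flasharray-file.example.com' -Path '/vmware-prod'

Remember that an NFS datastore is mounted per host, so the NFS line is typically run against every host in the cluster.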

Ultimately, you can’t go wrong with either. The decision now rests on what best fits your team’s existing workflows and any specific application clustering requirements you may have. The great part is that modern arrays support both file and block on the same device, so combinations that fit your needs are possible!

Per-VM Monitoring

The final item to review is how to replace the per-VM monitoring capability we enjoyed with vVols. Because VMFS uses shared datastores, the backend array typically only sees the aggregate performance of the LUN, not the I/O or bandwidth of an individual VM.

However, there are two excellent ways to regain this level of granular visibility:

  • VM Analytics in Pure1: Whether you choose VMFS or NFS, VM Analytics in Pure1 will report metrics down to the individual virtual disk. While this data isn’t viewed directly inside vCenter and requires an OVA-based collector to send it, it is incredibly powerful. Pure customers with the vSphere Plug-in are likely already familiar with this virtual appliance and can have it set up in no time. (Check out this blog on how to install the collector.) As a sneak peek, here are some views that demonstrate the level of detail available. Note: This is near real-time, so allow a small buffer for data processing.
  • Leverage NFS with Auto-Directory: If you want metrics associated with specific VMs directly on the FlashArray, you can accomplish this using NFS. (I’ve written extensively with a colleague on how NFS has changed for VMware workloads, which you can read here.) This capability allows each VM to be its own managed directory on the FlashArray, enabling statistics to be shown individually. Here are some examples of this view:

Wrap-Up

I hope this guide helps you map out your migration off of vVols. The great news is that it’s not a massive lift, and there are technologies to fill the gaps.

This post was specifically designed for the user planning to stay on VMware, focusing on the tools and offerings available within that ecosystem. (If you are looking at other paths, there are great resources available, like this guide on converting vVol volumes to Cinder volumes for OpenStack).

In a follow-up blog, I intend to cover how Pure Fusion can be leveraged to replace the SPBM functionality we discussed today.