This blog provides the unofficial steps to configure Proxmox VE to use an NFS share from a Pure Storage FlashArray. This gives your Proxmox hosts a shared volume so that you can leverage features like High Availability. I will also include a performance optimization using `nconnect`. This is my preferred configuration, as I find iSCSI to be more manual and cumbersome; however, if you prefer to set up with iSCSI, you can check that out here. A couple of prerequisites to establish our starting point:
- Proxmox hosts installed and configured into a Proxmox cluster
- FlashArray installed and configured with the file services enabled
- File network configured
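If you want to confirm the cluster prerequisite from the shell before starting, the standard Proxmox cluster tooling makes this a quick check (nothing here is specific to the NFS setup):

```bash
# Show cluster name, quorum state, and member nodes; every host that will
# mount the NFS storage should appear in the output.
pvecm status
pvecm nodes
```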
Part 1: Configure NFS on Pure Storage Purity
The workflow in Purity involves creating a filesystem, defining an export policy, and then creating a directory that uses that policy.
1. Create a Filesystem
First, create a new filesystem on the FlashArray to serve as the container for your NFS shares. In Purity, go to Storage -> File Systems and click the “+” icon.
2. Create an NFS Export Policy
Next, define a policy that will govern access to the NFS export.
- Navigate to Storage -> Policies and create a new policy.
- Add a new rule with the following settings:
  - Client: Specify the IP address, subnet, or hostname of your Proxmox hosts (e.g., `10.21.102.0/24`).
  - Access: Set to `read-write`.
  - Permission: Set to `no-root-squash`. This allows the root user on the Proxmox host to have root privileges on the NFS mount, which is often required for storing virtual machine images.
  - NFS Version: Select your desired NFS version (e.g., `NFSv4`).
3. Create a Directory and Apply the Policy
Now, create the specific directory that Proxmox will mount and apply the policy to it.
- Navigate to Storage -> File Systems and select the filesystem you created.
- Within the filesystem, create a new directory. This directory name will be your NFS export path (e.g., `/pxmx-nfs-dir-01`).
- Apply the export policy you created in the previous step to this directory.
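That completes the Purity side. Before moving on to Proxmox, you can optionally sanity-check the export from one of your hosts with a temporary manual mount. This is just a quick sketch using the example file VIP and directory from this post; substitute your own values:

```bash
# Temporary test mount of the export from a Proxmox host
mkdir -p /mnt/nfs-test
mount -t nfs4 10.21.228.58:/pxmx-nfs-dir-01 /mnt/nfs-test

# Confirm read/write access as root (this works because of no-root-squash)
touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test

# Clean up; Proxmox will manage the real mount itself in Part 2
umount /mnt/nfs-test
rmdir /mnt/nfs-test
```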

Part 2: Configure Proxmox VE
With the NFS share configured on the FlashArray, you can now add it as storage in Proxmox.
1. Add NFS Storage
- In the Proxmox VE web interface, navigate to the Datacenter -> Storage view.
- Click Add and select NFS.
- Fill in the required fields:
  - ID: A descriptive name for the storage in Proxmox (e.g., `pxmx-nfs-dir-01`).
  - Server: The IP address of the NFS file VIP on your Pure Storage FlashArray (see the note below if you need to find this address).
  - Export: The full path to the directory you created on the FlashArray (e.g., `/pxmx-nfs-dir-01`).
  - Content: Select all content types you intend to store (e.g., `Disk image`, `ISO image`, `Container`).
- Click Add. The storage will now be available to all nodes in the cluster.
Note: If you aren’t sure what your file VIP is, you can find it in Purity under Settings -> Network -> Connectors. In the Ethernet section you should see a “filevip” interface and its associated IP address.
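If you prefer the command line, the same storage definition can be added with `pvesm` from any node. This is just a sketch using the example values from this post; adjust the ID, server, export, and content types to match your environment:

```bash
# Add the NFS storage cluster-wide (it is written to /etc/pve/storage.cfg)
pvesm add nfs pxmx-nfs-dir-01 \
    --server 10.21.228.58 \
    --export /pxmx-nfs-dir-01 \
    --content images,iso,rootdir

# Confirm the storage is active on this node
pvesm status
```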

2. Optimize Performance with nconnect
For improved throughput, it is highly recommended to enable the `nconnect` mount option. This allows a single NFS mount to use multiple TCP connections.
Run the following command on any Proxmox host to add the option to the storage configuration. Replace `pxmx-nfs-dir-01` with the ID of your storage and `8` with the desired number of connections.
pvesm set pxmx-nfs-dir-01 --options nconnect=8
This change is automatically propagated to all nodes in the cluster.
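Keep in mind that `nconnect` is applied at mount time, so if the share was already mounted on a node before you set the option, it only takes effect after a remount. A minimal sketch, assuming no guests are actively using the storage on that node:

```bash
# Unmount the share; pvestatd should remount it shortly using the options
# now stored in /etc/pve/storage.cfg
umount /mnt/pve/pxmx-nfs-dir-01

# Confirm the mount came back with nconnect applied
grep pxmx-nfs-dir-01 /proc/mounts
```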
Part 3: Verification
Verify that the configuration has been applied correctly.
1. Check the Storage Configuration
View the contents of the Proxmox storage configuration file to ensure the `nconnect` option is present.
cat /etc/pve/storage.cfg
The entry for your NFS storage should now include an `options nconnect=8` line:
nfs: pxmx-nfs-dir-01
    export /pxmx-nfs-dir-01
    path /mnt/pve/pxmx-nfs-dir-01
    server 10.21.228.58
    content iso,rootdir,images
    options nconnect=8
    prune-backups keep-all=1
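Since /etc/pve is the pmxcfs cluster filesystem, this file is identical on every node. If you want to be extra sure, you can spot-check a couple of other hosts over SSH; the node names below are hypothetical placeholders:

```bash
# Show the storage entry on other cluster nodes (substitute your node names)
for node in pve-node-02 pve-node-03; do
    echo "== $node =="
    ssh "$node" "grep -A6 'nfs: pxmx-nfs-dir-01' /etc/pve/storage.cfg"
done
```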
2. Verify the Live Mount and Connections
On any Proxmox node, check the active mount options to confirm that `nconnect` is in use.
cat /proc/mounts | grep pxmx-nfs-dir-01
The output should show `nconnect=8` in the list of mount options:
10.21.228.58:/pxmx-nfs-dir-01 /mnt/pve/pxmx-nfs-dir-01 nfs4 rw,relatime,...,nconnect=8,...
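Alternatively, `nfsstat -m` (from the nfs-common package, which provides the NFS client utilities on Proxmox hosts) prints a per-mount summary of the active options:

```bash
# Lists each mounted NFS filesystem with its mount flags, including nconnect
nfsstat -m
```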
Finally, verify that multiple TCP connections are established to the NFS server’s port (2049). Replace `<NFS_SERVER_IP>` with your Pure Storage NFS IP address.
ss -an | grep 'ESTAB' | grep '<NFS_SERVER_IP>:2049'
You should see multiple established connections from the Proxmox host to the NFS server, confirming that `nconnect` is working correctly.
tcp ESTAB 0 0 10.21.102.91:687 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:745 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:802 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:742 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:860 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:1005 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:744 10.21.228.58:2049
tcp ESTAB 0 0 10.21.102.91:772 10.21.228.58:2049
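As a final, optional step, you can sanity-check throughput with a quick sequential-read test using `fio` against the mounted storage path (install it with `apt install fio` if it is not already present). This is only a rough sketch; the job name, size, and runtime are arbitrary, and fio leaves its test files behind, so clean them up afterwards:

```bash
# Sequential 1M reads, 4 jobs, 30 seconds against the NFS-backed storage path.
# Adjust --directory to match your storage ID.
fio --name=nfs-read-test \
    --directory=/mnt/pve/pxmx-nfs-dir-01 \
    --rw=read --bs=1M --size=2G \
    --numjobs=4 --direct=1 \
    --time_based --runtime=30 \
    --group_reporting

# Remove the files fio created for the test
rm -f /mnt/pve/pxmx-nfs-dir-01/nfs-read-test.*
```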