In this blog we will be setting up Proxmox with Pure Storage as a central iSCSI LVM device that provides shared storage for a datacenter cluster. It’s my personal opinion that NFS is far easier to set up and can still be a performant option with nconnect enabled, so I would recommend reviewing Using NFS on FlashArray for Proxmox if you want simplicity. Before we get started, let’s understand the prerequisite pieces that have been done already:
- At least 3 nodes deployed with Proxmox
- 3 nodes joined into a cluster
- Pure Storage FlashArray deployed
- iSCSI network configured
Almost all configuration will be done in a shell on the Proxmox hosts; this can be done in the shell on the web UI or over a typical SSH session. Let’s jump in!
1. Configure Proxmox Hosts in Purity and Create a LUN
First, retrieve the iSCSI initiator name (IQN) from each Proxmox host.
cat /etc/iscsi/initiatorname.iscsi
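The output will look something like this; the suffix after the colon is generated per host, so yours will differ (the value below is only an example):
InitiatorName=iqn.1993-08.org.debian:01:5f9a3b2c1d0e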
Use this IQN to create host objects in the Pure Storage Purity management interface. Create a host by going to Storage->Hosts and clicking the “+” icon. Give it a name and, under Personality, select “None.” Once you have created the host, add the Proxmox host’s IQN under the “Host Ports” section. Then create a new volume under Storage->Volumes by clicking the “+” icon, set the size you need, and connect your newly created hosts to this volume.


Once the hosts are configured in Purity, log in to the iSCSI targets from each Proxmox host.
iscsiadm -m node -L all
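If iscsiadm reports that there are no node records to log in to, the targets likely have not been discovered yet. A sendtargets discovery against one of the FlashArray’s iSCSI portal addresses will populate them (replace <portal-ip> with one of your array’s iSCSI interface IPs):
iscsiadm -m discovery -t sendtargets -p <portal-ip>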
2. Install Multipath Tools
To ensure high availability and load balancing for your iSCSI storage, install the multipath tools on each Proxmox host.
apt update
apt install multipath-tools
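Optionally, enable the service so it comes back automatically after a reboot:
systemctl enable multipathd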
3. Identify the Pure Storage WWID
Next, you need to identify the World Wide Identifier (WWID) of the Pure Storage volume. After logging into the iSCSI targets, new block devices will appear (e.g., /dev/sdb, /dev/sdc, etc.). Use the following command to get the WWID, replacing /dev/sdb with one of the new device names.
/lib/udev/scsi_id --whitelisted --device=/dev/sdb
This will output the WWID, which should look something like this:
3624a9370730d187406c14775008ef137
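If several new devices showed up and you are not sure which ones belong to the FlashArray, a quick loop like this prints the WWID for every SCSI disk (sda is typically the local boot disk):
for dev in /dev/sd?; do
  echo -n "$dev: "
  /lib/udev/scsi_id --whitelisted --device=$dev
done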
4. Configure Multipath
Add the WWID to the multipath WWIDs file (/etc/multipath/wwids). Do not edit the file directly; use the following command, replacing <<wwid>> with the actual WWID you retrieved.
multipath -a <<wwid>>
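You can confirm the WWID landed in the file with:
cat /etc/multipath/wwids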
Now, create the /etc/multipath.conf file with the following content. This configuration blacklists local devices and sets the correct parameters for Pure Storage.
blacklist {
    devnode "^(sda|sr|nvme|loop|fd|hd).*$"
}

defaults {
    polling_interval 10
}

devices {
    device {
        vendor               "NVME"
        product              "Pure Storage FlashArray"
        path_selector        "queue-length 0"
        path_grouping_policy group_by_prio
        prio                 ana
        failback             immediate
        user_friendly_names  no
        no_path_retry        0
        features             0
        dev_loss_tmo         60
    }
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_selector        "service-time 0"
        hardware_handler     "1 alua"
        path_grouping_policy group_by_prio
        prio                 alua
        failback             immediate
        path_checker         tur
        user_friendly_names  no
        no_path_retry        0
        features             0
    }
}
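Before starting the service, you can sanity-check the file; multipath -t dumps the configuration multipath will use (built-in defaults merged with your file), which is a quick way to catch syntax mistakes:
multipath -t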
Start the multipath service, then restart it so it picks up the new configuration.
systemctl start multipathd
systemctl restart multipathd
Verify that the multipath device is correctly configured and active.
multipath -ll
The output should show the Pure Storage device with its paths active and ready.
3624a9370730d187406c14775008ef137 dm-5 PURE,FlashArray
size=100G features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 33:0:0:1 sdb 8:16 active ready running
  |- 34:0:0:1 sdc 8:32 active ready running
  |- 35:0:0:1 sde 8:64 active ready running
  `- 36:0:0:1 sdd 8:48 active ready running
Keep in mind that this will need to be performed on all hosts in the cluster.
5. Create Shared Storage (LVM-Thin)
To use the storage as a shared resource for all Proxmox nodes, create an LVM-Thin pool. Perform these steps on only one Proxmox host.
First, identify the multipath device name using lsblk. The device will be listed under /dev/mapper/.
lsblk
Use the device name (in this case, /dev/mapper/3624a9370730d187406c14775008ef137) to create the physical volume, volume group, and LVM-Thin pool.
pvcreate /dev/mapper/3624a9370730d187406c14775008ef137
vgcreate pure-storage-vg /dev/mapper/3624a9370730d187406c14775008ef137
lvcreate -l 100%FREE --thinpool thinpool pure-storage-vg
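If you prefer the CLI over the Datacenter -> Storage dialog in the GUI, something along these lines registers the pool as a storage for the cluster (the storage name pure-thin is just an example; pick whatever you like):
pvesm add lvmthin pure-thin --vgname pure-storage-vg --thinpool thinpool --content rootdir,images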
6. Activate Storage on Other Proxmox Nodes
The newly created LVM-Thin pool will be visible in the Proxmox GUI on all nodes, but it may show as “unknown” on the other nodes. To activate it, run the following commands on the remaining Proxmox hosts.
vgscan
vgchange -ay
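To confirm the volume group and thin pool are now visible on that node, you can check with lvs and pvesm status:
lvs pure-storage-vg
pvesm status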
If the storage is still not showing correctly, you may need to ensure the iSCSI service is running and that the host is logged into the iSCSI targets. This can happen after a reboot, which is what I did in this scenario.
To check the status of the iSCSI service:
systemctl status iscsid
If it’s inactive, restart it:
systemctl restart iscsid
Then, log in to the iSCSI targets again:
iscsiadm -m node -L all
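To avoid having to do this by hand after every reboot, you can set the discovered node records to log in automatically at boot (this should update all existing records; you can double-check with iscsiadm -m node -P 1):
iscsiadm -m node -o update -n node.startup -v automatic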
After these steps, the shared LVM-Thin storage will be fully available and correctly configured on all Proxmox nodes in the cluster, ready for use with virtual machines. It should show up in the web UI looking something like this:

From here, you could test High Availability by creating virtual machines on this volume and turning off a host. If everything is configured correctly, you should see them restart on another host after a couple of minutes. If you have found errors or have some feedback for me, please reach out to me on LinkedIn!