The Unofficial Proxmox & Pure Storage Cookbook: iSCSI with Multipathing

In this blog we will be setting up Proxmox with Pure Storage as a central iSCSI LVM device that provides shared storage for a datacenter cluster. It's my personal opinion that NFS is far easier to set up and can still be a performant option with nconnect enabled, so if you prefer simplicity, I would recommend reviewing Using NFS on FlashArray for Proxmox instead. Before we get started, let's go over the prerequisites that have already been taken care of:

  • At least 3 nodes deployed with Proxmox
  • 3 nodes joined into a cluster
  • Pure Storage FlashArray deployed
  • iSCSI network configured

Almost all configuration will be done in a shell on the Proxmox hosts; this can be done from the shell in the web UI or over a typical SSH session. Let's jump in!

1. Configure Proxmox Hosts in Purity and Create a LUN

First, retrieve the iSCSI initiator name (IQN) from each Proxmox host.
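If open-iscsi is installed (it ships with Proxmox VE), the IQN can be read from the initiator name file:

cat /etc/iscsi/initiatorname.iscsi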

Use this IQN to create host objects in the Pure Storage Purity management interface. Create a host by going to Storage->Hosts and clicking the "+" icon; give it a name and select "None" under Personality. Once you have created the host, add the Proxmox host's IQN under the "Host Ports" section. Next, create a new volume by going to Storage->Volumes and clicking the "+" icon. Set the volume size you need and connect your newly created hosts to this LUN.

Volume view of the Purity configuration
Host view of Purity configuration

Once the hosts are configured in Purity, log in to the iSCSI targets from each Proxmox host.
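A typical discovery and login with iscsiadm looks like the following, where 192.168.1.100 is a stand-in for one of your FlashArray iSCSI portal IPs:

iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node --login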

2. Install Multipath Tools

To ensure high availability and load balancing for your iSCSI storage, install the multipath tools on each Proxmox host.
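On Proxmox (Debian-based), this is the multipath-tools package:

apt update
apt install multipath-tools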

3. Identify the Pure Storage WWID

Next, you need to identify the World Wide Identifier (WWID) of the Pure Storage volume. After logging into the iSCSI targets, new block devices will appear (e.g., /dev/sdb, /dev/sdc, etc.). Use the following command to get the WWID, replacing /dev/sdb with one of the new device names.
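On Debian-based systems the scsi_id helper reports the WWID; the path below assumes the stock Debian location:

/lib/udev/scsi_id -g -u -d /dev/sdb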

This will output the WWID, which should look something like this:

3624a9370730d187406c14775008ef137

4. Configure Multipath

Add the WWID to multipath's wwids file. Do not edit the file directly; use the following command, replacing <<wwid>> with the actual WWID you retrieved.
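The multipath utility can add the entry to the wwids file for you; note that depending on your multipath-tools version, -a may expect a device path (e.g., /dev/sdb) rather than the WWID itself:

multipath -a <<wwid>>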

Now, create /etc/multipath.conf with the following content. This configuration blacklists local devices and sets the recommended parameters for Pure Storage.
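As a starting point, a configuration along these lines blacklists common local device names and applies settings widely recommended for FlashArray; treat it as a sketch and verify the device section against Pure's current Linux recommended-settings documentation:

defaults {
    polling_interval 10
}

blacklist {
    # Local, non-SAN devices; you may also want to blacklist your boot disk by its WWID
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
}

devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_grouping_policy group_by_prio
        path_selector        "service-time 0"
        hardware_handler     "1 alua"
        prio                 alua
        path_checker         tur
        failback             immediate
        fast_io_fail_tmo     10
        dev_loss_tmo         60
    }
}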

Start the multipath service.
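Enable the daemon so it survives reboots and restart it so it picks up the new configuration:

systemctl enable multipathd
systemctl restart multipathd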

Verify that the multipath device is correctly configured and active.
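multipath -ll lists each multipath device along with its paths and their states:

multipath -ll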

The output should show the Pure Storage device with its paths active and ready.

Keep in mind that this will need to be performed on all hosts in the cluster.

5. Create Shared Storage (LVM-Thin)

To use the storage as a shared resource for all Proxmox nodes, create an LVM-Thin pool. Perform these steps on only one Proxmox host.

First, identify the multipath device name using lsblk. The device will be listed under /dev/mapper/.

lsblk

Use the device name (in this case, /dev/mapper/3624a9370730d187406c14775008ef137) to create the physical volume, volume group, and LVM-Thin pool.

pvcreate /dev/mapper/3624a9370730d187406c14775008ef137
vgcreate pure-storage-vg /dev/mapper/3624a9370730d187406c14775008ef137
lvcreate -l 100%FREE --thinpool thinpool pure-storage-vg

6. Activate Storage on Other Proxmox Nodes

The newly created LVM-Thin pool will be visible in the Proxmox GUI on all nodes, but it may show as “unknown” on the other nodes. To activate it, run the following commands on the remaining Proxmox hosts.
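The exact commands can vary, but rescanning for LVM metadata and activating the volume group is usually enough; pure-storage-vg is the volume group created in the previous step:

pvscan
vgscan
vgchange -ay pure-storage-vg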

If the storage is still not showing correctly, you may need to ensure the iSCSI service is running and that the host is logged in to the iSCSI targets. This can happen after a reboot, which is what I went ahead and did in this scenario.

To check the status of the iSCSI service:
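On Proxmox the relevant units are iscsid and open-iscsi:

systemctl status iscsid open-iscsi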

If it’s inactive, restart it:
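For example, restarting both units:

systemctl restart iscsid open-iscsi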

Then, log in to the iSCSI targets again:
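This logs back in to all previously discovered targets:

iscsiadm -m node --login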

After these steps, the shared LVM-Thin storage will be fully available and correctly configured on all Proxmox nodes in the cluster, ready for use with virtual machines. It should show up in the web UI looking something like this:

From here, you could test high availability by creating virtual machines on this volume and powering off a host. If everything is configured correctly, you should see the VM restart on another host after a couple of minutes. If you have found errors or have some feedback for me, please reach out to me on LinkedIn!