
#019 - Moving Disks Between TrueNAS VMs in Proxmox


The Challenge
#

I wanted to perform a fresh installation of TrueNAS SCALE and migrate the disks from a previous TrueNAS CORE setup. Since the old TrueNAS CORE installation (running on VM 100) managed the disks itself—rather than Proxmox—I first had to detach those disks from VM 100 and then attach them to the new VM (ID 103), which would be running TrueNAS SCALE. Below, I’ll show the steps I took to accomplish this.

IMPORTANT: Before detaching the disks, back up any encryption keys from the old TrueNAS CORE installation. Without them you will not be able to import and unlock the pool in the new TrueNAS SCALE environment.


Solution Overview
#

Proxmox Disk Types
#

  • Proxmox-Managed Disks: Appear in VM config as local-zfs:vm-<ID>-disk-<X> or local-lvm:vm-<ID>-disk-<X>. These can be moved with the built-in Proxmox command qm move-disk.
  • Pass-Through/Raw Disks: Appear as /dev/disk/by-id/ata-... or /dev/sdX. These are not managed by Proxmox storage. You must detach them from the old VM and attach them to the new VM manually.
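
For context, here is what a hypothetical excerpt of qm config 100 might look like with both types present (device IDs and sizes are placeholders, not from my actual setup):

# Proxmox-managed boot disk, lives on a Proxmox storage
scsi0: local-zfs:vm-100-disk-0,size=32G
# Pass-through physical disk, referenced by its stable device ID
scsi1: /dev/disk/by-id/ata-DISKID1,size=3907018584K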

Steps to Move Disks
#

  1. Shut down the source VM (VM 100).
  2. Detach the disks: Proxmox-managed disks can be reassigned with qm move-disk, while pass-through disks are detached by removing them from the VM config (qm set 100 -delete scsiX).
  3. Attach those disks to the target VM (VM 103) using qm set 103 -scsiX ....
  4. Confirm the VM config (via qm config 103) to ensure disks are recognized.
  5. Start the new VM (VM 103) and proceed with any OS-level configuration (e.g., importing a ZFS pool in TrueNAS).

Full Commands for Pass-Through Disks
#

Let’s say you have pass-through disks on VM 100 that appear as /dev/disk/by-id/ata-DISKID1, /dev/disk/by-id/ata-DISKID2, etc.:

# (Optional) Check which disks are currently assigned to VM 100
qm config 100

# 1) Shut down VM 100
qm stop 100

# 2) Detach each pass-through disk from VM 100
qm set 100 -delete scsi1
qm set 100 -delete scsi2
qm set 100 -delete scsi3
qm set 100 -delete scsi4
qm set 100 -delete scsi5
qm set 100 -delete scsi6
qm set 100 -delete scsi7

# 3) Attach these same disks to VM 103
qm set 103 -scsi1 /dev/disk/by-id/ata-DISKID1
qm set 103 -scsi2 /dev/disk/by-id/ata-DISKID2
qm set 103 -scsi3 /dev/disk/by-id/ata-DISKID3
qm set 103 -scsi4 /dev/disk/by-id/ata-DISKID4
qm set 103 -scsi5 /dev/disk/by-id/ata-DISKID5
qm set 103 -scsi6 /dev/disk/by-id/ata-DISKID6
qm set 103 -scsi7 /dev/disk/by-id/ata-DISKID7

# 4) Confirm new config
qm config 103

# 5) Start VM 103
qm start 103

If instead you are moving a Proxmox-managed disk (for example, local-zfs:vm-100-disk-0), use:

# 1) Shut down VM 100
qm stop 100

# 2) Move the disk using Proxmox's built-in command
qm move-disk 100 scsi0 --target-vmid 103

# 3) Check config
qm config 103

# 4) Start VM 103
qm start 103
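
Recent Proxmox versions also let you choose which slot the disk gets on the target VM via the --target-disk option. This is an optional variation; check qm help move-disk on your node to confirm the option is available in your version:

# Reassign scsi0 from VM 100 to VM 103 and attach it there as scsi1
qm move-disk 100 scsi0 --target-vmid 103 --target-disk scsi1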

Bash Script Example
#

Below is a basic script (named move-disks.sh) that detaches several pass-through disks from VM 100 and attaches them to VM 103 automatically. Replace DISKID1, DISKID2, etc. with your actual device IDs.

#!/usr/bin/env bash
#
# move-disks.sh
# Detach specific SCSI devices from VM 100 and attach them to VM 103.
# ---------------------------------------------------------------

# Optional: stop VM 100 (uncomment if you want the script to do this)
# qm stop 100

echo "Detaching disks from VM 100..."
qm set 100 -delete scsi1
qm set 100 -delete scsi2
qm set 100 -delete scsi3
qm set 100 -delete scsi4
qm set 100 -delete scsi5
qm set 100 -delete scsi6
qm set 100 -delete scsi7

echo "Attaching disks to VM 103..."
qm set 103 -scsi1 /dev/disk/by-id/ata-DISKID1
qm set 103 -scsi2 /dev/disk/by-id/ata-DISKID2
qm set 103 -scsi3 /dev/disk/by-id/ata-DISKID3
qm set 103 -scsi4 /dev/disk/by-id/ata-DISKID4
qm set 103 -scsi5 /dev/disk/by-id/ata-DISKID5
qm set 103 -scsi6 /dev/disk/by-id/ata-DISKID6
qm set 103 -scsi7 /dev/disk/by-id/ata-DISKID7

echo "Done! Verify with:"
echo "  qm config 100"
echo "  qm config 103"

  1. Save the file as move-disks.sh.
  2. Make it executable: chmod +x move-disks.sh.
  3. Run: ./move-disks.sh.

After running, you can confirm the changes by checking the hardware list in the Proxmox UI or by issuing qm config 103.


Importing a ZFS Pool in TrueNAS
#

Once the disks are attached to your TrueNAS VM (VM 103), TrueNAS will see them as physical disks. To import an existing ZFS pool (even an encrypted one) that resides on those disks, follow one of the methods below.
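
Before importing, it can help to confirm from the TrueNAS shell that all the attached disks are actually visible inside the VM. TrueNAS SCALE is Linux-based, so a quick lsblk check works:

# List attached block devices with size, serial number, and model
lsblk -d -o NAME,SIZE,SERIAL,MODEL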

1) Via the Web Interface
#

  1. Log in to TrueNAS (web UI).
  2. Go to Storage.
  3. Click Import Pool (depending on your TrueNAS version, this may instead appear as Add → Import an existing pool).
  4. TrueNAS will list any detected pools on the newly attached disks.
  5. Select the pool you want to import, then click Import.
  6. If the pool is encrypted, you may be prompted for a key or passphrase. Provide the necessary information to unlock the pool.
  7. Once imported (and unlocked, if necessary), verify the pool is listed under Storage → Pools.

2) Via the Shell
#

  1. Open the TrueNAS Shell (left menu or SSH).
  2. Run:
    zpool import
    
    This shows pools available for import.
  3. Import your pool:
    zpool import <poolname>
    
    • If the pool was not cleanly exported, you might need to force it:
      zpool import -f <poolname>
      
  4. If the pool is encrypted, unlock it using:
    zfs load-key <poolname>
    
    You may need to provide a key file or passphrase, depending on your encryption setup.
  5. Confirm the import status:
    zpool status
    
    You should see the newly imported pool and its disks.
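
For reference, here are the same shell steps condensed into one sequence, assuming a pool named tank (a placeholder; substitute your actual pool name) that uses ZFS native encryption:

# Import the pool (add -f only if it was not cleanly exported)
zpool import -f tank
# Load encryption keys for the pool's root dataset and any children (-r)
zfs load-key -r tank
# Mount all datasets and check pool health
zfs mount -a
zpool status tank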

Note: Always keep a secure backup of your encryption key or passphrase. If you lose it, you will not be able to unlock or access your data on the encrypted pool.


Conclusion
#

By properly detaching and attaching disks in Proxmox, you can reassign physical disks or Proxmox-managed disks from one VM (ID 100) to another (ID 103). If you’re passing these disks into TrueNAS, simply follow the import pool steps to make them accessible inside TrueNAS. This straightforward process makes it easy to migrate storage or reorganize your disk usage across VMs while retaining all data and ZFS configurations.