Proxmox Hosting

Best method for setting up a Proxmox VE guest - Linux

When creating a virtual machine in Proxmox VE, it can be hard to know which options to choose. This article walks through the recommended settings for Linux guests on our hardware.

Notes:

  • We like to enable advanced settings so you will see those options in our screenshots. In some cases, you may need to set options that are considered "advanced".
  1. Click on Create VM and provide a valid VM ID and Name. We recommend enabling "Start at boot" so that your VM starts automatically if the node is rebooted. This is optional and can be changed later.
  2. Select your ISO file and configure the Guest OS settings appropriately. If you leave the type as Linux and try to install Windows, Windows will not install properly.
  3. Configure the System options:
     • Graphics: Use the Default selection for standard console servers (no GUI). If you plan to run a GUI on your Linux VM, you may see better performance with VMware compatible. VirtIO-GPU and VirGL GPU can offload work to the host GPU, but bear in mind that these are enterprise servers with only basic on-board graphics. We have seen Ubuntu Server fail to boot with VMware compatible graphics, so you may want to leave this on Default until the OS is installed, then change it afterwards.
     • Machine: The default i440fx passes hardware through as PCI, while q35 passes PCIe through natively. Since these are enterprise servers with no PCIe expansion slots filled, you can keep the default i440fx.
     • SCSI Controller: VirtIO SCSI single provides better performance.
     • Qemu Agent: We recommend enabling it.
     • BIOS: Select OVMF if you wish to use UEFI; otherwise the default SeaBIOS provides legacy boot.
     • TPM: If you wish to use built-in OS encryption, tick the box to add a TPM. You can select either version 1.2 or 2.0.
  4. Depending on your storage needs, SATA will work for most systems. If you plan on adding more than 6 drives, select SCSI. Important: By default, Proxmox VE presents disks to the guest OS as HDDs. If you are placing the disk on SSD storage, enable SSD emulation so the guest OS sees the drive as an SSD and behaves accordingly (e.g. favoring random reads/writes over sequential ones for more even SSD wear). Even though these are enterprise SSDs, guests that do not know they are on SSDs can wear them down faster. If the disk is on HDD storage, you do not need to enable SSD emulation.
  5. CPU sockets and cores can be adjusted if needed. Modern operating systems are multi-threaded, and we recommend no fewer than 2 CPU cores; if you expect a heavier workload, increase the core count. From a performance standpoint, it makes little difference whether you have 1 CPU socket with 2 cores or 2 CPU sockets with 1 core each; the socket setting mostly matters for software that is licensed per CPU socket. The total number of CPU cores across all VMs can exceed the total number of cores on the host, but a single VM cannot. For example, if your host has 24 cores, you can run 5 VMs with 5 cores each (25 in total), but no single VM can have more than 24 cores. For CPU type, the default kvm64 generally works; however, Red Hat Enterprise Linux 9 (and derivatives such as AlmaLinux 9 and Rocky Linux 9) requires CPU features that kvm64 does not expose, which causes kernel panics, so for those guests the CPU type needs to be changed to Host in order to boot properly.
  6. Set the memory for the VM. Depending on the workload, 2048 MB (2 GB) should be the bare minimum for a non-GUI VM. For a VM that will run GNOME, KDE, Cinnamon, or another similar desktop environment, 4096 MB (4 GB) is the recommended minimum; for LXDE or other similar "lite" DEs, 2 GB should suffice. Keep in mind that Proxmox VE also uses KSM (Kernel Samepage Merging), a memory-deduplication feature that lets the host re-map identical virtual pages to the same physical page, allowing you to overallocate memory. KSM does carry security risks, such as side-channel attacks.
  7. Set the network adapter model to VirtIO for best performance. Select the appropriate bridge and disable the firewall (unless you plan to use that feature).
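The same configuration can also be applied from the Proxmox VE host shell with the `qm` CLI. The sketch below mirrors the steps above; the VM ID (100), VM name, storage pool (local-lvm), bridge (vmbr0), and ISO path are placeholder assumptions, so substitute your own values.

```shell
# Step 1: create the VM, start at boot, Linux guest type, attach the install ISO
qm create 100 --name linux-guest --onboot 1 --ostype l26 \
  --cdrom local:iso/ubuntu-22.04-live-server-amd64.iso

# Step 3: VirtIO SCSI single controller and the QEMU guest agent
#         (Default graphics, i440fx machine, and SeaBIOS are left as-is)
qm set 100 --scsihw virtio-scsi-single --agent enabled=1

# Step 4: 32 GB disk on SSD-backed storage, with SSD emulation enabled
qm set 100 --scsi0 local-lvm:32,ssd=1

# Step 5: 1 socket x 2 cores; use "--cpu host" instead for RHEL 9 based guests
qm set 100 --sockets 1 --cores 2 --cpu kvm64

# Step 6: 4 GB of memory
qm set 100 --memory 4096

# Step 7: VirtIO NIC on the bridge, with the Proxmox firewall disabled
qm set 100 --net0 virtio,bridge=vmbr0,firewall=0
```

These are host-side configuration commands; they must be run as root on the Proxmox VE node itself.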

Performance Testing

After running some performance tests, here are some comparisons. On the left is an Ubuntu Server 22.04 LTS VM with all the Proxmox VE defaults; on the right is an identical Ubuntu Server 22.04 LTS VM with the configuration described above.

These are the disk results: on the left, the VM with the Proxmox VE defaults; on the right, the VM configured using the options above. These tests were performed on enterprise-grade WD HDDs.
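If you want to run a comparable disk benchmark inside your own guest, `fio` is a common choice. This is a minimal sketch, not the exact parameters we used; the test file path, size, and runtime are arbitrary illustrative values.

```shell
# 4 KiB random read/write benchmark against a scratch file.
# Point --filename at a file on the disk under test; avoid tmpfs paths,
# since --direct=1 (bypassing the page cache) may not work there.
fio --name=randrw-test \
    --filename=/var/tmp/fio-testfile --size=256M \
    --rw=randrw --bs=4k --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```

Running this once with the default VM settings and once with the settings above should reproduce the kind of comparison shown here.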

Article Information
  • Article ID: 304
  • Category: Proxmox