Proxmox VE 5.3 released
Posted on: 12/05/2018 09:36 AM

Proxmox Server Solutions has released a new version of its virtualization solution Proxmox VE. The new version is based on Debian GNU/Linux 9.6 with a modified Linux kernel 4.15.


Proxmox Server Solutions GmbH today unveiled Proxmox VE 5.3, its latest open-source server virtualization management platform. Proxmox VE is based on Debian Stretch 9.6 with a modified Linux Kernel 4.15. Ceph Storage has been updated to version 12.2.8 (Luminous LTS, stable), and is packaged by Proxmox.

Proxmox VE and CephFS
Proxmox VE 5.3 now includes CephFS in its web-based management interface, thus expanding its already comprehensive list of supported file and block storage types. CephFS is a distributed, POSIX-compliant file system that builds on top of a Ceph storage cluster. Like Ceph RBD (RADOS Block Device), which is already integrated into Proxmox VE, CephFS now serves as an alternative interface to Ceph storage. On CephFS, Proxmox VE allows storing VZDump backup files, ISO images, and container templates. The distributed file system CephFS eliminates the need for external file storage such as NFS or Samba, and thus helps reduce hardware costs and simplifies management.
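As a sketch of what such a setup looks like, a CephFS storage definition in Proxmox VE's `/etc/pve/storage.cfg` could resemble the following (the storage ID `cephfs-storage` is a placeholder; the `content` types correspond to VZDump backups, ISO images, and container templates mentioned above):

```
cephfs: cephfs-storage
        path /mnt/pve/cephfs-storage
        content backup,iso,vztmpl
```

Entries like this are normally generated by the web interface itself when the CephFS storage is added there, so hand-editing the file is rarely required.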

The CephFS file system can be created and configured with just a few clicks in the Proxmox VE management interface. To deploy CephFS, users need a working Ceph storage cluster and a Ceph Metadata Server (MDS) node, which can also be created in the Proxmox VE interface. The MDS daemon separates metadata from data and stores the metadata for the Ceph file system. At least one MDS is needed, but it's recommended to deploy multiple MDS nodes to improve availability and avoid a single point of failure. If several MDS nodes are created, only one will be marked as 'active' while the others stay 'passive' until they are needed in case the active one fails.
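The same steps can also be performed on the command line. A minimal sketch, assuming the commands are run on a Proxmox VE 5.3 node that is already part of a working Ceph cluster (the storage name `cephfs` is a placeholder):

```shell
# Create an MDS daemon on this node; repeat on further nodes
# to get standby MDS instances for failover.
pveceph mds create

# Create the CephFS file system and register it as a storage
# in Proxmox VE in one step.
pveceph fs create --name cephfs --add-storage
```

These commands only exist on a Proxmox VE node, so they are shown here as an illustration rather than something to run on a generic system.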

Further Improvements in Proxmox VE 5.3
Proxmox VE 5.3 brings many improvements in storage management. Via the disk management, it is possible to easily add ZFS RAID volumes, LVM and LVM-thin pools, as well as additional simple disks with a traditional file system. The existing ZFS over iSCSI storage plug-in can now access a LIO target in the Linux kernel. Nesting is enabled for LXC containers, making it possible to use LXC or LXD inside a container. Also, access to an NFS or CIFS/Samba server can be configured inside containers. For the keen and adventurous user, Proxmox VE brings a simplified configuration of PCI passthrough and virtual GPUs (vGPUs, such as Intel KVMGT), which is now even possible via the web GUI.
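The new container features mentioned above can also be toggled per container from the command line. A hedged sketch, assuming an existing container with the hypothetical ID 100:

```shell
# Enable nesting for container 100, allowing LXC/LXD inside it.
pct set 100 --features nesting=1

# Additionally allow NFS and CIFS mounts inside the container
# (quoted because of the semicolon in the option value).
pct set 100 --features "nesting=1,mount=nfs;cifs"
```

As with `pveceph`, the `pct` tool is only available on a Proxmox VE node; the web GUI exposes the same options in the container's configuration.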

Countless bugfixes and smaller improvements are listed in the release notes and can be found in detail in the Proxmox bugtracker or in the Git repository.

Printed from Linux Compatible