KVM: qcow2 vs raw
LVM direct, or raw/qcow2 image files?
Creating a new VM: check that the VM starts fine, then, from the Proxmox UI, move the drive to the main ZFS datastore.

"A KVM guest using an LVM logical volume as storage?" I've never heard of a "KVM logical volume". However, here's a quick guide to help you select. Choose raw if: absolute performance is the top priority.

Where is it going to pull the qcow2 from? Due to its size, right now it resides on a CIFS share.

KVM is open source software that enables you to run virtual machines at near physical hardware speed on native Linux machines.

Since I don't think it's possible to convert while specifying logical/physical sector sizes, I can instead run certain VMs as-is (in qcow2 format).

Is it possible/safe to convert between the raw and qcow2 disk formats? Any special considerations or expected issues with doing this? Please include code examples if possible; I'm no expert.

qemu-img convert has always worked for us, but we're always converting from VHDX or VMDK into qcow2, and not the other way.

How do you slip a username/password into a converted .vdi?

Hello all! I've started to play around with ZFS and VMs. And it looks like on random access it's the same for all raw types.

Typing FS0: and ls shows 0 files for the two raw images.

qemu-img convert -f raw -O qcow2 image.

Is the virtio driver that low-level, though? I'd have thought the guest sector size vs. the host's is irrelevant if you're using qcow2, or are you using LVM (not sure it's relevant there either)?

I set 2TB as the size, but when I type du -h to get the real size of the disk image, it shows 626GB on the host.

I converted an OVA to qcow2, and for some reason RAM usage on my host is very high; even with all the guests shut down it reaches 99% usage.

So I am trying out running a Windows 10 VM guest within a Linux host using QEMU, which runs on top of a SATA SSD drive.
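A hedged sketch of the raw/qcow2 round trip asked about above. All file names are hypothetical scratch files under /tmp; the script makes its own small image so nothing real is touched, and it skips itself if qemu-img is missing:

```shell
#!/bin/sh
# Sketch: converting between raw and qcow2 with qemu-img (hypothetical paths).
set -eu
if ! command -v qemu-img >/dev/null 2>&1; then
    echo "qemu-img not installed; skipping demo"
    exit 0
fi
# Create a small raw image to work with.
qemu-img create -f raw /tmp/demo.raw 64M
# raw -> qcow2 (state -f explicitly: a raw file has no header to sniff)
qemu-img convert -f raw -O qcow2 /tmp/demo.raw /tmp/demo.qcow2
# qcow2 -> raw works the same way in reverse
qemu-img convert -f qcow2 -O raw /tmp/demo.qcow2 /tmp/demo-roundtrip.raw
qemu-img info /tmp/demo.qcow2
```

Conversion reads the whole source and writes a brand-new file, so it's safe for the original; just make sure the VM is shut down and that the destination has room for the output.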
I'm not an expert, but the performance in my Windows 10 VM increased drastically by changing from a qcow2 disk to a raw image, and of course having both host and guest on an SSD is a must. Also make sure to install all the virtio-win drivers.

I've long preferred qcow2 but thought I was giving up some marginal speed doing so.

The two major disk image formats, widely recommended and used, are raw and qcow2.

I selected a zvol as the disk for a virtual machine, and when booting in TrueNAS Scale it only shows the UEFI shell.

That might very well have been my blog you read, and if so: I did a jesus load of testing to find out whether there was any real performance benefit to the more complex "raw" or nearly-raw storage methods that were frequently being boasted about, and determined that there really wasn't, and you should just use qcow2 files because they do "just work" with little or no performance penalty.

Raw for performance.

After some searching I found you can convert the VHDX to a qcow2 file, and this allowed me to boot the image.

For example, btrfs still can't actually tell you the real disk usage or free space.

Is there a way to get a Debian distribution image, with the requirement that the image has to be either qcow2 or raw? By the way, my KVM virtualized platform is NFX, so it is not VirtualBox and not EC2-based.

Important: LVM and raw files are two different things.

Data recovery is a major concern, and simplicity is preferred.

P.S.: qcow2 and raw images do have their uses; setting them up as sparse images on servers with lots of VMs can save a lot of storage.

The bottom line? qcow2 is good enough to use, and I like it because of the flexibility it offers.
Performance might be better using truncate to create the raw file, at least in the short term.

I pretty extensively benchmarked qcow2 vs. zvols, raw LVs, and even raw disk partitions, and saw very little difference in performance.

You trade quite a few qcow2 features for that (sparse allocation, snapshots, ...).

The disk performance in the VM is poor compared to bare metal (appears to be around 1/3 the speed).

A personal ramble on KVM vs. ESXi, and why I finally switched to KVM for my homelab: for me to create a raw disk image I have to open Virt-Manager and change the default from qcow2 to raw, create the VM, then go back and switch the default back. That's what my tired brain is guessing.

If you've found nothing in this topic, your google-fu must suck!

Proxmox vs. Debian with KVM + Cockpit?

qemu-img convert kvm.raw -O qcow2 -p -S 512 -c kvm.qcow2

Because of the volblocksize=32k and ashift=13 (8k), I also get compression.

Tried: 1) qemu-img convert to raw from VDI, 2) VBoxManage cloneHD --format=RAW, and 3) qemu-img convert to raw and then to qcow2 in case KVM prefers that.

For example, I see the CPU GHz boost in Task Manager. Don't know why, but it did.

Download the KVM (qcow2) disk from here.

When you convert that 10TB qcow2 to a raw image, you will need 10TB of space free wherever you are writing it to. Ideally, run TRIM inside your guest first to mark unused space as not allocated. Otherwise you may regain much less than you'd expect, because -S relies on consecutive runs of zeroed-out sectors.

I was planning to convert to raw, but the result is a raw image with 512/512 sector sizes.
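The truncate trick mentioned above can be demonstrated with plain coreutils: a truncate-created raw file is fully sparse, so its apparent size and its actual disk usage differ (file path hypothetical):

```shell
#!/bin/sh
# Sketch: a sparse raw image created with truncate (hypothetical path).
set -eu
truncate -s 1G /tmp/sparse-demo.raw      # apparent size: 1 GiB, no blocks written
ls -lh /tmp/sparse-demo.raw              # shows the 1G apparent size
du -h /tmp/sparse-demo.raw               # shows (near-)zero actual usage
stat -c 'apparent=%s blocks=%b' /tmp/sparse-demo.raw
```

This mirrors the du -h observation earlier in the thread (a 2TB image showing 626GB used on the host): sparse files only consume blocks that have actually been written, which is also why fallocate, which does reserve the blocks, behaves differently.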
Hypervisor: CentOS 6.4 with KVM. Processor: Intel Xeon E3-1230v2 3.30 GHz, 8M Cache, Turbo, Quad Core/8T (69W). Memory: 16GB (4x4GB), 1600MHz, Dual Ranked, Low Volt UDIMM.

I have two ZFS-over-iSCSI shared storage devices and then just a few gigs of local storage.

It was suggested to me on Reddit that using RAW instead of QCOW2 will allow the VM to be faster and have better performance. What do all you KVM/Proxmox users recommend? I've looked around and see differing opinions all over the place.

I don't know whether the following options are required to be stated at the time of creating the qcow2 image, or if they can be added to a configuration file used by KVM during image mount and boot: lazy-refcounts, cache-size, l2-cache-size.

Then for the good old raw vs. qcow2 discussion: it's comparing qcow2 and zvol. I'm not sure of the best way you could modify your code to avoid it. Also, qcow2 supports native VM snapshots under libvirt. Thanks a lot.

Once done, remove the temporary datastore, and from the CLI remove the ZFS filesystem.

Well, qemu isn't a raw image of the disk. They're on the same physical drive, but using a raw LVM LV reduces filesystem overhead.

Anyone have experience migrating from KVM to Hyper-V? I searched the internet but can't find a good article on how to migrate from KVM to Hyper-V. Any solutions? qemu-img convert can, as an example, convert your image to VHDX format.

Perhaps the speed is to be expected. Raw and qcow2 images have their pros, cons, and unique features.

Now copy the qcow2 disk you want to import over the dummy drive, via scp or your method of choice.

In other words, it seems this option is only available for the create subcommand, not the convert one. Really needed those modifications for my work to proceed.

Choose qcow2 if: flexibility, snapshots, and thin provisioning matter more than the last bit of speed.
I'd change this: a qcow2, instead, just keeps growing as you put more data in it.

What's with using QCOW2 over RAW? Thanks for your help! Edit: My use case is Windows 10 gaming with GPU passthrough. Proxmox itself is up to date on the no-subscription repo.

Chrome OS Flex can be emulated in QEMU with KVM and virt-manager.

...which you would lose with raw files.

I mounted a samba share on server 2, but when I try the command qm importdisk 101 diskimg.qcow2 volume1 --format raw, I get an error.

It seems to hinge on whether I want to use RAW or qcow2. Which would be the best qemu-kvm image format that would allow snapshots to be taken efficiently? I am thinking fully-provisioned qcow2, but is there a more appropriate one to use with ZFS? If fully-provisioned qcow2 is the correct choice, should I turn compression off for the dataset that stores the qcow2 files?

Well, qcow2 disk images are thinly provisioned, so you could make it a 10TB drive, and if you only write 1 GB to it, it will only use that much space (with very little overhead).

qcow2 has better snapshot support (with redirect-on-write and such); raw is of course just storing the bytes, so it doesn't have any special support at all.

I use Proxmox on server no. 2, and on server 1 I have a few KVM virtual machines.
I don't know for certain, but based on how Red Hat does its KVM (qcow2) images I'd guess that: the LVM variant consists of an install using LVM, where /dev/sda2 is an LVM PV, with a VG containing LVs for the root fs and swap; the RAW variant consists of an install with no LVM, where /dev/sda2 contains the root fs directly.

qcow2 performance is close to raw with the newer versions, and both are pretty much equally stable, but if you don't need the qcow2 features at all then I suppose you might as well go with raw.

Before setting up this new server, I wanted to benchmark different combinations of record sizes.

Thus, let's understand what they are and their differences.

However, if you convert it to a raw image, then you won't have to worry about it growing.

Spent the whole day trying to fix the lag issue: Windows took 4 hours to install and 2 minutes to boot, and the mouse pointer took 2-4 seconds to register movement.

After deploying qemu-img, it is only one command.

Presumably your disk image format also affects speed.

qemu-img create -f qcow2 -o compression_type=zstd ./test-with-zstd.qcow2 10G
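The creation options that keep coming up in this thread (preallocation, nocow, zstd compression) are all -o flags to qemu-img create. A hedged sketch with hypothetical paths; zstd support needs a reasonably recent qemu-img, so the script tolerates failure there:

```shell
#!/bin/sh
# Sketch: qcow2 creation options discussed above (hypothetical /tmp paths).
set -eu
if ! command -v qemu-img >/dev/null 2>&1; then
    echo "qemu-img not installed; skipping demo"
    exit 0
fi
# Preallocate with fallocate() and set the no-CoW attribute
# (nocow only has an effect on Btrfs; elsewhere it is ignored).
qemu-img create -f qcow2 -o preallocation=falloc,nocow=on /tmp/prealloc-demo.qcow2 64M
# zstd-compressed qcow2; older builds only support zlib, hence the fallback.
qemu-img create -f qcow2 -o compression_type=zstd /tmp/zstd-demo.qcow2 64M \
    || echo "this qemu-img build lacks zstd support"
qemu-img info /tmp/prealloc-demo.qcow2
```

As one comment above observes, compression_type appeared to be accepted only by the create subcommand, not convert; convert's -c flag compresses data using whatever compression type the target image was created with.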
The Windows guest feels better/more responsive compared to the KVM passthrough.

When should we use qcow2? I'm using KVM for all VMs.

You might need to disable the USB tablet option, else the mouse cursor won't work, and the VGA device must be set to virtio.

You can't beat a nice, wide raidz(2|3) stripe for storage efficiency, sequential read/write performance, and guaranteed protection from multiple disk failures.

Create a new VM: enter the Proxmox GUI and create a new VM. Set up as you usually would, but remove all SCSI devices.

Apart from all the corruption bugs, qcow2 is pretty slow compared to raw, and all you gain is snapshots that nobody uses.

qm set {VMID} --scsi0 local-lvm:0,import-from=local:0/debian-12

qm importdisk 201 vmbk.qcow2

If insisting on using qcow2, you should use qemu-img create -o preallocation=falloc,nocow=on. I would definitely verify that's the command before you run it.

The ZVOL is sparse with primarycache=metadata and volblocksize=32k.

Now the main problem I have is how/where it is going to pull vmbk.qcow2 from.

And: qcow2 is the native disk-image format from KVM/QEMU itself, and vmdk is VMware's.

I'm trying to boot a raw disk image that was converted from qcow2. But maybe it'll work with "raw" files as storage.

Seeking advice for a ZFS layout for a mixed workstation/gaming workload.

If you really must convert it, then as per the manpage, use qemu-img convert on the raw dd image.

Well, a dd image of the disk isn't the same thing as a qcow2; you have to attach the qcow2 image to a network block device before copying it to a physical hard disk via dd.
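A sketch of that NBD route. It needs root, the nbd kernel module, and a real image; the image path below is hypothetical, and the script bails out harmlessly when any precondition is missing:

```shell
#!/bin/sh
# Sketch: exposing a qcow2 as a block device via qemu-nbd (hypothetical path).
set -eu
IMG=/path/to/image.qcow2      # hypothetical; substitute your own image
[ "$(id -u)" -eq 0 ] || { echo "not root; skipping"; exit 0; }
command -v qemu-nbd >/dev/null 2>&1 || { echo "qemu-nbd not installed; skipping"; exit 0; }
[ -f "$IMG" ] || { echo "no image at $IMG; skipping"; exit 0; }
modprobe nbd max_part=8 2>/dev/null || { echo "nbd module unavailable; skipping"; exit 0; }
qemu-nbd -c /dev/nbd1 "$IMG" || { echo "could not attach; skipping"; exit 0; }
# Partitions inside the image appear as /dev/nbd1p1, /dev/nbd1p2, ...
# Copying the whole virtual disk onto a physical disk (DESTRUCTIVE for /dev/sdX):
#   dd if=/dev/nbd1 of=/dev/sdX bs=4M status=progress
qemu-nbd -d /dev/nbd1                 # detach when done
```

The same attach/detach pattern is what makes a qcow2 mountable read-only on the host for inspection, as described further down the thread.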
But: if you create a 100 GB disk for your VM, the raw file does use 100 GB on your host system.

I wanted to test qcow2 versus raw image format performance, so I created two virtual machines on my hypervisor; below are the results I encountered. I cannot explain why NFS has similar results.

I'm attempting to migrate from a combination of Hyper-V, VirtualBox and KVM hosts to a single oVirt cluster.

Under Linux KVM, QCOW2 can be worth using despite sometimes lower performance than RAW files, because it enables QEMU-specific features, including VM hibernation.

ZFS SSD Benchmark: RAW IMAGE vs QCOW2 vs ZVOL for KVM.

2 - using the default Proxmox ZFS volume-backed storage. This one I think will be worse, but I want peace of mind to see if I am right and if the guys on Reddit are right.

RAW is equivalent to a VMDK eager-zeroed flat file.

It's mentioned in the changelog: optional zstd compression for qcow2 (enable with compression_type=zstd as a creation option).

I don't have a problem converting the different formats to raw or qcow2.

ZVOL vs QCOW2 with KVM – JRS Systems.

I downloaded the KVM appliance from Dell (qcow2), but when I perform a storage migration from the Proxmox GUI from local storage to my shared storage, or perform a qemu-img convert from qcow2 to raw, the raw size is like 211 GB. The qcow2 size is like 2 GB.

Qcow2 for features.

It's interesting that qcow2 is the same speed on random access as a raw image file or raw on a partition.

Proxmox will put the disk image in the ZFS, converting it to raw format, since Proxmox uses raw files as opposed to qcow2 when the storage is copy-on-write.

I converted it to raw and it fixed it.
You can use LUKS (dm-crypt) on top for data-at-rest encryption, and point KVM at the mapped dm-crypt device while running the VM.

I want to keep using btrfs as the file system on which these images live.

Yes, this is how you use a zvol with KVM.

mercenary_sysadmin is correct about mirrored vdevs being better. They're better for most use cases, but they're not better for all of them.

KVM cache options might affect speed too.

Benchmarked combinations: IDE -- Local-LVM vs CIFS/SMB vs NFS; SATA -- Local-LVM vs CIFS/SMB vs NFS; VirtIO -- Local-LVM vs CIFS/SMB vs NFS; VirtIO SCSI -- Local-LVM vs CIFS/SMB vs NFS.

I tried changing the qcow2 to raw, but surprisingly I don't see a difference in speed.

And if you are really concerned about performance, then a raw file isn't what you want to use either: at least free up a partition, or if you're running qemu-kvm just use the raw disk directly, without converting it.

I didn't use qcow2, as its benefits over a raw image are provided by ZFS.

You need to rename the extension from .bin to .raw, then import the image in virt-manager.

You'd use that with the -o parameter of the create command.
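The LUKS-on-LV idea above, sketched with hypothetical VG/LV and key-file names. luksFormat is destructive to the LV, so the sketch refuses to run unless the device actually exists and you are root:

```shell
#!/bin/sh
# Sketch: LUKS on a thin LV for an encrypted VM disk (all names hypothetical).
set -eu
LV=/dev/vg0/vm-disk           # hypothetical volume group / logical volume
KEY=/root/vm-disk.key         # hypothetical key file
[ "$(id -u)" -eq 0 ] || { echo "not root; skipping"; exit 0; }
command -v cryptsetup >/dev/null 2>&1 || { echo "cryptsetup not installed; skipping"; exit 0; }
[ -b "$LV" ] || { echo "no LV at $LV; skipping"; exit 0; }
# One-time: write a LUKS header onto the LV (DESTROYS existing data on it).
cryptsetup luksFormat --batch-mode --key-file "$KEY" "$LV"
# Map it; the VM then uses the mapper device as a plain raw block device:
cryptsetup open --key-file "$KEY" "$LV" vm-disk-crypt
# In libvirt the disk would point at:  <source dev='/dev/mapper/vm-disk-crypt'/>
```

This is how you keep the copy-on-write/snapshot behavior of the thinpool underneath while the guest only ever sees an ordinary block device.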
...i.e., recordsize for ZFS, cluster_size for QCOW2, and allocation unit size for NTFS, for a Windows 10 guest VM running on top of ZFS + KVM.

However, the choice of format under different conditions can vary between hosting companies, and can be changed accordingly.

After a successful install of W11 (22H2 and 23H2 ISOs), on the first or second reboot the VM gets stuck in "Preparing Automatic Repair".

Researching the best way of hosting my VM images results in "skip everything and use raw", "use ZFS with zvol", "use qcow2 and disable CoW inside btrfs for those image files", or "use qcow2 and disable CoW inside qcow2". This is (was?) how the qcow2 images are configured.

...qcow2, so I'll probably switch that up the next time I make a VM and just resize the .raw file whenever I need more space.

If you're using ZFS, I'd definitely use a ZVOL, which appears to qemu as a raw block device but gets you all the ZFS checksumming and compression.

The choice between a base image + qcow2 overlay vs. multiple full copies depends on your priority. For absolute performance, use fallocated raw images.

The Proxmox docs suggest using the raw format for Windows VMs, because of the big performance gain; qcow2 seems to have performance issues vs. raw.

For some more considerations as to the filesystem/LVM etc., see the answer.

For KVM VMs I found LVM + thinpool a good choice.

Long story short (it almost works): qemu-system-x86_64 -drive format=raw,file=reven_recovery_stable.bin -m 4G -enable-kvm -smp 2 -vga virtio -usb -device nec-usb-xhci,id=xhci

Were all those settings really in place before the VM was started? Maybe try the installer on the virtio ISO to install all drivers.

Should LVM be used for the partitions when creating VM images (e.g., KVM images)? It seems like it adds complexity if you want to, say, mount a qcow2 image on the host when the image has LVM partitions.

You can then use this as the disk for a VM created in QEMU/KVM. Set BIOS to UEFI!
Import the disk: SSH into your Proxmox instance.

Having done several Proxmox (KVM-ish) to VMware migrations, my plan of action is (quite low downtime and relatively fast): create a shared NFS datastore and map it to VMware; make sure the VM can boot from VMware storage controllers (required drivers in kernel or initrd); convert the QCOW2 to RAW on the NFS share.

Conclusion: in my scenario, CIFS/SMB performs better and is more reliable when using the Write Back cache and the VirtIO SCSI storage controller.

So I have this KVM machine using qcow2 as its disk image. I can start it in safe mode with networking and no errors are found, but it is unable to start normally.

I want to try the qcow2 image format but don't want inferior I/O compared to a raw image.

When you convert it from img to qcow2, it should convert it to a thin (or sparse) disk, removing all the unused space. Then dd it to a zvol.

Can't run a converted qcow2-to-raw disk image, CentOS 7 - dracut errors.

I have hugepages enabled and CPUs pinned (according to the die topology; tried different configurations and weirdly did not see any significant performance differences) and isolated (via systemd).

I just noticed you seem to not have virtio-net enabled for your network device.

I would not format the original Windows disk until you have succeeded in this process.

The advantage of qcow2 vs. raw or a physical disk is e.g. snapshot support (or backup) and thin provisioning.

Some benchmarks show a large difference in performance and others show less of a gap (though the consensus is RAW > qcow2, which makes sense since qcow2 has an extra layer).

Totally browsable, but not natively supported in the KVM on unRAID.

The slightly tricky part is creating emulated physical hardware compatible with the VM.

Yes it does; re-read the first sentence: when mixing ZFS and KVM, should you put your virtual machine images on ZVOLs, or on .qcow2 files on plain datasets?
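Putting the Proxmox import steps scattered through this thread together. The VMID, storage names, and image path are hypothetical, and qm only exists on a PVE host, so the sketch skips itself elsewhere:

```shell
#!/bin/sh
# Sketch: importing a downloaded qcow2 into a Proxmox VM (hypothetical names).
set -eu
command -v qm >/dev/null 2>&1 || { echo "not a Proxmox host; skipping"; exit 0; }
IMG=/var/lib/vz/template/debian-12-generic-amd64.qcow2   # hypothetical path
# Classic two-step: import as an unused disk, then attach in the GUI or via qm set.
qm importdisk 101 "$IMG" local-zfs
# Newer PVE (7.2+) can import and attach in one step; the target storage decides
# the on-disk format, so on ZFS or LVM-thin the qcow2 ends up converted to raw.
qm set 101 --scsi0 "local-zfs:0,import-from=$IMG"
```

This matches what several comments report: Proxmox converts the image into the storage's native format during the move, which is why a qcow2 lands on a ZFS datastore as raw.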
I've read the post about qcow2 vs. zvol, but because libvirt's ZFS storage backend allocates ZVOLs, I decided to play around with ZVOLs a bit more first.

Another great alternative is StarWind VSAN, which can convert QCOW2 or RAW images directly on the Hyper-V host.

This article doesn't show the qcow2 hosted on a ZFS filesystem.

*Because I'm using PCI passthrough via OVMF, I cannot make use of snapshots via qcow2.

Copy it to your Proxmox server (you can do this with scp), then expand the xz file with unxz.

By the way, write speed may not be representative, because KVM often uses writeback caching by default and will report a write as complete as soon as the data is present in the host page cache.

As qcow2 is a simple file, it's easier to manage.

In the same boat: I downloaded the qcow2 image so I can test why something is breaking on the headless servers, and after converting it for VirtualBox it has a password.

qemu-img convert -f qcow2 debian-11-generic-amd64.qcow2 -O vdi debian-11-generic-amd64.vdi

But the answer isn't that easy.

Er, do you mean "vs. a raw file"? If you do mean LVM, then yes, there's a performance difference.

Both raw and qcow2 are popular image formats in the KVM environment.

Or if you are using Hyper-V (then you're in the wrong sub!), it would be as per your instructions: qemu-img convert dd.raw -O vhdx hyperv.vhdx
For immediate help and problem solving, please join us at https://discourse.practicalzfs.com with the ZFS community as well.

Thread: "Raw VS qcow2", started by masterdaweb, May 10, 2017, in Proxmox VE: Installation.

E: important info though. I would advise the exact opposite.

I'm converting the VM from KVM qcow2 to RAW on Proxmox.

I don't really like raw images though.

Hi all: LVM has less overhead than qcow2 disks and handles it in an easy way. I'd probably shove it on an LVM volume and pass through a device instead of a raw disk, but that's just me.

About the qcow2 format: I'm adding a physical disk (with an empty Btrfs partition) to my Linux KVM VM to install a bitcoin full node. Is the qcow2 format readable on another PC with no KVM installed?

Damaged RAW file system in new NVMe 2TB?

Just try it and get your own impressions.

I use btrfs for my newly created homelab where I want to host some VMs with qemu/kvm.

Even if I do continue with qcow2, I'd like to switch sector sizes on the new host using the same qcow2 files if possible.

btrfs on the host, qcow2 for the VM disk.

This benchmark shows the performance of a ZFS pool providing storage to a KVM virtual machine with three different formats: raw image files on a plain dataset, a qcow2 image file on a plain dataset, and a zvol.

Raw, because qcow adds overhead.

Fastest is to pass through an NVMe as a PCI device; an SSD passed as a raw disk is still handled by Linux, and not much faster than LVM.

If that's not it, then I don't know either.

qemu-system-x86_64 -m 24G -machine type=q35,accel=hvf -smp maxcpus=8,cpus=7,cores=4,threads=2 -cpu Nehalem hardDiskImage.

It is writing a new file.
So a copy-on-write implementation such as QCOW2 allocates more space because it sees you writing to unallocated disk space.

qm importdisk <vmid> <vm image> <pool> — I assume that I don't need to do a format conversion since it is in qcow2, or I can use RAW too.

Many production VMs are made of qcow2 disks on NFS. This is very reliable, even more so with direct access between the hypervisors and the NFS server (i.e. a dedicated, non-routed storage network).

I would advise against using qcow2 containers on ZFS or btrfs. Those 'filesystems' are so complex and unstable that one of the many features will trigger corruption.

This leaves us with RAW files on OpenZFS datasets, vs. OpenZFS ZVOLs passed directly down to the VM as block devices (on Linux) or character devices (on FreeBSD).

Still, I wanted to mention the LVM option, since it allows snapshots etc.

Convert raw/qcow2 disk image formats: the needed conversion tool is qemu-img.

QCOW2 files are easier to provision: you don't have to worry about refreservation keeping you from taking snapshots; they're not significantly more difficult to mount offline (modprobe nbd; qemu-nbd -c /dev/nbd0 /path/to/image.qcow2; mount -o ro /dev/nbd0 /mnt/image, or similar); and probably most importantly, filling the underlying storage beneath a qcow2 won't crash the guest.

The answer is it kind of depends on how those KVM machines are managed, as well as how the backend VM disk (i.e. raw or qcow2, etc.) is handled.

Raw vs Qcow2: QEMU/KVM provides support for various image formats.
raw is faster than qcow2.

qemu-img convert -p -f qcow2 -O qcow2 -S 0 original-vm-disk.qcow2 new-vm-disk.qcow2

(by importing them as if they were raw disk images)

And although I don't typically do hugely disk-intensive stuff (except on a dedicated real Windows machine), I've never had a problem.

I once tried btrfs and zfs. So I don't think this is ever going to be relaxed.

Here is how you mount a qcow2 disk image on a network block device (/dev/nbd1): modprobe nbd, then qemu-nbd -c /dev/nbd1 'your-qcow2-image'. From there you can copy the qcow2 image's contents to a physical hard disk.

I have tried various tricks from Reddit.

Raw images and qcow2 images can each be converted into the other format.

I have recently seen it be incorrect by as much as 20% of the whole volume size.

Be aware of stacking multiple CoW layers, e.g. a qcow2 on top of btrfs or ZFS.

Some years on a totally non-optimized virtual file, sometimes on raw whole SSDs.

I'd like to know: is there an advantage to RAW on a separate dataset with compression, or are there better options? I'm willing to trade some of the performance for being able to use a "sparse" image.

You could dd your physical disk into a raw image and boot it.

The best I've managed to achieve with this is to use a raw image and set both the recordsize for ZFS to 8K and format NTFS with 8K blocks (you can do this for the Windows install partition).

A .qcow2 file tuned to use 8K clusters — matching our 8K recordsize, and the 8K underlying hardware blocksize of the Samsung 850 Pro drives in our vdev — produced tremendously better results. With the tuned qcow2, we more than tripled the performance of the zvol, going from 50.5 MB/sec (zvol) to 170 MB/sec (8K-tuned qcow2)!

How are you looking at the qcow2 file to determine its size? If you are checking with ls, try instead du -hs /var/kvm/Win2019.qcow2.
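The 8K tuning described above, matching the qcow2 cluster size to an 8K ZFS recordsize, is a create-time option. A sketch with hypothetical names; the zfs line stays commented out since it needs a real pool:

```shell
#!/bin/sh
# Sketch: qcow2 with 8K clusters to match an 8K ZFS recordsize (hypothetical names).
set -eu
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not installed; skipping"; exit 0; }
# On the host you would first make a dataset with a matching recordsize, e.g.:
#   zfs create -o recordsize=8k -o compression=lz4 tank/vmimages   # hypothetical pool
# Then create the qcow2 with 8K clusters instead of the default 64K:
qemu-img create -f qcow2 -o cluster_size=8k /tmp/tuned-demo.qcow2 64M
qemu-img info /tmp/tuned-demo.qcow2 | grep -i cluster
```

Smaller clusters mean more L2 metadata to track, so the trade-off only pays off when the cluster size actually lines up with the dataset recordsize and the guest's block size, as in the benchmark quoted above.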
The friend who was dealing with them got the opportunity to temporarily blow one away and put plain Ubuntu + KVM in its place, and even on exactly the same hardware both ways, confirmed a MAJOR performance difference in Ubuntu's favor.

I found a nice site describing qcow2. By the way: you can also use raw; it is the fastest of all three formats.

Personally, I use qcow2 files on an NVMe for flexibility.

The .bin file is actually a qcow2 image, so you'll need to run it with qemu.

What I'm trying to do is import into Proxmox the virtual machines that I now use under KVM on the other server.