Which NVMe performance issues are you referring to?
I must have posted in the wrong thread... new to the interface...
I am talking about ESXi 6.7 -- our DB on NVMe drives delivers 1/10 or less of the performance we get from RAID5 disks.
Can you please provide some details about the HW you're using?
Best regards
I'm actually booting ESXi 6.7 off NVMe direct attach on the motherboard and haven't seen any performance issues yet. It's fairly snappy from the looks of things. My beef is with the RAID support, but that's for another thread. You might want to check your PCIe cards and processor lanes and make sure you're not sharing the same lanes as your NVMe drive. If you've got the M.2 drives attached to a PCIe card, check the bifurcation settings as well as which processor the slot you're using is assigned to.
We are using an HP ProLiant Gen8 server with a PCIe x4 (4-lane) daughter board into which a Samsung 970 Pro NVMe drive is plugged.
The same setup works perfectly without VMware ESXi, when we run CentOS 7 on bare hardware. This is why we had to drop ESXi from production use.
But cost considerations and the practicalities of using VMware motivate me to do some more testing with newer patches and to reach out to the community.
FYI: I have just upgraded to the latest ESXi patch release of Dec 2019, and our engineers will run the benchmarks again.
We have a setup on this server with RAID5 (4x 10k disks), 1x SSD, RAID1 (2x 15k disks), and one 512GB Samsung NVMe drive.
So far we are getting the best DB performance out of the RAID5 array, followed by the RAID1 array made of 15k HDDs. All disks are HP-branded.
Will share here next week some more info.
Just as an experiment, try doing a passthrough of the NVMe card to a VM and run benchmarks there too. Also, with that latest version of ESXi, you should be able to add an NVMe controller and NVMe drives to a VM (direct), as well as add a SCSI disk with a policy (virtual PMem), to try to get full use out of the NVMe drives.
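To make the bare-metal vs. passthrough comparison concrete, you could run the same fio workload on the CentOS 7 bare-metal install and again inside the VM, and compare the numbers directly. A minimal job-file sketch follows; the device path `/dev/nvme0n1`, runtime, and queue depths are assumptions to adjust for your setup, and only read jobs are shown because running write tests against a raw device destroys its contents:

```
; hypothetical fio job file -- run with: fio nvme-compare.fio
; /dev/nvme0n1 is an assumed device path; verify before running
[global]
ioengine=libaio
direct=1
runtime=60
time_based

; sequential throughput, large blocks
[seq-read]
filename=/dev/nvme0n1
rw=read
bs=1M
iodepth=32

; random IOPS, small blocks -- closer to a DB access pattern
[rand-read]
filename=/dev/nvme0n1
rw=randread
bs=4k
iodepth=32
numjobs=4
```

If the random-read numbers match between bare metal and the passthrough VM but drop sharply with a virtual disk, that would point at the storage virtualization layer rather than the drive or PCIe lanes.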