Here's the vmkernel.log for the machine. It's stable until I power on 5-6 VMs at once, then it crashes. If I leave everything alone and turn on the VMs one by one, everything stays stable until I start working on something; then the storage controller seems to crash. The log below starts just after I put load on the machine by powering up 5-6 VMs at the same time. I can reproduce the same result by turning them on one by one and initiating a bunch of transfers through the VMs, but this was the fastest way to replicate the issue. If I'm careful and only do one transfer or one task at a time, it stays stable. So far, I've tried the following solutions but no luck:
1. Updated ESXi 6.0 (originally installed)
2. Upgraded to 6.5 and then 6.7 (with customized drivers)
3. Created new VMs and installed a fresh OS on a couple of them, to rule out errors carried over from old VM configs
4. Moved the PCIe adapter to another slot
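For reference, this is roughly how I pull the storage reset/abort lines out of the log when reproducing the crash. The sample log lines below are made up for illustration; they are not from my actual vmkernel.log:

```shell
# Write a few invented vmkernel.log-style lines to a temp file
# so the grep below has something to run against.
cat > /tmp/sample-vmkernel.log <<'EOF'
2019-01-01T10:00:01Z cpu4:33289)ScsiDeviceIO: normal I/O completed
2019-01-01T10:00:02Z cpu4:33289)NMP: nmp_ThrottleLogForDevice: Cmd 0x2a failed
2019-01-01T10:00:03Z cpu6:33291)ScsiDeviceIO: abort issued on path vmhba2:C0:T0:L0
2019-01-01T10:00:04Z cpu6:33291)NMP: device reset requested
EOF

# Case-insensitive match on the usual trouble keywords; on a real host
# you would point this at /var/log/vmkernel.log instead.
grep -iE 'nmp|reset|abort' /tmp/sample-vmkernel.log
```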
HP Z600, dual-socket Xeon X5675
96GB RAM / BIOS updated to the most recent version
**No stability issues without this IOCrest NVMe controller.** If I use the same NVMe modules in separate PCIe slots with an individual adapter per module, it works fine, but I don't get anywhere near the speeds of this setup. So this card is really good when it stays stable.
Currently running ESXi 6.7, but I've tried ESXi 6.0 and 6.5 as well with the same or worse results.
I know this is an old machine, but it's my lab environment, so any kind of help is highly appreciated.
You, my friend...are right! I just did that, and so far I'm seeing no problems. The vmkernel log still shows the NMP device-reset throttling line every once in a while, but so far it's looking solid. I moved one NVMe to a separate PCIe slot and left the other two NVMes on that adapter. I'll keep watching for issues, but so far it looks good to me. Do you happen to know what capacity this NVMe adapter can handle? Say I install two 4TB NVMe modules; would that be fine? Thanks a lot again!
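For anyone checking whether the host actually sees a drive's full capacity: `esxcli storage core device list` reports a `Size` field in MiB, so a quick conversion tells you what that works out to. The size value below is a made-up figure for a nominal 4TB module, not output from my host:

```shell
# Convert a 'Size' value as reported by esxcli (in MiB) to TiB.
# 3815447 MiB is an invented example figure for a nominal 4TB module.
awk 'BEGIN { printf "%.2f TiB\n", 3815447 / 1024 / 1024 }'
# prints "3.64 TiB"
```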