Hello, sorry to intervene, but before updating to version 8.0U2, did you make sure that your infrastructure's configuration was supported (or at least supportable), especially with things like vSAN in play? Regards, Ferdinando
After updating ESXi to 8.0.2 (HP G9 cluster with 5 hosts), 2 hosts are down and some VMs are inaccessible (I lost my files). I have put the logs and error screenshots here. Can anybody help me? (Apologies for my English.)
@efpbe, Are you sure that is the 'vSAN Default Storage Policy' applied to that object? (I think it is still possible to have multiple of these in the SPBM inventory.) Asking because that object does not have the 'No preference' rule on it - you should change it to 'No preference', apply it to all objects, and validate (from 'esxcli vsan debug object list --all' output) that this has been applied to everything (not just what is registered in inventory!) that has the 'Dedup&Compression' rule. Then you should be able to reformat to Compression only (and change the policy to that too, though that is not mandatory).
@lElOUCHE_79, Objects (e.g. .vmdk, .vswp, namespaces) stored on the vsanDatastore with the default FTT=1, RAID-1 policy are basically stored as two complete copies (so that one copy can be unavailable while the data remains accessible from the other replica). Sorry, but I am unsure what you mean regarding the source etc. - are you referring to what happens when migrating objects to the vsanDatastore? This article outlines the basic concepts well: https://core.vmware.com/blog/vsan-objects-and-components-revisited
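To answer the capacity side of the question with a rough illustration (a sketch only, not official sizing guidance - it ignores witness components, metadata, and slack-space overhead): with RAID-1 mirroring, each object consumes roughly (FTT + 1) times its logical size in raw vSAN capacity, so yes, the source and the copy together count as double.

```python
# Rough sketch of vSAN RAID-1 capacity consumption (illustrative only;
# ignores witness components, metadata, and slack-space overhead).

def raw_capacity_consumed_gb(logical_size_gb: float, ftt: int = 1) -> float:
    """RAID-1 mirroring keeps (ftt + 1) full replicas of each object."""
    replicas = ftt + 1
    return logical_size_gb * replicas

# A 100 GB VMDK with the default FTT=1, RAID-1 policy consumes ~200 GB raw:
print(raw_capacity_consumed_gb(100, ftt=1))  # 200.0
```

So a guest that sees 100 GB of disk eats roughly 200 GB of vsanDatastore capacity under the default policy.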
Hello, I know it’s ambiguous, but I’ll give it a shot. Regarding the default vSAN policy, I was told that vSAN objects are written in duplicate. I wanted to understand: when objects are written in duplicate, does that mean the source and the copy are counted together as double, or is the duplicate written independently of the source? Is there any documentation that can help me better understand vSAN objects?
So it took me a minute to realize you were specifically talking about applying a vSAN license PAC to ESXi hosts. Got it. I have only encountered that issue when upgrading from trial to full licenses in a lab environment. vSAN licenses are per core, and definitely vary per version lately. But a PAC shouldn't run into those issues. It's possible that the vendor used the wrong PAC, or more likely an outdated set of codes. VMware has gotten tighter about license keys ever since they killed off the 'engineering' licenses that used to be good for 90 days at a stretch. That's where I ran into things, when that engineering (trial) license expired we had to work with the vendor to secure the correct PAC and get our lab gear fully licensed (because we were promoting a prototype into production as a solution). It sounds like you're stuck waiting on your OEM/vendor to resolve the issue, but if your purchase included any VMware 'pro' credits (if they still do that) it might be worth activating that to get a VMware engineer on your side to help address the issue outside of the normal ticket process.
Hi, I am using the function QueryPhysicalVsanDisks in order to get vSAN health metrics. Of the retrieved metrics, one specific metric, latencyDeviation, doesn't convey its meaning or its unit (milliseconds or otherwise). As this metric sounds important, it would be great if someone could shed some light on it! Any suggestions or feedback are appreciated! Thanks in advance, Niranjan
Hi guys, An OEM provided us with some "Advanced to Enterprise" licenses for our ESXi hosts for stretched cluster purposes. Four of our hosts are on Advanced licenses, which is why the OEM sold us the licenses to upgrade to the Enterprise version. The problem is that our OEM was not able to enter the upgrade key (or PAC key) into VMware's portal - it keeps saying invalid. They couldn't fix the issue, so they raised a ticket with VMware and everything is pretty much on hold. Has anyone encountered a similar issue before? If so, what fixes or checks would you recommend?
Dell got back to me and basically told me the thing we already knew: "The hardware is performing as expected. While in the Support Live Image, all the drives observed extremely low latency times in the tests you performed, and the overall performance was very good and pretty consistent. All of this does point toward ESXi/VMware being the bottleneck, unfortunately." I updated my ticket with VMware, but if I don't hear back from them I am unsure what to do next.
Yeah, I absolutely agree that it can be a problem with ESXi and PowerEdge x60 systems. We didn't see the latency problems when local Microsoft Windows with the latest drivers was installed. The problem only occurred on VMware ESXi, with or without PCIe passthrough. Maybe there is something wrong with the PCIe bus or something else. The problem exists on only one connector on the mainboard, and only with VMware.
Yes, you're right! I think it's because the vSAN Default Storage Policy was in this status: but when I set the cluster to "Compression only" and also changed the default policy to "Compression only", I got this error: I'm not aware of which best practices I must follow to migrate from deduplication & compression to compression only.
I increased the jobs to 16; this is what I got back. I also ran this to measure random read/write performance:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=sbd --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
Results The numbers look fine to me unless I am misreading something. I would also note this test was not done with ESXi installed; I am using a Dell image running Rocky Linux 8.8 with all the appropriate drivers and test utilities on it. After running all these tests, I get the feeling this is not specifically hardware related so much as it is ESXi related, with drivers or something.
So, unless I am mistaken, those results look good? Which backplane connector were those taken from? What does it look like if you set numjobs= to a higher value?
I think a 128K block size is too small to reproduce the issue. Can you please check the results with 256K, 512K, and 1MB block sizes? We saw the problem with block sizes bigger than 512K.
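To keep the sweep consistent, here is a small helper (a sketch; the flags are reused from the fio command already posted in this thread, and --filename/--numjobs are assumptions you should adjust for your rig) that builds the same fio command line for each of those block sizes:

```python
# Build fio command lines for a block-size sweep, reusing the flags from
# the earlier random read/write test. --filename and --numjobs here are
# placeholders; adjust them for your own setup before running.
def fio_cmd(bs: str, filename: str = "sbd", numjobs: int = 16) -> str:
    return (
        "fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 "
        f"--name=test --filename={filename} --bs={bs} --iodepth=64 "
        f"--size=4G --numjobs={numjobs} --readwrite=randrw --rwmixread=75"
    )

for bs in ["256k", "512k", "1m"]:
    print(fio_cmd(bs))
```

Running the three commands it prints should make the 256K/512K/1M comparison apples-to-apples with the earlier 4K run.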