Hi there, log in via the DCUI, enable the ESXi Shell, log in as root, and enable Promiscuous Mode on your vSwitch as described in the vSphere Documentation Center. Good luck!
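If you prefer doing it from the ESXi Shell rather than the client, a sketch of the equivalent esxcli configuration commands (the vSwitch name vSwitch0 is an assumption, adjust to yours):

```shell
# Allow promiscuous mode on a standard vSwitch (name assumed to be vSwitch0)
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true

# Verify the resulting security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```

Note this sets the policy at the vSwitch level; individual port groups can still override it.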
Hello, I did a quick decode of the MCE you got (0xf200001044100e0f), and the debug output is: Observer: Generic while processing Generic Error during Other transaction on Generic Cache. Request Did Not Time Out. It is highly probable that your second processor's (PCPU #2) cache is corrupt. You can also see in the error stack that it always occurs right after the CPU Scheduler tries to allocate resources. Contact your hardware vendor and have them replace the CPU, then run some extended stress testing on the host. Alternatively, you can confirm the diagnosis by stress testing it in its current state and waiting for it to fail.
Hi guys, this seems to be related to the QLogic controller (qla2 appears in the last messages of the crash stack). Did you update your device's firmware with the latest HP SPP 2015.04.0? If that does not help, you could try loading ESXi 5.5 Update 1, for example, and then updating it to Update 2.
Hi there, a general rule of thumb is to give your ESXi host double the resources of your largest VM. This means your ESXi hosts should ideally be scaled to 24 vCPUs (2 x 12 cores or 4 x 6 cores, though there are NUMA disparity, performance implications, etc. to consider) and 192 GB RAM. Of course, depending on budget and other factors, this might not always be achievable, but hosts with at least 1.5x the resources of the largest VM should still be supplied to prevent bottlenecks.

You can create a DRS anti-affinity rule for the larger VMs ("Separate Virtual Machines") so that two of them don't choke a single ESXi host, along with the smaller VMs running alongside them. If DRS migrations are failing, I suggest beefing up your ESXi hosts with dedicated NICs for vMotion. Disproportionate ESXi hosts wouldn't help in the cluster; another option would be to create one cluster for "Monster VMs" (powerhouses) and one for "Standard DBs" (less powerful ESXi hosts).

As for scaling up versus scaling out, keep in mind that these are database servers, so it depends on what you need to avoid when an ESXi host fails: having many VMs down at once, or just a few with a quick failback?
Hi there, can you post your vmkernel.log from /var/log and vmware.log from /vmfs/volumes/<datastore>/<vmdir>? There's bound to be something in there. Thanks in advance.
Yes. Once you set things in motion, the blocks start being merged and there is no way back. The same goes for the "Consolidate Virtual Disks" option, which searches for any stale snapshots and merges them into the base disk.
Beware, mate: numa.vcpu.min=4 dictates greater than, not greater than or equal. I was confused by this the first time as well, wondering why vNUMA would only kick in after 9 vCPUs were assigned and not 8, as the logic would dictate.
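For reference, this is a per-VM advanced option in the .vmx file (the value 4 is just the example from above; with the greater-than semantics it means vNUMA kicks in at 5 vCPUs, not 4):

```
numa.vcpu.min = "4"
```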
Hi there, first, if you would like to pin your vCPUs to the actual physical cores, you would use 0,2,8,10 (the even numbers, including 0, are the true physical cores, while the odd numbers are their hyper-threaded siblings). This is an interesting experiment you are proposing; however, what real-world use would it have? ESXi always tries to keep a VM on a single physical CPU where possible and only starts to span NUMA nodes when the vCPU count is higher than the physical core count of one node. Perhaps that applies in this example as well. You could also try changing vNUMA to kick in at >3 vCPUs instead of the standard >8 vCPUs. Is your vCPU layout set to 2 sockets x 2 cores?
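A sketch of the corresponding .vmx entry for the pinning described above (the core numbering is this particular host's, adjust to your topology):

```
sched.cpu.affinity = "0,2,8,10"
```

Keep in mind that setting a CPU affinity prevents vMotion for that VM and constrains the scheduler, so it's really only for experiments like this one.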
You say that the server you want to apply FT to is CPU intensive. This likely also means frequent access to RAM. Standard RAM speeds are in the tens of gigabytes per second, with latencies of a few nanoseconds; transferring the memory at ~100 MB/s over 1 GbE will not cut it. You will really need to get 10 GbE and test it out for yourself to see how much the throughput matters. I also believe the FT traffic is not compressed in any way. Good luck!
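As a rough back-of-the-envelope check (the bandwidth figures below are illustrative assumptions, not measurements):

```shell
#!/bin/sh
# Compare assumed memory bandwidth against usable network link throughput
ram_mb_s=20000      # ~20 GB/s memory bandwidth (illustrative)
gbe_mb_s=110        # ~1 GbE usable throughput in MB/s
tengbe_mb_s=1100    # ~10 GbE usable throughput in MB/s

echo "RAM is ~$((ram_mb_s / gbe_mb_s))x faster than 1 GbE"
echo "RAM is ~$((ram_mb_s / tengbe_mb_s))x faster than 10 GbE"
```

Even 10 GbE is over an order of magnitude slower than the memory it has to shadow, which is why an FT-protected, memory-hungry VM saturates a 1 GbE link so easily.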
You have initiated an operation that commits all of the delta disk's blocks to the base disk. This means cancelling the operation at any point could leave the base disk in an inconsistent state, resulting in a VM with a corrupt filesystem, one that refuses to boot, etc. Hopefully this isn't anything critical; I wish you better luck next time.
Back when I was just getting started with virtualization, I ended up the same way. A classic SATA hard drive will not provide enough IOPS to run more than one VM actively at the same time; the ~100 IOPS just don't cut it. Furthermore, vCenter is quite an IO-intensive little beast, so you will see a huge benefit from installing that VM on the SSD. Better yet, try keeping all your VMs there in thin-provisioned format, and put only the most idle VMs (e.g. a domain controller) on your spinning hard drive. Good luck!
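If you want to move an existing VM's disk to the SSD datastore in thin format, a sketch using vmkfstools from the ESXi Shell (datastore and VM names are placeholders; power the VM off first):

```shell
# Clone an existing VMDK to the SSD datastore as thin-provisioned
# (paths are hypothetical examples, adjust to your datastores)
vmkfstools -i /vmfs/volumes/hdd-datastore/myvm/myvm.vmdk \
           -d thin /vmfs/volumes/ssd-datastore/myvm/myvm.vmdk
```

After verifying the clone, point the VM at the new disk and delete the old one.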
One thing to beware of: VMware does not recommend using PVSCSI with directly attached storage (virtual disks on local RAID controllers). Otherwise, good luck!
Hello, take a look at my blog post to learn a bit more about how HT benefits ESXi: https://vmxp.wordpress.com/2015/01/08/hyperthreading-what-is-it-and-does-it-benefit-esxi/ Basically, it helps the hypervisor a lot with spreading out the workload.
Another thing occurred to me: is the /sysroot folder even present in the filesystem? Can you please check? Perhaps it got "lost" somehow, and you can't mount onto a non-existent mount point; it must point to an existing (empty) directory. Otherwise, everything looks fine here. You could also try replacing /dev/sda2 in /etc/fstab with its UUID, just to be really sure the right drive is referenced at boot. Back everything up before you do so, and double-check that the UUID is right. You would replace the device path with the UUID, so the /dev/sda2 line would instead look like this:

UUID=54e03a9e-c4e6-4647-9173-088b32729888 / ext3 defaults 0 1

Fingers crossed and good luck!
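To look up the UUID in the first place, a quick sketch (the device path /dev/sda2 is assumed from your fstab):

```shell
# Print the filesystem UUID of the partition (device path is an assumption)
blkid /dev/sda2

# Alternatively, list the by-uuid symlinks and see which one points at sda2
ls -l /dev/disk/by-uuid/
```

Both should show the same UUID; copy it verbatim into the fstab line.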
Hmm, okay, then this is out of the question. Have you tried connecting to the VRTX's switch modules and observing the output while powering on your second node? That could give away valuable information. If you are getting interface-down messages, it would be good to contact the vendor to have it fixed.
Hello, can you please post vpxd.log from the vCenter Server, plus vpxa.log, hostd.log and vmkernel.log from the ESXi host, right after you have tried to connect the ESXi host to vCenter Server?