JarryG's Accepted Solutions

I have never seen a definitive answer to this question, probably because there is no definitive answer. With my raid controllers [LSI-8704EM2 / LSI-8708EM2] you can gain more performance by switching your old SATA/3Gbit controller for a SATA/6Gbit one.

"VMFS5 is using a 1MB block size"
True, but VMFS5 also uses 8kB sub-blocks. Moreover, very small files (<1kB) can be stored in metadata.

"I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore"
Where did you get the idea *this* helps to avoid fragmentation???

"I believe this would be the optimal situation in regards to the datastore?"
I do not believe this would be optimal, and I'm not sure there is a generally optimal value. Setting strip size equal to the VMFS5 block size might not be the best option, because it is quite small. Moreover, there are other things you should consider, i.e.:
- hard-drive sector size (mostly 512B or 4kB)
- SSD read/erase block size (highly vendor-specific)
- VM filesystem sector/block size (depends on OS)
- VM filesystem load (depends on what your VM is doing)
- size of your raid controller's on-board cache
- type of raid array (0, 1, 10, 5, 6, ...)
- etc, etc
IMHO, if you do not have time for testing, just stick with the default value. With a blind shot you can just make things worse...
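To illustrate the sub-block point above with a bit of arithmetic (file sizes here are hypothetical examples; the 1MB block and 8kB sub-block figures come from the answer):

```shell
# space a small file occupies on vmfs5: it rounds up to an 8 kB sub-block,
# not a full 1 MB block (block/sub-block sizes as stated above)
FILE_KB=3; SUBBLOCK_KB=8
echo $(( (FILE_KB + SUBBLOCK_KB - 1) / SUBBLOCK_KB * SUBBLOCK_KB ))   # -> 8
```

So a 3kB file costs one 8kB sub-block, and a 20kB file would cost three (24kB), instead of a full 1MB block each.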
An ESXi host does not require any host-cache. VMs might. How much? Depends on how much you over-commit host RAM. If you do not over-commit RAM, you do not need host-cache. And if you reserve all vRAM to VMs, you do not even need a virtual machine swap file at all (with 100% vRAM reservation it is not created). In the worst case (when you do over-commit RAM, no sharing/compression/ballooning works, and all VMs use all the vRAM you assigned to them) you might need an SSD host-cache of the size:

"sum_of_vRAM_assigned_to_all_VMs" - "host_RAM_available_for_VMs"

But if your SSD host-cache is smaller, it will be used anyway. And when it is full, "normal" swapping will be used (with VM swap files).
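As a quick worked example of that worst-case formula (the GB figures are made-up numbers, not from the question):

```shell
# worst-case SSD host-cache size = assigned vRAM - host RAM available for VMs
VRAM_ALL_VMS_GB=96        # example: sum of vRAM over all VMs
HOST_RAM_FOR_VMS_GB=64    # example: host RAM left for VMs
echo $(( VRAM_ALL_VMS_GB - HOST_RAM_FOR_VMS_GB ))   # -> 32 (GB of host-cache)
```

Anything smaller than that still helps; it just falls back to regular VM swap files once the cache is full.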
ESXi does not remove that file on boot-up; it simply does not save it. ESXi runs from memory, so if you created some file (i.e. /.profile) it exists only in the "memory-disk", not in the disk image which is loaded again at the next boot-up. Either create a custom VIB and install it like any other, or use rc.local, which is persistent (any changes you make to this file survive a reboot). You can create & save that file somewhere else and use rc.local to copy it to /, or use rc.local with shell commands to create .profile at every boot-up. Wait a minute, you are using ESXi 6.0, right? I'm not sure if there is an /etc/rc.local there, but it used to be in 5.0/5.5...
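A sketch of the second variant (recreating the file at every boot). Both the boot-script path and the .profile content here are assumptions to adapt, not from the question; on 5.x the persistent hook is commonly /etc/rc.local.d/local.sh:

```shell
#!/bin/sh
# fragment for /etc/rc.local.d/local.sh (path may differ between versions):
# recreate /.profile in the in-memory filesystem on every boot-up.
# The alias below is just example content.
cat > /.profile <<'EOF'
alias ll='ls -l'
EOF
```

Check first which rc.local variant your build actually has before relying on this.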
First of all, your 5.1 host is terribly outdated! You are running it at the "Update 1" level. Why? There have been many patches since then, so grab the latest one (they are cumulative) and update ESXi to the latest level. Second, in your PSOD I see "E1000". There was a known problem with this NIC, which was fixed in a later "Update 2" patch. So one more reason to update your ESXi host to the latest level! http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2059053
Of course it cannot recognize this "array", because ESXi does NOT support software-/fake-/bios-/pseudo-raid (that's the "raid" you have on your motherboard). Only true hardware raid controllers are supported...
If you want to get rid of the physical switch, simply do not use it. Use an ethernet cable to connect the pNICs of those two ESXi hosts directly. If your NICs have auto-MDI(X) capability, you do not need anything special. Some older NICs might still require a crossover ethernet cable, but even that is not necessary with new hardware...
I'm using an M5016 (aka LSI-9266/9271), but it might be similar for the M5015 (LSI-9260): I updated bios/driver/smis/storcli all to the latest version, and after restarting ESXi the whole "Storage" section disappeared from vSphere Client -> Configuration -> Health Status. But despite that, "storcli /c0 show all" could still retrieve complete info about the controller, arrays and disks. Then I downgraded the smis-provider from VMW-ESX-5.x.0-lsiprovider-500.04.V0.55-0006-2619867.zip one version back to VMW-ESX-5.x.0-lsiprovider-500.04.V0.54-0004-2395881.zip. I restarted ESXi, and the Storage health status was back again, with complete information (arrays, disks, battery, etc)! So I suppose there is some problem with the latest LSI smis-provider. Try using an older smis-provider and see if it works for you...
Try this link (you have to log in): Download VMware vSphere Hypervisor for Free
I'm afraid there is nothing like a "best strip size" for a VM datastore. It all depends on the usage scenario (many small files, or fewer big files? read/write ratio? etc.). There are, though, some other values you should consider:
- disk sector size: 512B, 4kB (can be variable for SAS drives)
- primary filesystem block size: for vmfs5 it is 1MB (with 8kB sub-blocks)
- secondary filesystem block size (that of the VMs): depends on the filesystem used (i.e. btrfs has a default block size of 16kB, ntfs 4kB, etc.)
- SSD erase-block size: this differs between vendors, mostly 4MB, but I have seen values between 1MB and 8MB
- cache size of your raid controller (can be anything between zero and a few GB)
If you do not have time for testing, just pick the default value the raid controller offers. If that is what the screenshot shows, I think it is quite a good default choice and it makes sense to me (full stripe size equals the vmfs5 block size, 1MB). It does not make sense to go under this value, but I would increase it if you store predominantly large files...
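To make the "full stripe equals vmfs5 block size" remark concrete (the RAID layout below is a hypothetical example, not taken from the screenshot):

```shell
# full stripe size = strip size * number of data disks
# example: RAID5 over 5 disks -> 4 data disks + 1 parity
STRIP_KB=256; DATA_DISKS=4
echo $(( STRIP_KB * DATA_DISKS ))   # -> 1024 (kB, i.e. 1 MB full stripe)
```

So with that layout, a 256kB strip gives a 1MB full stripe, matching the vmfs5 block size.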
I opened a case with LSI support and basically got this info: the SMIS provider will be released "in about one month". I have the impression LSI simply "missed" the release of ESXi 6.0. The support technician was very surprised when I told him 6.0 had already been released about a month ago...
C612 is an Intel chipset containing RSTe (Rapid Storage Technology enterprise). The motherboard and chipset are supported, even the SATA ports, but not this "raid controller". Basically, it is a fake (aka software, bios, pseudo) raid, and these are generally not supported by ESXi. Even if you create an array, ESXi will see the individual disks. Forget this crap and get a good true hardware raid controller for ESXi, with on-board cache and power-loss protection.
What do you mean by "SMIS would not update"? I updated the smis-provider with:

# esxcli software vib install -d "/path/VMW-ESX-5.x.0-lsiprovider-500.04.V0.54-0004-offline_bundle-2395881.zip"

...and everything went well. After a reboot I can see the health status in the hardware tab. BTW, you have ESXi 5.5, so you are probably using VMW-ESX-5.5.0-lsiprovider-500.04.V0.54-0004-2371726.zip, listed as the smis-provider for 5.5. But did you try the other one, VMW-ESX-5.x.0-lsiprovider-500.04.V0.54-0004-2395881.zip, for 5.x? And one more thing: does your controller report its health status at least via a command-line tool (storcli, megacli)?
You do not have to install either of them to use the vmxnet3 adapter. Its driver has been part of the kernel tree since 2.6.32 and is actively maintained. "Tools" (either open-vm-tools or VMware Tools) give you the possibility to cleanly shut down a VM from the ESXi host. But if all you are looking for is a way to use vmxnet3, you do not need them...
OMG, not again! It has been posted here a zillion times: ESXi does NOT support software-/bios-/fake-raid...
Not sure about that "r1.1b" revision, but the original X7DAL-E does not have an on-board hardware raid controller. What it has is the i5000 chipset, which offers software (aka bios, fake, etc.) raid. This is not supported by ESXi (for very good reasons). And even if it were, without cache you'd get terrible performance...
You have to understand that a "hyperthreaded core" is not a true core. It can only do certain things. It is difficult to assess this precisely, but I'd say in common tasks you cannot get more than ~20% extra performance from an HT core (compared to a real core). Sometimes more, other times none at all. So if you are running CPU-intensive tasks (like video encoding), it is better to let your VM (with 8 vCPUs) use both pCPUs (4 true cores of each one), even if you lose a few CPU cycles due to NUMA.
You can find those multipliers in the CPU datasheet. For a 2-core load you calculate the frequency as:

base_frequency_[GHz] + (multiplier * bus_clock_[GHz]) = 2.3 + (13 * 0.1) = 3.6 GHz

For a 1- or 2-core load the multiplier is 13 (the number sequence starts with the load for all 18 cores, then 17, 16, 15, etc., down to a single core). Now in your case the VM has 4 vCPUs. You'd expect multiplier 10 to be used, giving 3.3 GHz, but nope! Multiplier 5 is used. Why? Because of a feature called "thermal load balancing": ESXi "rotates" that 4-vCPU load over all pCPUs/pCores, so that all pCPUs/pCores are moderately loaded (therefore in your case multiplier 5 is used). Every modern OS does this (even Windows). Otherwise some portion of the CPU chip would be extremely hot while other parts stayed cold. If you had Windows installed directly on the physical machine, you could prevent it using "CPU affinity". You can probably do it in ESXi too, but I DO NOT recommend it, unless you are ready to burn/wreck your pCPU. If only 2 of the 18 pCores (on each pCPU) are at 100% and the remaining ones are close to 0%, the thermal stress on the boundary between loaded and non-loaded cores is extremely high. This could really over-stress the pCPU, up to the point of physical damage (even with very good cooling), if you run it like that for a long time...
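The three cases above, computed with the same formula (numbers taken from the answer: 2.3 GHz base clock, 100 MHz bus clock; MHz used to keep the math integer):

```shell
# frequency = base + multiplier * bus clock
BASE_MHZ=2300; BUS_MHZ=100
echo $(( BASE_MHZ + 13 * BUS_MHZ ))   # 2-core load          -> 3600 MHz
echo $(( BASE_MHZ + 10 * BUS_MHZ ))   # 4-core load expected -> 3300 MHz
echo $(( BASE_MHZ +  5 * BUS_MHZ ))   # multiplier ESXi uses -> 2800 MHz
```

So the thermal-load-balancing behaviour costs you roughly 500 MHz here compared to the multiplier you expected.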
Oh, now I understand! Separate logical drives look like separate drives to ESXi (and any other OS), so if you pick the right one for ESXi, the two remaining ones should stay untouched. I did such an installation once (two logical drives on a single raid controller) and was not sure what would happen, so I backed up all VMs off-site. But only the logical drive which I picked for the ESXi installation was re-partitioned.
"...is there a way to override the storage/network IO activity check and thus reboot the vm's as soon as the VMware tools stops running on them?..."

You can always use some process-supervision tool inside the VM for this (maybe in addition to VM monitoring). It acts independently and allows much finer control, i.e. it can first try (re)starting vmtools before rebooting. Even a BSOD/kernel panic can be handled by the VM itself. It can actually be even more robust, as it does not depend on the VMware infrastructure...
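A minimal sketch of what such an in-VM supervision check could look like (the service name `vmtoolsd` and the use of `service`/`pgrep` are assumptions about a typical Linux guest; a real setup would rather use the init system's own supervision, e.g. a systemd `Restart=` policy):

```shell
# one supervision "tick": return 0 if the named process is running,
# otherwise attempt a restart and return 1 so the caller can escalate
check_and_restart() {
  pgrep -x "$1" >/dev/null && return 0
  if command -v service >/dev/null 2>&1; then
    service "$1" restart || true    # best-effort restart before anything drastic
  fi
  return 1
}
# a supervisor would call e.g.: check_and_restart vmtoolsd
# in a loop (cron, systemd timer, ...) and reboot only after repeated failures
```

The point is exactly the one from the answer: the restart-then-escalate decision lives inside the VM, independent of the VMware tooling.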
You did not write which edition you have, but for example the "Standard" and "Web" editions have a maximum of 32GB RAM. In order to use more, you have to have the HPC/Datacenter/Enterprise edition...