JarryG's Posts

NVidia forces users to stick with pro-series GPUs (afaik NVidia made some tweaks in their drivers so that consumer-GPUs do not work for pass-through). There are some Quadro cards close to 100€ (e.g. the K420), but do not expect too much performance (~300/12 GFlops single/double)...
Clear, but I was wondering about that "eager zeroed" part...
I have never seen any definitive answer to this question. Probably because there is no definite answer to this question. With my raid-controllers [LSI-8704EM2 / LSI-8708EM2] you can win more performance when you switch your old SATA/3Gbit controller for a SATA/6Gbit one.

"VMFS5 is using a 1MB block size"
True, but VMFS5 also uses 8kB sub-blocks. Moreover, very small files (<1kB) can be stored in metadata.

"I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore"
Where did you get the idea *this* helps to avoid fragmentation???

"I believe this would be the optimal situation in regards the datastore?"
I do not believe this would be optimal, and I'm not sure there is any generally optimal value. Setting the strip-size equal to the VMFS5 block-size might not be the best option, because it is quite small. Moreover, there are other things you should consider, e.g.:
- hard-drive sector size (mostly 512B or 4kB)
- SSD read/erase block size (highly vendor-specific)
- VM filesystem sector/block size (depends on OS)
- VM filesystem load (depends on what your VM is doing)
- size of your raid-controller's on-board cache
- type of raid-array (0, 1, 10, 5, 6, ...)
- etc, etc

IMHO, if you do not have time for testing, just stick with the default value. With a blind shot you can just make things worse...
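As a toy illustration of why "strip-size = VMFS block-size" is not automatically a win: what the controller writes in one pass is the *full stripe*, which depends on the number of data disks, not just the per-disk strip. The numbers below are made-up example values, not a recommendation:

```shell
# Hypothetical RAID-5 array: 4 drives = 3 data + 1 parity (example values only)
disks=4
strip_kb=64                               # per-disk strip size
data_disks=$((disks - 1))                 # RAID-5 loses one disk to parity
full_stripe_kb=$((strip_kb * data_disks)) # what one full-stripe write covers
echo "full stripe: ${full_stripe_kb}kB"   # a 1MB VMFS block spans several full stripes
```

So even with a 64kB strip the effective write unit here is 192kB, and the "right" relation to the 1MB VMFS block depends on the array geometry, which is one more reason to benchmark instead of guessing.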
First of all, you have to understand that SMART attributes are highly vendor-specific. Each drive manufacturer defines its own set of attributes and threshold values. Moreover, those attributes are seldom pure physical values; mostly they are calculated using some formula.

"Drive Temperature             176    0          176"
I'm pretty sure this is not directly the drive temperature, because the threshold value (the worst one, which you should never reach) is zero. In other words, in this case the higher the value, the better. If this value started dropping, approaching "0", then you'd have reason to worry. Not now.

The same goes for all the other values: e.g. "reallocated sector count 100" does not mean your drive has already reallocated 100 sectors; it just tells you that you still have plenty of spare sectors (in this case 100%) that can be used for reallocation. Or "power cycle count 100" does not mean you powered your drive on/off 100 times; it means that of the expected number of power-cycles your drive has not used up many, and still has about 100% left. Etc, etc.

You can get a more accurate description of the SMART values from your drive vendor (some will send you a detailed specification if you ask for it). But there is definitely nothing wrong in your SMART output. The other point is that a hard-drive might fail even if its SMART values were all perfect just a minute ago...
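The point about normalized values counting *down* toward the threshold can be checked mechanically. A minimal sketch, parsing one smartctl-style attribute line (the sample line and column layout are assumptions for illustration; real output is vendor-specific):

```shell
# Hypothetical smartctl-style attribute line (illustration only):
# ID  ATTRIBUTE_NAME       VALUE WORST THRESH RAW
line="194 Drive_Temperature   176   170     0    38"

value=$(echo "$line" | awk '{print $3}')   # normalized value (counts down)
thresh=$(echo "$line" | awk '{print $5}')  # failure threshold

# The attribute is only "failing" once VALUE has dropped to THRESH or below;
# a big VALUE with THRESH=0 means plenty of margin, not "176 degrees".
if [ "$value" -le "$thresh" ]; then
  echo "FAILING"
else
  echo "OK (margin: $((value - thresh)))"
fi
```

With the sample above this prints a healthy margin of 176, which is exactly why the "Drive Temperature 176" in your output is no reason to worry.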
NVidia restricts using consumer-GPUs for passthrough. There are some tweaks and mods that can fool NVidia's detection by pretending your GTX/GT consumer-GPU is a much more expensive Quadro pro-GPU, but it is not that easy (it involves bios tweaking and/or a hw-mod) and does not work for all GPUs. Beyond that, there is not much you can do. On the other side, AMD/ATI does not put up these obstructions, and both pro- as well as consumer-GPUs should work. That's why ATI is probably better suited for a low-cost GPU-passthrough solution. The lowest-priced NVidia Quadro I was able to find is the K420, for ~120€, with 300/12 GFlops (single/double precision). For comparison, the ATI R7-240 offers better raw performance (384/24 GFlops) for less than half the price (~50€). Both should work for passthrough, and the numbers speak for themselves...
You need 2 software packages:
1. driver: this should be included in ESXi (at least in some older version), if the controller is listed as supported
2. smis-provider: so that you can see the status of your arrays/disks on the health-page (can be installed additionally)
(there might also be some management software for ESXi)
I recommend you check in advance whether Adaptec provides both the driver and the smis-provider! Some time ago I found an Adaptec controller on the HCL (iirc, it was of the 5xxx series), so I ordered it. I installed ESXi, VMs, etc., and at the very end found there was no smis-provider. I contacted Adaptec and they confirmed the situation: there is a driver for ESXi, but no smis-provider, and thus no way of monitoring the controller's health. What is such a raid-controller good for? And yet it was listed on the VMware HCL...
If some HW is on the HCL, it does not mean all its features are supported (sad but true; I learned it the hard way too). Very probably the raid-controller on board of your Supermicro mobo is of the software/fake/bios type (which is generally not supported by ESXi). That's why you see disks, not an array... There is not much you can do about it. Either buy an add-on true hardware raid-controller, or use individual disks attached to the on-board sata/sas-ports (you can pass them through to a VM and create a sw-raid there)...
You can change the speed of a vNIC inside the VM (e.g. with "ethtool" in the case of Linux), but generally traffic-shaping is not a task for ESXi. It should be done by a dedicated router...
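For the record, the ethtool invocation would look something like this (the interface name "eth0" and the 100 Mbit target are assumptions; adjust to your guest, and note vmxnet3 vNICs ignore speed settings, so this applies to e1000-type vNICs):

```shell
# Build the command to force the vNIC to 100 Mbit full-duplex
# (run the printed command as root *inside* the Linux guest, not on the host)
cmd="ethtool -s eth0 speed 100 duplex full autoneg off"
echo "$cmd"
```

But again: limiting link speed this way is a crude tool, not real traffic-shaping.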
Depends on which raid-controller you have. For all LSI/Avago models (incl. the re-branded ones) you can use the command-line tool StorCLI, or the older MegaCLI (you can find the vib/offline-bundle on the LSI/Avago web). All you need is cron to periodically check the raid-arrays, and a little shell-scripting...
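A minimal sketch of such a cron check. The storcli install path and the exact output format are assumptions (verify against your controller's actual `storcli /c0 /vall show` output); here a captured sample string stands in for the real call:

```shell
# Sample of storcli-style volume lines: "Optl" = optimal, "Dgrd" = degraded
# (in the real cron job, replace this with something like:
#   sample=$(/opt/lsi/storcli/storcli /c0 /vall show | awk '/RAID/ {print $1, $2, $3}') )
sample="/c0/v0 RAID1 Optl
/c0/v1 RAID5 Dgrd"

# Count volumes whose state is anything other than optimal
bad=$(echo "$sample" | grep -cv 'Optl')

if [ "$bad" -gt 0 ]; then
  echo "WARNING: $bad array(s) not optimal"   # here you would send mail / log to syslog
else
  echo "all arrays optimal"
fi
```

Drop a script like this into cron on the ESXi host and you have basic raid monitoring even without a smis-provider.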
This topic has been discussed here many times (it might easily be the most frequently asked question); I do not understand what documentation you want to see. Take it simply as fact. We have the HCL, with a list of hardware that does work. You will not find a single fake-/bios-/software-raid controller there. And it is for very good reasons. Fake-raid controllers need OS support, some kind of "driver", and there is none for ESXi. Moreover, fake-raid controllers are usually very simple, lacking any on-board cache and cpu. Even if you made them work with ESXi, you would get truly terrible performance...
I'm afraid that even in the case of SSDs the missing controller-cache will have a negative impact on performance, with the solo/raid ratio being similar as for HDDs (so instead of ~500MB/s you will get maybe ~50MB/s from the SM863). If you buy such expensive SSDs, it is a pity not to use them properly. Two options come to my mind:
A: get some 2nd-hand/used controller. I got my M5016 on eBay for ~200$, including cache and super-cap. One year later I found one more for ~60$ (some ISP was selling spare parts).
B: pass your disks/SSDs through to one VM, and create a NAS with software-raid for hosting the other VMs. The NAS OS will do disk-caching, and with vmxnet3 NICs on a local vswitch you'd get quite good network performance...
If ESXi can not see the raid-volumes (and can see the individual disks instead), it is software-/bios-/fake-raid you have. This is not supported by ESXi, and you can not use such a "raid-array" for a datastore. At least not directly...
"There is no on-board cache on this controller."
^ This is your problem: you are trying to use a controller which is unsuitable for ESXi. Basically, you are running your disks un-cached, because ESXi (unlike any other modern OS) does *NOT* do disk-caching.

"What may be causing this constant HDD seek/thrashing while writing to the RAID 1 array"
When you write to a raid-1 array, the controller must write each data-chunk to both drives, then read it back (from both disks), calculate the crc and compare it (before going on to the next data-chunk). This is truly terrible without cache.

"...and is there a way to mitigate it in ESXi 6.0?"
Yes, there is one: buy some serious raid-controller, with on-board cache (the bigger & faster, the better).

"Should I try downgrading the firmware to P19 and also reverting the VMware drivers to P19? Can I activate or deactivate some VMFS related features?"
No, because it does not make any sense.

"Can I from ESXi modify the drive cache behavior?"
No, and it does not make any sense either. The controller can not use the disk-cache for its operations. In fact, some raid-controllers even turn the disk-cache (not the controller-cache) off, to prevent data-loss in case of a power-outage and to avoid double-caching (keeping the same data in both the controller-cache and the disk-cache)...
Maybe you could check what exactly is taking that long. You can see messages rotating on the "ESXi welcome screen" during boot-up (various services/daemons being started, etc.) before the login-prompt is offered. Just take note of which message sits there for a long time...
I'd recommend test-installing another OS without changing anything in the bios, to check whether it can see the whole memory. Or maybe you can just boot up some live-cd/dvd/usb. But incorrect memory size is nothing new, and I have seen a few posts about it here on this forum. Some of us are "lucky" to lose just a little; others lose as much as half of their RAM. For example, my server has 32GB (mobo, cpu, ram, everything on the HCL), yet ESXi reports it as 31.4GB. A live-cd with Linux running on the same server reports the full 32GB, and memtest86 did not find any problem. Some time ago I had another server with 8GB RAM, and ESXi reported 6GB. I suppose something must be wrong on the ESXi side. Maybe it requires some special bios settings not needed by other OSes...
I'm not sure you can pass through the audio-ports of your on-board/southbridge sound chip. I think you need an add-on pcie sound-card...
I have bought a few Intel dual-port "ET" NICs on eBay for a few bucks, and they all work without any problem. If you can, stay away from desktop NICs (low performance, high cpu-load). Check the HCL to be sure, but if you pick server models, you should be safe...
Can you try installing ESXi 5.0 (or 5.5)? I had this controller and I remember it was detected with all arrays. I did not have to do anything special during installation. I do not have it anymore, so I can not test whether ESXi 6.0 can detect it (and the arrays)... Btw, when you jump into the controller bios (WebBIOS, or however it is called) during boot-up, do you see the arrays? Are they initialized?
MegaCLI is the previous-gen command-line management tool. It is outdated and no longer maintained. You can still find it on the Avago/LSI web (iirc, version 8.07), but I do not recommend it. Use the newer StorCLI tool instead...
You may be right. The last time I played with this, it was ESXi 5.0. Anyway, there are hundreds of hw-monitoring chips (and the specification is open only for some of them). I can not imagine how one single universal script could work for all of them (unless it reads the bios values directly)...