OK. So I have this small shop. We have run vSphere on some desktop machines with Openfiler for years without issue. However, it is time to upgrade, and while we would happily continue with Openfiler, we can't, as you can no longer find motherboards it will run on.
Looking for replacements.
First thing we wanted to know was how much faster our disks could run if we go to 10 GbE. So we built a 2019 server, enabled iSCSI on it, and put in an SSD. Connected it to one of our 6.7 hosts and tested. I would say it runs about the same as our Openfiler, actually a little worse.
Some more details....
We have a 10 GbE switch. We put a 10 GbE NIC in one of our hosts, and there is a 10 GbE NIC in the physical machine running 2019. We have not enabled jumbo frames. We did validate the link speed via the command line. We aren't looking for perfection; we just want to know there is an improvement worth having if we go to 10 GbE.
I realize that 2019 isn't a supported platform for iSCSI and VMware. Then again, neither is Openfiler.
So everything works, it just isn't very fast. I did some reading online and saw posts saying to go into the advanced settings of the NIC and disable all the offloading. Did that. I disabled flow control. Nothing. I increased the transmit and receive buffers. That did make a change: we see really fast speeds, a few hundred MB/s, for a few seconds, but it quickly dies back down to around 30-40 MB/s. Openfiler does a rock-solid 100-110 MB/s on a GbE network. I would expect at least twice that on 10 GbE with little or no other change to the system.
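As a rough sanity check on that expectation, here's some back-of-envelope arithmetic (my own numbers; the ~5% protocol overhead figure is an assumption, as real TCP/iSCSI overhead varies):

```python
def usable_mb_per_s(link_gbps, overhead=0.05):
    """Rough usable payload rate for a link, assuming ~5% protocol overhead."""
    bytes_per_s = link_gbps * 1e9 / 8           # bits -> bytes
    return bytes_per_s * (1 - overhead) / 1e6   # -> MB/s

print(round(usable_mb_per_s(1)))    # ~119 MB/s, matching the 100-110 MB/s we see on GbE
print(round(usable_mb_per_s(10)))   # ~1188 MB/s ceiling; 30-40 MB/s is about 3% of that
```

So the wire itself is nowhere near saturated; whatever is throttling us is somewhere in the host, target, or disk path.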
That increasing the transmit and receive buffers gives us those fast speeds for a few seconds is, I think, a good indication of the issue. I am just not sure what it is. Is there something about the host that is limiting us, so that as soon as the buffers fill it slows down? Doesn't seem very likely. What setting on the host could limit the disk speed?
The SSD in the 2019 server is fast. I can copy a file over 5 GB in size to the same location on the drive in less than 5 seconds.
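To put a number on that (using the rough figures from the test above, not exact measurements): a same-drive copy both reads and writes the data, so the drive is doing double duty.

```python
file_gb = 5        # file was a bit over 5 GB
seconds = 5        # copy finished in under 5 s
copy_rate = file_gb * 1000 / seconds   # MB/s, a conservative lower bound
total_io = copy_rate * 2               # same-drive copy = read + write
print(copy_rate, total_io)             # at least 1000 MB/s copied, ~2000 MB/s total I/O
```

So the local SSD sustains well over an order of magnitude more than the 30-40 MB/s we see over iSCSI; the disk itself is not the bottleneck.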
Trying to think of any other details.....
Nothing else comes to mind.
So our next thought was, well let's compare to a SSD direct connected to the host.
We did this and ran a Storage vMotion of an existing machine over to it. It sees about the same speed. It starts off a little faster, just over 100 MB/s, but within 4 or 5 seconds it is down to 40 or even 30 MB/s.
That seems like it should be MUCH faster. It isn't a mirror or anything like that, and it was formatted VMFS 6 at full size; the VM is thin provisioned.
You may want to look at FreeNAS. While FreeNAS isn't supported, TrueNAS is, and they share the same code base. You get access to some of the VAAI capabilities you miss with unsupported things like Windows Server as the storage target. I've been running FreeNAS in my home lab for a while now with no real issues. I would strongly look at getting something supported if this is production and you rely on those VMs, but you're not in a supported area now anyway, and I think FreeNAS would be a step up.
Yeah, could be worth a look. Thanks.
I should have added that the VMkernel port for iSCSI is on its own network; we aren't sharing networks or anything like that. The 2019 server is seeing no load at all. CPU and memory usage are at rock bottom, and the disk shows barely any usage as well. So the system isn't being taxed in any way.
I just wanted to come back and update this.
We switched to FreeNAS and it made a world of difference. We are now doing over 300 MB/s on the simple test of copying a large file on a VM's desktop. Remember, we were doing around 40 MB/s before.
To be clear: we take a large file, copy it to the desktop, and then copy/paste it right back to the same desktop. We see it as a decent read/write speed test.
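For what it's worth, that same test puts the improvement at (these are the round numbers from this thread, not precise benchmarks):

```python
before_mb_s = 40    # Windows 2019 iSCSI target
after_mb_s = 300    # FreeNAS target, same copy/paste test
print(after_mb_s / before_mb_s)   # 7.5x faster
```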
This all took maybe an hour to set up, including the install. The install wouldn't work on the machine we wanted to use, by the way, so we installed to a USB stick on another machine and just moved it to the first machine, and it has worked fine so far.
So we will end up with about 14 TB of usable SSD space for about $10K. Is it reliable? I don't know, ask me in a year. We are going to build two for redundancy and will still be at a lower price point than the cheapest branded arrays.
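In round numbers (the thread's figures; the second box for redundancy doubles the cost):

```python
usable_tb = 14
cost_usd = 10_000
print(round(cost_usd / usable_tb))        # ~714 USD per usable TB for one box
print(round(2 * cost_usd / usable_tb))    # ~1429 USD per usable TB with the redundant pair
```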
WIN! (maybe, again, ask me in a year!) 🙂