I have installed ESXi 6.5 on an IBM x3650 M4 (RAID 5, 8 × 300 GB on a ServeRAID M5110e controller). RAID performance is very low (it drops to 6-7 MB/s). Copying files locally between virtual machine partitions takes a very long time.
On this physical server I ran a test: I removed ESXi and installed Windows 2008 R2. The RAID performs very well on that system (250-300 MB/s).
As part of further testing on this IBM x3650 M4, I installed ESXi 6.7.0 Update 2 (Build 13981272), created a new virtual machine (Windows 2008 R2), and installed VMware Tools. The large-file copy test between partitions was again very poor; the transfer rate quickly drops to 5-10 MB/s.
All operations involving this server's storage under ESXi 6.5 are very slow (restoring virtual machines with Veeam, copying/moving virtual machines to this server, copying files between virtual machines on this host, etc.).
The controller is running the latest available firmware.
What could cause the server's disk performance under ESXi to be so low?
I'd suggest first making sure that your hardware is fully compatible with VMware ESXi 6.5/6.7 (VMware Compatibility Guide - System Search).
Upgrading the server's BIOS and all other firmware would also be a good idea.
Can you provide more information about your configuration and environment so we can better understand what you're actually trying to achieve?
- Is this for a production environment, or just for testing?
- Do you have any other IBM x3650 M4 servers, and do they show the same issue?
- Are you planning to implement vCenter and then add multiple IBM x3650 M4 servers to a vSAN cluster?
- What type of disks do you have, HDD or SSD?
You said that ESXi is currently installed on your RAID 5 volume (8 × 300 GB). If so, I'd recommend creating a small volume, say 5 GB, just for ESXi, and dedicating the rest of the RAID 5 volume to your VMs.
FYI, RAID 5 doesn't offer the best write performance, although read performance is quite good. For a production environment I wouldn't have used RAID 5 (with regular mechanical disks).
In any case, the performance you're seeing with ESXi (6-7 MB/s) is terrible compared to what you get with a Windows 2008 R2 installation. Even with RAID 5 you should see much better numbers.
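To put some rough numbers on the RAID 5 write penalty, here is a back-of-envelope sketch. The per-disk IOPS figure (~150 for a 10k RPM SAS drive) and the 64 KiB I/O size are illustrative assumptions, not measurements from this server:

```python
# Rough RAID 5 write-throughput estimate without controller write caching.
# Assumed: ~150 IOPS per mechanical SAS disk, 64 KiB writes (illustrative).

def raid5_write_iops(disks: int, iops_per_disk: float) -> float:
    """RAID 5 costs 4 back-end I/Os per front-end write:
    read data, read parity, write new data, write new parity."""
    return disks * iops_per_disk / 4

def throughput_mb_s(iops: float, io_size_kib: float) -> float:
    return iops * io_size_kib / 1024

iops = raid5_write_iops(disks=8, iops_per_disk=150)
print(f"~{iops:.0f} front-end write IOPS")                       # ~300
print(f"~{throughput_mb_s(iops, 64):.1f} MB/s at 64 KiB writes")  # ~18.8 MB/s
```

Even under these pessimistic assumptions the array should manage well above 6-7 MB/s, which is why a disabled write cache (rather than RAID 5 itself) is the more likely culprit.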
Does the ServeRAID M5110e controller have a flash-backed cache module installed? If so, is write caching enabled on the controller/array? Your current results are in line with the typical performance characteristics of a RAID 5 array with write caching disabled.
Today I ran a test by installing Windows 2012 R2 64-bit on the physical machine. Partition-to-partition transfers ran at around 500 MB/s. The RAID controller has a BBU, but it looks like the battery no longer works. Could that have such an effect on ESXi's performance?
Advanced Battery Management:
Status: Failed <Battery has failed. Please replace the battery pack>
Voltage: Normal [9413 mV], Current: 0 mA, Design Capacity: 306 Joules, Remaining Capacity: Not Applicable
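If you ever want to script a health check around status output like the above, a minimal sketch might look like this. The field names are taken from the output quoted above; other controllers or firmware versions may format them differently:

```python
import re

# Parse controller battery-status text (format as quoted in this thread)
# and decide whether the BBU can back write-back caching.

def bbu_healthy(status_text: str) -> bool:
    m = re.search(r"Status:\s*(\w+)", status_text)
    return bool(m) and m.group(1).lower() != "failed"

status = """Advanced Battery Management:
Status: Failed <Battery has failed. Please replace the battery pack>
Voltage: Normal [9413 mV], Current: 0 mA"""

# A failed BBU typically makes the controller fall back to Write Through.
print(bbu_healthy(status))  # False
```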
I found an article that looks very similar.
Yes, indeed, both operating systems behave differently when installed on a physical machine.
Still, this shouldn't affect ESXi's own performance, because the OS loads into memory; but there is definitely an issue with writes.
Again, for a production environment I recommend installing ESXi on a small RAID 1 volume, but for testing you could just use an SD card.
Then reserve the RAID 5 volume for VMs.
In any case, you should really check the compatibility of all your hardware, as I suggested before, and also replace the battery of your RAID controller.
And as gregsn mentioned, check that write caching is enabled once the battery has been replaced.
I installed ESXi 6.5 on a different medium (a 16 GB USB stick), but the local RAID 5 volume under ESXi still performs very slowly; after a few seconds the transfer rate drops and stays at that level.
The M5110e controller is compatible with VMware: VMware Compatibility Guide - I/O Device Search
Now that you've installed ESXi on a different volume, have you enabled the write cache on your RAID controller yet?
This is really important for performance.
I can see that your current write cache policy is set to "Write Through".
Could you set it to "Write Back" and see if there is any improvement?
It looks like the problem has been solved. The new battery (year of production 2012? :smileysilly:) no longer shows an error.
When I set "Default Write Cache Policy: Write Back", the "Current Write Cache Policy" status was still "Write Through".
With the "Write Through" setting, the volume performed very slowly under ESXi.
I had to set "Default Write Cache Policy: Always Write Back" in the BIOS; only then did the "Current Write Cache Policy" status change to "Write Back" on its own.
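The behavior you describe can be modeled roughly like this. Note this is my own simplified inference from what you observed, not vendor documentation: with plain "Write Back" the controller falls back to "Write Through" whenever it doesn't fully trust the battery (e.g. while a new battery is still charging or in a learn cycle), while "Always Write Back" forces write-back regardless:

```python
# Simplified model (inferred from the observed behavior, not from LSI/IBM
# documentation) of how the controller resolves the effective "Current"
# write-cache policy from the configured "Default" policy and BBU state.

def current_policy(default_policy: str, bbu_ready: bool) -> str:
    if default_policy == "Always Write Back":
        return "Write Back"  # forced, regardless of battery state
    if default_policy == "Write Back":
        # Falls back to Write Through to protect cached data if the
        # battery is failed, missing, or not yet fully ready.
        return "Write Back" if bbu_ready else "Write Through"
    return "Write Through"

print(current_policy("Write Back", bbu_ready=False))         # Write Through
print(current_policy("Always Write Back", bbu_ready=False))  # Write Back
```

Keep in mind that "Always Write Back" keeps the write cache enabled even if the battery fails later, so a power loss at that point could lose in-flight writes.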
Performance under ESXi is now very good; the test on the Windows 2008 virtual machine showed a constant transfer rate of around 350-400 MB/s.
The only remaining concern is the battery status, which shows "Current: 0 mA"; I'm not sure whether that's expected. Right after installing the battery it showed 24 mA, then 16 mA, until it dropped to 0 and has stayed there.
For now I'm still testing, but it looks like everything works as it should.
Good to hear that the new battery is working well and that performance is finally good on your ESXi server.
If you feel that my posts were helpful, I'd appreciate it if you clicked the "Helpful" button.