Hello,
Currently, I run an ESXi 6 physical host. It has the following specs:
2 x Intel Xeon E5-2630 v3 CPUs - 8 physical cores each
128 GB RAM
2 Quad-Port NICs
4 onboard Intel NICs
9 SAS Drives
3 SSDs
1 Adaptec 72405 RAID Controller
I've noticed that when I transfer files between disks on guest VMs running on top of the ESXi hypervisor, storage performance is very, very slow.
When I remove ESXi and run Windows Server 2012 R2 Datacenter as the hypervisor instead, speeds return to normal.
To be more specific: when transferring a large 100 GB file from disk 1 to disk 2 of a guest VM (or VM-to-VM, or within a VM from a virtual disk on a VMFS datastore on one physical disk to a virtual disk on a VMFS datastore on another physical disk), the transfer starts at around 60 Mb/s read/write for the first second, then drops to a sustained 10 Mb/s for the next hour.
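To put a repeatable number on this instead of watching the file-copy dialog, a simple in-guest sequential-copy test can be run with dd. This is only a sketch: the /tmp paths are placeholders, and they should point at directories backed by the two different physical disks being compared.

```shell
# Sketch of an in-guest disk-to-disk copy test. SRC and DST are assumed
# placeholder paths; set them to mount points on two different physical disks.
SRC=/tmp/ddtest-src
DST=/tmp/ddtest-dst
# Write a 64 MiB file, syncing to disk so the page cache doesn't inflate the rate.
dd if=/dev/zero of="$SRC" bs=1M count=64 conv=fdatasync
# Copy it disk-to-disk; dd prints the sustained throughput when it finishes.
dd if="$SRC" of="$DST" bs=1M conv=fdatasync
rm -f "$SRC" "$DST"
```

Running this on ESXi and again on the bare-metal Windows install gives directly comparable numbers for the same hardware.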
The physical disks are local storage. Each of my hard drives (Seagate Constellation 4 TB, 7200 RPM) sits in a Norco RPC-4224 chassis, which has 24 bays, and connects to my Adaptec 7 Series RAID controller in a PCIe slot.
I have already tried to troubleshoot this issue. I've already done these things:
Any ideas how I can resolve this?
After three weeks, I found the solution to my problem.
On Adaptec RAID cards, make sure you have the following enabled:
1) Log in to MaxView Storage Manager in your browser with administrator credentials.
2) In the navigation panel on the left-hand side, select the RAID controller.
3) On the ribbon menu, under the Controller tab, select Properties.
4) Change your settings as below.
The important part is changing the Performance Mode to either Big Block Bypass or OLTP / Database AND setting the Global Physical Devices Write Cache Policy to Enable All.
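For reference, the same settings can also be inspected and changed from the command line with Adaptec's arcconf utility. This is only a sketch: the controller number and the numeric performance mode are assumptions, so verify them against your arcconf version's help output before running anything.

```shell
# Show current settings for controller 1 (AD = adapter/controller information).
arcconf getconfig 1 AD
# Switch the performance mode. On 7-series firmware the numeric modes include
# OLTP/Database and Big Block Bypass -- confirm which number maps to which
# mode in your arcconf version's SETPERFORM help before running this.
arcconf setperform 1 3
```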
Restart your server.
Afterwards, open the VMware vSphere Client (the desktop GUI), log in to vCenter, go to the host, select the Configuration tab on the top menu bar, select the Storage option on the left-hand side, and enable Storage I/O from the drop-down menu for all your local storage disks.
I suggest updating your storage controller firmware and driver.
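On the ESXi side, you can check which driver the controller is bound to, and which driver package version is installed, from the host shell before and after the update. The aacraid package name here is an assumption (a common name for Adaptec RAID drivers); match it against whatever the adapter list actually reports.

```shell
# List storage adapters and the driver module each one is bound to.
esxcli storage core adapter list
# Show installed VIB versions; filter for the (assumed) Adaptec driver package.
esxcli software vib list | grep -i aacraid
```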
Are you using VMDKs larger than 6 TB?
If yes, I would like to see the output of:
vmkfstools -p 0 large-vmdk-flat.vmdk > /tmp/file
If you have the problem that I have in mind, it can be detected that way.