VMware Cloud Community
cmp828a
Contributor

600ms read latency with ESXi 6.0 on a Dell R510

Hello,

I just joined the VMware User Group and got my paid subscription going. I hope I am asking this in the right topic area.

I am self-taught with VMware. I have used ESXi 5.5, and I recently built a new server for my home lab running 6.0 with the free license from the regular VMware site.

I was able to get a PowerEdge R510, BIOS revision 1.12.0, with dual 6-core X5675 3GHz processors and a PERC 6/i integrated SAS/SATA RAID controller.

For now, I built a single RAID 5 volume from the eight 2TB hard drives, with one online spare. I then loaded ESXi 6.0 onto this volume, and at the moment my datastore lives on this same volume as well.

For my lab use, I figured that since I would only run maybe two VMs at most at any given time, having the ESXi OS and the VMs all on the same volume might not be a big issue. I need to see if I can use an SD card or flash drive for the OS and keep it off the datastore volume that holds the VMs. The VMs typically run fine, but when I perform, say, Microsoft software patch updates, the Windows Server 2016 VM at times becomes unresponsive and basically freezes; the entire VM console stops responding. But if I come back in an hour or two, it will be patched up and ready for a reboot.

I checked the logs and see read latency as high as 600 to 700ms. In your view, could this be due to running the ESXi OS and the VMs from the same drive volume? The system currently has 24GB of RAM and will have 64GB by Monday; the 2016 server is configured for 8GB, so RAM is not the issue. I was hitting this latency while running and patching just a single VM, with no other VMs running. Sadly, I have limited resources and have to run 5400RPM WD Blue SATA drives, so the spindle speed is a bottleneck compared to 10,000 or 15,000RPM SAS drives. But again, this is my home lab / test system.
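For anyone who wants to check the same counters, this is roughly how the latency can be read from PowerCLI (the host address and credentials below are placeholders; esxtop's disk views report the same thing as DAVG/cmd):

    # Hedged sketch: pull recent per-device read latency from the host (Value is in milliseconds)
    Connect-VIServer -Server 192.168.1.50 -User root     # placeholder host and user
    Get-VMHost | Get-Stat -Stat "disk.totalReadLatency.average" -Realtime -MaxSamples 12 |
        Sort-Object Timestamp |
        Select-Object Timestamp, Instance, Value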

I am thinking of building a new ESXi server on an HP DL360 G7 with dual 6-core CPUs. I want to run ESXi 6.5 on it, and I know the hardware will run it since I already tested it with 6.5. I see the normal free license for ESXi 6.5 is one socket and unlimited cores, where 6.0 is two sockets and unlimited cores; that is why I went with 6.0 on the Dell. The HP has an SD card slot, so I am going to get an SD card, load ESXi 6.5 on it, dedicate the drives to the datastore for the VMs, and see if that helps.

Does anyone have best-practice thoughts on my Dell R510 configuration, and on building out ESXi 6.5 on my HP DL360?

I have not yet received my confirmation information for downloading the various VMware products and their one-year license keys. Can anyone tell me whether the ESXi 6.5 hypervisor is available for download, and whether its license allows two sockets and unlimited cores?

I am an I.T. systems analyst at work, where we are 90% virtual on ESXi 5.5 and soon moving to 6.0. I am working to build up my experience with ESXi and VMware products.

Chad

DavoudTeimouri
Virtuoso

This is a server for a personal lab, but you still must give it enough resources.

Disk latency like this comes from generating more SCSI I/O than the disks or the RAID group can actually deliver.

You must identify your machines' workload and then decide on the disk type and RAID level.

If your tests are write-sensitive, RAID 10 is better than RAID 5; if they are read-sensitive, RAID 5 is fine. A rough sketch of the difference is below.
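As a back-of-envelope comparison (the ~75 IOPS per disk is an assumed figure for 7200rpm-class SATA; 5400rpm drives will deliver less), the classic write-penalty model in PowerShell:

    # RAID 5 costs ~4 disk I/Os per write (read old data + parity, write new data + parity);
    # RAID 10 costs ~2 (write both mirror copies). Reads carry no penalty at either level.
    $disks       = 7       # eight drives minus one hot spare
    $iopsPerDisk = 75      # assumption; measure your own drives
    $raw = $disks * $iopsPerDisk

    "Read IOPS (either level): ~$raw"
    "RAID 5 write IOPS      : ~$($raw / 4)"
    "RAID 10 write IOPS     : ~$($raw / 2)"    # note RAID 10 also needs an even disk count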

You should use 15K or 10K SAS disks; SATA disks at lower RPM are not good even for testing purposes.

-------------------------------------------------------------------------------------
Davoud Teimouri - https://www.teimouri.net - Twitter: @davoud_teimouri Facebook: https://www.facebook.com/teimouri.net/
cmp828a
Contributor

Hello,

Thank you. I did not seem to have any issues with writes: I was able to upload around 1TB of files over NFS to the Windows 2016 server (used as a file server) from my Linux box, which is my main desktop computer. And when I am not doing Microsoft updates, just moving around the server desktop and doing basic tasks, the VM seems to respond fine overall. I believe I have one or two internal USB slots on the motherboard, so I might try running ESXi off one of those and keep the main drive volume just for the VMs. Maybe I should split the eight drives into two RAID 5 volumes, or look at a RAID 10 volume.

Your information is appreciated.

bluefirestorm
Champion

The hypervisor being on the same disk as the datastore is unlikely to be causing the slowdown. The ESXi hypervisor does not read/write its boot disk much once it is booted up; this is quite obvious when you use an external USB flash stick with an LED indicator, as it no longer lights up after ESXi has booted.

When you run the PowerShell cmdlet Get-SpeculationControlSettings inside the Windows 2016 VM, it will show "Windows OS support for PCID performance optimization is enabled" as FALSE, because the X5675 is a Westmere CPU. For Windows to report TRUE there, the INVPCID instruction has to be available, and INVPCID only arrived with Haswell and later generations. Westmere supports PCID in the TLB, but Windows 10/2016 will not make use of it unless INVPCID is also available. The Meltdown patch hits I/O-intensive tasks (network and disk I/O) severely, because the TLB gets flushed on every context switch. In the Windows 2016 VM, a software patch update is hit twice (network I/O to fetch the update and disk I/O to write the download) while its virtual TLB is being flushed constantly.

https://support.microsoft.com/en-us/help/4074629/understanding-the-output-of-get-speculationcontrols...
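If you want to see this for yourself from inside the guest, the check is roughly as follows (the SpeculationControl module comes from the PowerShell Gallery):

    # Run inside the Windows 2016 VM
    Install-Module SpeculationControl -Scope CurrentUser
    Get-SpeculationControlSettings
    # On a Westmere vCPU (no INVPCID) expect a line like:
    #   Windows OS support for PCID performance optimization is enabled: False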

So if the slowdown you are seeing is caused by the Meltdown patch, changing to faster disks is unlikely to make the problem go away. The DL360 G7 is unlikely to be much better, as it uses the same Westmere generation of CPU as the PowerEdge R510. It would be better to get a system with a Haswell CPU or later; even a Haswell desktop/laptop will likely cope better with the Meltdown patch than the Westmere Xeon.

cmp828a
Contributor

Hello,

Thank you very much for that information. So I should drop back to Server 2012 and limit the number of Windows 10 VMs until I can buy a used but newer server running the next generation of chips. We are still mainly on Server 2012 at work, so it is fine for me to run 2012 servers. I guess I can make an OVA or OVF of the two servers I have for future use and run Server 2012 for now. Since you mention Windows 10 and Server 2016, I am assuming 2012, Windows 8.1, and 7 are no issue to run and will spin up fine.

Before I built this new server, I was running mostly Linux, Windows 7, and Server 2012 on my ESXi 5.5 server, not Server 2016. I had not noticed any difference in the Windows 10 VM, but that server was an HP ML370 G5 running dual quad-cores. Maybe I should just go back to running that as my VM server and be happy with 8 cores; the main reason I switched was to gain a few cores. I believe the Adaptec RAID cards I had for it are still supported in 6.0. I wish the 6.5 free license were the same as 5.5 and 6.0, with two sockets and unlimited cores. Work is still running ESXi 5.5, but we are migrating to 6.0 and still running Server 2012, so as far as my lab and training go, my skills stay aligned with my job.

I really appreciate your information! Thank you.

bluefirestorm
Champion

I mentioned only Windows 10/2016 because I am pretty sure Windows 10 reports the PCID Meltdown mitigation as TRUE as long as the CPU/vCPU has the INVPCID instruction. If I mask out INVPCID in a Windows 10 VM on a Skylake CPU, the PCID mitigation comes back as FALSE. It is reasonable to expect Windows 2016 (the server equivalent of Windows 10) to behave the same way in this specific situation.

As for Windows 7 (the desktop counterpart of Windows Server 2008 R2), the Get-SpeculationControlSettings link already explicitly states that Windows 2008 does not support it. Windows 7 was released in late 2009, long before the Haswell chips were introduced (mid-2013), so it is unlikely it would be re-engineered to make use of it.

As for Windows 8/8.1 and Windows 2012/2012 R2, I don't know whether any of these support PCID/INVPCID. It seems unlikely for Windows 8/2012 given the timeline of their releases (before the Haswell CPUs), except maybe Windows 8.1 and 2012 R2 (both late 2013).

So dropping back to older operating systems is not going to avoid the Meltdown patch performance hit. The reverse also holds: if you have a Haswell or later CPU, older Windows operating systems such as Windows 7/2008 are not going to make use of the INVPCID instruction.

If I am not mistaken, it is similar on the Linux side: KPTI (the Meltdown patch for the Linux kernel) will not make use of PCID in the TLB if the INVPCID instruction is not available in the CPU/vCPU.

As for dual sockets, the free-licence ESXi 6.5 does not put a limit on the number of sockets, but the maximum number of vCPUs for any single VM is restricted to 8.

https://www.vmware.com/products/vsphere-hypervisor.html#getting-started

cmp828a
Contributor

Hello,

Thank you for the additional information. For now I can play around and try some things. It sounds like the HP ML350 Gen 8 I am looking to buy in late spring or summer will need to be allocated as the ESXi server. When I was talking about sockets and cores, I meant the physical server running ESXi, the R510: when I checked the free license, it noted 1 socket/CPU and unlimited cores, and that was for the physical server, not for any specific VM. So until I can find and buy a server with the newer CPUs, I will keep the performance limitations in mind. I do not have any Windows 10, 8.x, or 7 VMs running on it, so as you say, I may or may not have an issue due to the CPUs.

I am very happy to have joined VMUG; I am already learning things I never thought to think about. I know that between hardware and firmware, what was supported in an earlier version of ESXi is not always supported in a newer one, and I am always checking the compatibility list for things like RAID cards and NICs, but I never thought about specific CPU chips; since ESXi installed and saw all my hardware, I figured all was good. I have run into issues on the HP side with RAID controllers, and then with NICs whose firmware versions were not supported. I normally run HP servers, but HP's ESXi image tailored for their systems can be hit and miss. So I thought Dell seemed better supported by VMware, got an R510 for a good price, and then upgraded it from a single quad-core CPU to dual 6-cores.

In light of this CPU information, do you have a recommendation among the HP or Dell servers for which model/generation to look at buying? Again, I have to buy used, and from what I read, I assume I want servers from the first year the Haswell chips were used. I will look into this chip, see where Dell and HP began using it, and buy in that line.

Thank you. While I play with my current configuration and work with 2012 and the various Windows 7, 8.x, and 10 versions, I will try to post my results to this thread.

Chad

bluefirestorm
Champion

Revisiting the Windows Server 2016 VM: you don't say how many vCPUs are allocated to it. There can be a perverse situation where, if you allocate more vCPUs than needed, the idle vCPUs preempt the busy ones. For example, if you assign 8 vCPUs and only 1-2 are truly busy, the other 6-7 can preempt the 1-2 that are busy. Yes, they are supposed to be idle, but if I am not mistaken, the idle process executes the HLT instruction, which causes a VMEXIT. So you might want to try reducing the number of vCPUs to what is realistically utilised.
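If you want to script that change rather than clicking through the VM settings, a minimal PowerCLI sketch (the VM name is a placeholder, and the VM must be powered off first):

    Get-VM -Name "Win2016-FS" | Set-VM -NumCpu 2 -Confirm:$false   # right-size to what is actually busy
    Get-VM -Name "Win2016-FS" | Start-VM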

The ML350 G8 uses Sandy Bridge/Ivy Bridge Xeon 26xx CPUs. https://h20195.www2.hpe.com/v2/GetPDF.aspx/c04128239.pdf

Whatever advantage the AVX instruction set gives you over Westmere depends on whether the application(s) inside the VM(s) have been recompiled to take advantage of AVX, and on the application workloads. There could also be benefits from the reduced cycle cost of VMEXITs. A VMEXIT is when the VM hands control back to the hypervisor, which then has to save the VM state, and that costs host CPU cycles; Intel has reduced the number of cycles required for this transition over the CPU generations. When it comes to VM execution, the fewer VMEXITs the better. There is of course the inverse of a VMEXIT, the VMENTRY, and this transition also adds host CPU cycle overhead. Haswell also has a VMFUNC instruction, which is intended to reduce the need for and frequency of VMEXITs.

As for a used server with Haswell Xeon 26xx CPUs, I doubt you can get your hands on one at a cheap price. Note that I am not a chartered accountant/CPA, but here is why I think so.

Equipment purchased by a company is a capital expenditure and is depreciated over the estimated useful lifespan of the equipment. For computer equipment such as servers/desktops/laptops, depreciation is usually spread over 3-6 years (the actual number depends on the tax jurisdiction/laws and the equipment; a laptop might be 3 years, a desktop 4 years, etc.). Considering the first Haswell Xeon E5-26xx CPUs launched in Q3 2014 (https://ark.intel.com/products/codename/42174/Haswell), businesses that bought servers with these CPUs haven't yet seen the equipment fully depreciate, so it is unlikely they would upgrade and resell. Even once it is fully depreciated, they are not obliged to upgrade and replace it. If you do find one for resale now, it is perhaps from a liquidation sale, a refurbished return, or maybe even someone disposing of stolen property. Furthermore, some companies lease equipment instead of buying it, so they have no right to sell hardware they don't own even when they upgrade to newer leased servers; they just return it to the lessor, and the leasing company finds another customer for the old kit.

So you might be able to get a good deal on a used desktop/workstation with a Haswell i7/E3 Xeon now, because the likely previous owners bought them for personal use and are not bound by accounting depreciation rules. A good deal on used Haswell Xeon 26xx servers is at least 18-24 months away if we go by a five-year depreciation period.

cmp828a
Contributor

Hello,

On one of the 2016 VM servers I am running one CPU with 3 cores; the 2016 VM that is the domain controller has 1 CPU and 4 cores. Is there any relationship between CPUs and cores? I know a core is basically a CPU in itself, and VMware has you set how many CPUs and then how many cores per CPU.

What configuration would you suggest for CPUs and cores? Would I be better off with two CPUs of one core each?

Thank you for the server information. An HP ML350 G8 has been in my sights to buy as my main workstation PC, and I have been planning to buy one sometime this year. I see them on eBay for $500 to $1,000 depending on what is in them, and was figuring on spending $500 to $800 for a used one. So I might just buy one for the ESXi server and then use the R510 as my main work/desk computer.

I have thought about workstations, but I just like and prefer server units, since most of what I run on them are really server operating systems. I appreciate the information, and I have been doing more digging into which Dell and HP server classes/generations support which CPU families.

Chad

bluefirestorm
Champion

I don't have any proof that an odd vCPU count is problematic. But I look at computers as binary entities, and numbers that are powers of 2, or at least even, are preferred (the exception being 1, which is 2 to the power of 0). It is also better to mirror what is available in the physical CPU world (I don't think there are any 3-core CPUs on the market).

As for 2 virtual sockets with 1 core each versus 1 virtual socket with 2 cores, I don't think it matters much for performance, considering that, from what I understood of your original post, ESXi 6.0 on the R510 only recognised one physical CPU; so there is no vNUMA to be concerned with. But the rule of thumb is "wide and flat".

You can look at this:

https://blogs.vmware.com/vsphere/2014/05/checking-vnuma-topology.html
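If you ever want to set the topology from PowerCLI instead of the UI, something like this should do it (the VM name is a placeholder; cpuid.coresPerSocket is the VMX option behind the "cores per socket" setting, and the VM should be powered off first):

    $vm = Get-VM -Name "Win2016-DC"
    Set-VM -VM $vm -NumCpu 2 -Confirm:$false      # total vCPU count
    New-AdvancedSetting -Entity $vm -Name "cpuid.coresPerSocket" -Value 2 -Force -Confirm:$false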

As for the ML350 G8, here are my two cents. You have to ask yourself whether you really need a dual-CPU ML350 G8 system. Even if the VMs run server OSes, what they see is the same virtual machine (whether the VM runs under ESXi, Workstation Pro on Windows, Fusion on macOS, or nested inside another ESXi VM, the guest wouldn't really know the difference). The physical hardware only matters to ESXi itself, and to whether you need server features such as redundant power supplies, rack-mount capability, and out-of-band management, which I think are overkill for a home lab. With a desktop or workstation such as a Dell Precision, HP Z series, or Lenovo ThinkStation, your chances of getting GPU passthrough working with graphics cards that are not on the HCL are higher than with server hardware (assuming you intend to use GPU passthrough sometime in the near future). The Haswell i7/E3 Xeons are limited to 32GB RAM and 16 PCIe lanes, which could limit expandability and lessen the appeal of a desktop Haswell. You could try searching for an E5-1620 v3 under "PC Desktops & All-in-Ones" on eBay. But it is your money to spend, not mine, so it is up to you.

cmp828a
Contributor

Hello,

Again, thank you for the further thoughts and ideas. Actually, on the Dell R510, ESXi 6.0 sees both processors: I have 12 cores, and with hyper-threading it thinks it has 24. I must not have stated things well; when I loaded ESXi 6.5 on a dual-CPU, 6-core box and applied the free license, it would only license 1 socket with unlimited cores. That is why I am running ESXi 6.0 at the moment.

But yes, I will go back and align the CPUs/cores to even increments. That is a good thought, to keep things at 2/4/6/8 and so forth.

Over the years, to keep my skills as current as I can, I have tried to mirror my home network/lab as closely as possible to what is used in business; that is why I buy servers. I also just like the ruggedness and redundancy found in servers. I don't need them for home use, but I prefer them. You make a good point about workstations, though, and that is good information for other people thinking about their home labs. At this time, GPU passthrough with graphics cards is not important to me: I do not play games or do graphics-intensive work like image/video editing or rendering, and the video RAM and capability provided in the VM settings work well for my needs.

My main goal is to match server hardware and CPUs to the recommended specifications of ESXi 6.0 and 6.5. I figure the HP ML350 with the Haswell will relieve the issues and give me a server ready to run ESXi 6.5. I think the Gen 8 can run 12-core CPUs, so I could run it with one 12-core CPU and then update to ESXi 6.5. My main computer runs Linux, and there is no native Linux ESXi client, so I really like the idea of managing ESXi 6.5 natively from a Linux web browser. Right now I have a Windows 10 VM running on VMware Workstation 12.5 for Linux, which I bought last summer to run the Windows client to work directly with the ESXi server, and then with the VMs by console.

Thank you for the link and I will be reading up on it.

Chad

cmp828a
Contributor

Thank you for that information. So several aspects or areas of latency combine to generate the overall reading. All of you have assisted me and given me great insight; I now have a good understanding of the issue and know the direction I will take. To everyone who contributed to my post: thank you. Everything is appreciated.

Chad
