VMware Cloud Community
mikelane
Expert
Jump to solution

Lab Setup - Gigabit performance question

I would like to set up a small lab to play around with ESX for learning.

My plan is to initially set up one dual-core AMD ESX server (1x PATA hard drive for the ESX install) connected via gigabit Ethernet to Openfiler iSCSI storage, which will host all the VMs (3x hard drives / RAID 5) - so how many e1000 NICs would you recommend for performance?

I know that it is recommended to have separate e1000s for console / VMotion / 'production', but I will probably have to lump all of these services together (when I get to building ESX number 2) over 2 gigabit connections on each of the ESX machines and the Openfiler machine (a total of 6 e1000 connections divided equally between the 3 machines).

Is the number of NICs in my plan going to hinder network performance, or will the performance be acceptable (or are an 8-port gigabit switch and 6 gigabit connections not enough)?

The ESX box would be a dual-core AMD chip at 2.3 GHz with up to 4 GB of RAM. Openfiler would be an AMD Athlon XP 2200+ with 1.5 GB RAM plus a SATA RAID 5 card.
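
For a rough feel for whether the gigabit links or the Openfiler disks will be the limit, here is a back-of-envelope sketch; the per-disk throughput, protocol-overhead and RAID 5 write-penalty figures below are assumptions for illustration, not measurements of this hardware:

```python
# Back-of-envelope: single gigabit link vs. the 3-disk SATA RAID 5 array.
# Every figure here is an assumption for illustration, not a measurement.

LINK_MB_S = 1000 / 8 * 0.9          # 1 Gbit/s minus ~10% TCP/iSCSI overhead -> ~112 MB/s
DISK_SEQ_MB_S = 60                  # assumed sequential throughput of one SATA disk
DISKS = 3

array_seq_read = DISK_SEQ_MB_S * DISKS      # reads can stripe across all members
array_write = array_seq_read / 4            # rough RAID 5 write penalty (4 I/Os per write)

print(f"Usable gigabit link      : ~{LINK_MB_S:.0f} MB/s")
print(f"RAID 5 sequential read   : ~{array_seq_read:.0f} MB/s (best case)")
print(f"RAID 5 write (estimate)  : ~{array_write:.0f} MB/s")

# Only a best-case sequential read could saturate one link; the mixed, mostly
# random I/O that a handful of VMs generate (and any RAID 5 write) sits well
# below wire speed, so extra NICs mainly buy traffic separation, not raw speed.
```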

Any advice is much appreciated.

0 Kudos
1 Solution

Accepted Solutions
Dave_Mishchenko
Immortal
Jump to solution

Unfortunately it's not supported by ESX. ESX currently only supports QLogic iSCSI HBAs - see page 13 of http://www.vmware.com/pdf/vi3_io_guide.pdf for the supported models.

Is this lab just going to be for you? If so, you'll be fine with just 2 NICs in each host. I have a similar setup with 2/3 NICs per ESX host running about 10 VMs each - 3 hosts in total plus StarWind as my iSCSI target. My performance bottleneck is my physical drives - 5 SATA2 disks in a RAID 5 array. For read operations it can sustain about 40 MBps and 1000 IO/s, and so far my VMs (a mix of SQL, app, DCs, etc.) haven't pushed that limit yet.

View solution in original post

0 Kudos
12 Replies
Rumple
Virtuoso
Jump to solution

Most systems come with at least 2 built-in NICs. Depending on your motherboard you may need to add a couple of PCI Intel cards anyway.

You could easily get by with 2 for playing around, although 3 would be nice. I believe the SC and the iSCSI NICs need to be on the same network anyway, so just add a port group to the SC vSwitch and use the other NIC for the LAN.

If you plan on adding a second server then I would really use 3... 1 for iSCSI, 1 for SC/VMotion together and 1 for the LAN.
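
A minimal sketch of how that 3-NIC split could be laid out on each host (the vSwitch, port group and vmnic names are purely illustrative, not taken from any real configuration):

```python
# Illustrative 3-NIC layout per ESX host; all names are made up for the example.
nic_layout = {
    "vSwitch0": {"uplink": "vmnic0", "portgroups": ["Service Console", "VMotion"]},
    "vSwitch1": {"uplink": "vmnic1", "portgroups": ["VMkernel iSCSI"]},
    "vSwitch2": {"uplink": "vmnic2", "portgroups": ["VM Network"]},
}

for vswitch, cfg in nic_layout.items():
    print(f"{vswitch} ({cfg['uplink']}): {', '.join(cfg['portgroups'])}")
```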

It's not as if you are going to be hammering the SC with backups while you are trying to VMotion or anything, so it should be just fine.

RParker
Immortal
Jump to solution

Rumple is right. I lump all my services on a single gig port. In 20-plus years of tech support, I have yet to see a single NIC fail. Usually it's a driver issue or other hardware in the machine that fails, but never the NIC.

Onboard NICs are a bad idea, but since you are playing around, it should be fine. I would add a PCIe NIC later, though, if you start to use ESX more.

0 Kudos
Rumple
Virtuoso
Jump to solution

My god...RParker and I agree on something..lol

The only time I've seen physical NICs fail (and I saw it en masse over 6 months) was when someone was running an enterprise application on a bunch of pizza-box SunFire boxes, and they were so heavily overloaded that the onboard NICs were dying like crazy. Sun replaced about 12 motherboards on the 6 servers that were running the database pieces of the application. All the identical servers running different components were fine.

That was definitely the exception and not the rule, though...

0 Kudos
RParker
Immortal
Jump to solution

Hey! Just what are you implying? :) Are you saying I am hard to get along with?

I can be agreeable.. as long as I get my way... haha..

0 Kudos
Rumple
Virtuoso
Jump to solution

I'd just say we have different styles of Administration is all :O)

0 Kudos
mikelane
Expert
Jump to solution

Thanks for the replies guys.

I forgot to mention that I have two Intel PRO/1000 MTs (PCI) on the way for my first ESX box.

RParker - so you have everything going over a single gigabit Ethernet NIC on your ESX server? If so, what is the performance like loading VMs from iSCSI over your LAN? Which is to say, will I see improved performance lumping everything over two, for example, or is the point of multiple NICs purely to segregate ESX traffic?

I don't mind trying to add more if I am really going to see a benefit, but my limitation of two e1000 NICs comes from wanting to build a small form factor ESX server - I am only going to get 2x PCI slots on a micro-ATX board.

If I really need 3 e1000 NICs, can anyone recommend a PCI Express model that I should be looking for?

And thanks again guys for the advice.

0 Kudos
RParker
Immortal
Jump to solution

Hmmm... well, I guess I only have VMotion, console, VMkernel, and backup on the GigE, but we use Fibre Channel, not iSCSI.

Besides, you want to use an HBA for iSCSI rather than the GigE NIC, even though the NIC is capable. The ESX host will suffer more than just network performance: you could increase CPU usage by 20-30% just by NOT using an iSCSI HBA. So I would recommend using an iSCSI HBA and putting everything else on the same gig NIC port.
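
To put that in perspective, here is a rough, purely illustrative estimate of what a 20-30% software-iSCSI penalty could mean on the dual-core box planned above; the usable link throughput, the realistic storage rate, and the assumption that overhead scales linearly with storage traffic are all assumptions, not measurements:

```python
# Illustrative only: scale the quoted 20-30% software-iSCSI CPU cost to a
# 2 x 2.3 GHz host. The linear scaling with storage traffic is an assumption.

HOST_GHZ = 2 * 2.3              # total CPU on the planned ESX box
SW_ISCSI_FRACTION = 0.25        # midpoint of the 20-30% quoted above (at full load)
LINK_MB_S = 110                 # usable gigabit throughput (assumed)
STORAGE_MB_S = 40               # what a small RAID 5 array realistically sustains (assumed)

load_fraction = STORAGE_MB_S / LINK_MB_S
cpu_cost_ghz = HOST_GHZ * SW_ISCSI_FRACTION * load_fraction

print(f"Software iSCSI CPU cost: ~{cpu_cost_ghz:.2f} GHz of {HOST_GHZ:.1f} GHz "
      f"({SW_ISCSI_FRACTION * load_fraction:.0%} of the host)")
```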

We have dual-port Intel PCIe cards, and they perform quite well. I don't suffer any connectivity problems.

VMotion is a fairly quick process (it depends on the RAM size of the VM); most can be accomplished in 20 seconds or less. The console port is only used during management of the ESX server and for communicating with the VI server if you have one. The VMs themselves don't generate much traffic, and since we don't have high NIC usage on the VMs, they basically only get used during RDP sessions.

Taking this all into account, a gig port has quite a lot of bandwidth. Even trying to force the issue by simultaneously doing a VM convert direct to an ESX server, FTPing files from that ESX server to another Windows machine with a GigE NIC, and purposely VMotioning VMs on and off that server, we STILL don't get more than 40% utilization on the NIC.
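
As a rough sanity check on the "20 seconds or less" figure, a VMotion is dominated by copying the VM's RAM over the VMotion link, so duration scales with RAM size; the usable link speed and the re-copy factor for pages dirtied mid-copy below are assumptions:

```python
# Rough VMotion duration estimate: the VM's RAM copied over a gigabit link.
# The usable link speed and dirty-page re-copy factor are assumptions.

def vmotion_estimate_seconds(vm_ram_mb, link_mb_per_s=110, recopy_factor=1.2):
    """Time to ship the VM's RAM (plus a little re-copying) over the link."""
    return vm_ram_mb * recopy_factor / link_mb_per_s

for ram_mb in (512, 1024, 2048):
    print(f"{ram_mb} MB VM: ~{vmotion_estimate_seconds(ram_mb):.0f} s")
# A 1-2 GB VM over a dedicated gigabit link lands at roughly 10-25 seconds,
# which lines up with the "20 seconds or less" experience for smaller VMs.
```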

Gig is a BOAT load of bandwidth, and since the majority of what you do on ESX is mostly VM traffic, you shouldn't have any performance issues. That is why I stress that people should ONLY use PCIe NICs - if I did this same test on another server with onboard NICs, I could easily render an ESX server useless.

So I have done lots of empirical testing and have found that onboard NICs are nowhere near as good as PCIe-based NICs, especially Intel - those are fantastic NICs. I have tried to push them, and they are very efficient.

mikelane
Expert
Jump to solution

You will have to excuse my ignorance but will something like this do?

The only gigabit HBAs I could see were Fibre Channel ones - I have no idea if this one is gigabit or not...

I understand that an HBA encapsulates data for transmission over copper and also makes sure that everything arrives in the right order.

So, looking at one for the first time, I am tempted to think aloud and ask whether one of the ports connects directly to another HBA while the second connects to a NIC on each machine? Or do I literally get two ports on the network, so to speak?

I was a little surprised to see HBAs in the $500-plus range, but hopefully something like this will do for me?

I appreciate the advice; even though I do not want to spend a fortune learning ESX, I do want to get something that will perform reasonably well.

0 Kudos
Dave_Mishchenko
Immortal
Jump to solution

Unfortunately it's not supported by ESX. ESX currently only supports QLogic iSCSI HBAs - see page 13 of http://www.vmware.com/pdf/vi3_io_guide.pdf for the supported models.

Is this lab just going to be for you? If so, you'll be fine with just 2 NICs in each host. I have a similar setup with 2/3 NICs per ESX host running about 10 VMs each - 3 hosts in total plus StarWind as my iSCSI target. My performance bottleneck is my physical drives - 5 SATA2 disks in a RAID 5 array. For read operations it can sustain about 40 MBps and 1000 IO/s, and so far my VMs (a mix of SQL, app, DCs, etc.) haven't pushed that limit yet.
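
Working backwards from those figures, the drives rather than the network set the per-VM budget. A quick sketch, where the array numbers come from the post above, the 10-VM count matches the setup described, and the even split between VMs is an assumption:

```python
# Per-VM budget implied by the array figures above (~40 MB/s, ~1000 IO/s).
# The 10-VM count is from the post; the even split between VMs is an assumption.

ARRAY_MB_S = 40
ARRAY_IOPS = 1000
VMS = 10

print(f"Throughput per VM : ~{ARRAY_MB_S / VMS:.1f} MB/s")
print(f"IOPS per VM       : ~{ARRAY_IOPS / VMS:.0f} IO/s")
# ~100 IO/s per VM is plenty for mostly idle lab guests, which is why the
# drives, not the gigabit links, are the first thing to run out.
```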

0 Kudos
mikelane
Expert
Jump to solution

Yes, this lab is just going to be for me :)

I was thinking that the storage must be the limiting factor (as with any computer!)... I will be running far fewer than 30 VMs, so I think I should be fine with my 3x SATA hard drives in RAID 5 using Openfiler. I might be able to squeeze up to 10 VMs out of 8 GB of RAM though ;)
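
For the "10 VMs out of 8 GB" guess, the arithmetic looks roughly like this; the service console reservation and per-VM overhead figures are assumptions for illustration:

```python
# Rough per-VM memory budget on an 8 GB host.
# The service console reservation and per-VM overhead figures are assumptions.

HOST_RAM_MB = 8 * 1024
SERVICE_CONSOLE_MB = 512      # memory set aside for the ESX service console (assumed)
PER_VM_OVERHEAD_MB = 80       # virtualization overhead per VM (assumed)
VMS = 10

per_vm_mb = (HOST_RAM_MB - SERVICE_CONSOLE_MB) / VMS - PER_VM_OVERHEAD_MB
print(f"~{per_vm_mb:.0f} MB per VM before any memory overcommit")
```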

Now that I know that, with plain NICs, the iSCSI processing falls on the host CPU rather than being offloaded to an HBA, I will think about maybe upping my CPU ~ although my aim really was to get something that idles without drawing vast amounts of power.

Thanks very much for your advice guys, I feel much happier about my planned build now.

Dave, if you have time to post, I am curious what other hardware you have in your 3 ESX servers, as a comparison.

Which version of StarWind are you using ~ I assume not the free one?

0 Kudos
Dave_Mishchenko
Immortal
Jump to solution

For StarWind I'm using an NFR license for the professional version. There was a post a while back with a free offer for the license, and mine is good for a year. So far it's run well and Rocket Division support has been very good. Once it expires I may move to something on Linux, but I'm more of a Windows guy, so it was much easier for me to set up.

My servers

IBM x335 - single 2.6 GHz Xeon - 4 GB RAM

IBM x336 - single 3.0 GHz Xeon - 5 GB RAM

Asus P5M2/SAS - 2.4 GHz quad-core Xeon - 8 GB RAM.

The Asus is a more recent addition; I started with just the x335 and local storage. There have been lots of times when I had the CPU maxed on the IBMs, but it's been more often the case that I was running low on memory. As mentioned, I run a mix of servers including domain controllers, firewalls, SQL Server, Oracle, app servers, Exchange and Citrix, but most of the time I may only be using 3 or 4 at once, so most of them just sit there using 100 MHz to keep running.

As an aside, I would also give some thought to backup. While this is just a test environment, I've had the occasional VM go bad, so I'm using www.esxpress.com (free for full backups) to back up to a separate disk on my file server.

0 Kudos
mikelane
Expert
Jump to solution

Just wanted to thank you guys again for all your help!

0 Kudos