SupportBDBC
Contributor

Dell R910 + Equallogic in ESX 4.0

Dear all,

We are after some advice on setting up our new ESX hosts.

We have just purchased 3 new Dell R910s to replace our old 2950 hosts. One thing we are struggling to work out is how to allocate the network ports to keep decent performance for both iSCSI and general networking.

I have attached a diagram of the split we have worked out, which we think will work, but as 2 of these hosts will be replacing 8, we need to make sure it will perform correctly.

We run all the hosts off Dell EqualLogic storage (2x PS6000 and 1x PS6500 arrays), each with 4 quad-port modules (but two modules in standby).

Please could we have some feedback on what the community think will be a good split of network and iSCSI?

We have currently allocated the ports like this:

2 x for heartbeat

2 x for standby

9 x for networking

7 x for iSCSI
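
For what it's worth, this is roughly how we were planning to wire up the iSCSI side from the service console. It is only a sketch: the vSwitch name, vmnic numbers, IP addresses and vmhba33 are placeholders for our environment, and we would repeat the port group / VMkernel steps for each of the 7 iSCSI NICs.

# one vSwitch for iSCSI with the iSCSI uplinks attached
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2

# one VMkernel port group per uplink (each port group then set to a single
# active uplink in the vSphere Client, as port binding requires)
esxcfg-vswitch -A iSCSI-1 vSwitch2
esxcfg-vswitch -A iSCSI-2 vSwitch2
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI-2

# bind the VMkernel ports to the software iSCSI adapter for multipathing
# (check your vmk numbers with esxcfg-vmknic -l first)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33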

Best Regards

Ian

ProPenguin
Hot Shot

I can tell you that we run 4 hosts with a total of 40 VMs and I am not seeing any performance issues. Our setup is below. It is pretty simple. Hope this helps.

Setup:

Network: 2 NICs (1000 Mb/s each)

iSCSI/HB/VMotion: 2 NICs (1000 Mb/s each)

SAN:

EqualLogic PS6500e: 4 NICs (1000 Mb/s each)

SupportBDBC
Contributor

Thanks for that. We are currently running 100 VMs. Would you suggest giving more network ports to iSCSI than to networking, keeping them the same, or giving more to networking?

We will probably be at more than 130 VMs before the end of the year.

ProPenguin
Hot Shot

Just from my experience, I would say what you currently have allocated looks good. What I would recommend is setting up a SAN HQ server, if you haven't yet, and keeping tabs on your SAN traffic. Also keep an eye on the incoming and outgoing traffic on your network ports and on your iSCSI ports. That way, if you notice one or the other being hammered, you can adjust accordingly. That is the beauty of virtualization: you can adapt as needed. Hope this helps.
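
On the host side, even something as simple as an esxtop batch capture from the service console will show you whether the network or iSCSI NICs are the busy ones (the interval, count and file name below are just examples, adjust to taste):

# sample every 10 seconds for roughly an hour and review the CSV later
esxtop -b -d 10 -n 360 > /tmp/esxtop-capture.csv

Interactively, running esxtop and pressing "n" gives you the per-vmnic network view as well.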

SupportBDBC
Contributor

Thanks for your help. We have SAN HQ set up, if that's what you mean?

ProPenguin
Hot Shot

Yes, SAN HQ. :) Oh, and welcome to the VMware community.

SupportBDBC
Contributor

Hi All,

We are getting close to moving across to our new hosts, but now with ESXi 4.1.

Two questions.

As our main site will run on only two of these beasts, what happens if a host fails? Will VMs just go offline, leaving us to re-attach them to the working host? We currently run 8 hosts, so this has never really been an issue.

Should we keep vCenter on a physical server or move it to a VM? What would happen if the host running vCenter dies?

Thanks all.

cirilramos
Contributor

I would recommend creating a dedicated network and assigning dedicated NICs for backup. Unless you are running a video rendering farm or other network-heavy applications, the LAN/WAN traffic will rarely go beyond 1 Gb/s. Backup will use more bandwidth than your LAN/WAN networks.

I recommend using MPIO. In my experience with EqualLogic, the bottleneck is always IOPS; network speed suffers when your applications use small blocks. SAN HQ will tell you your maximum IOPS capacity based on your current traffic behaviour. It is also a good idea to run Iometer and see how much throughput you can get over VMDK and direct iSCSI connections, if you are planning to use them. With this test you will get an idea of how much throughput you can get from your EqualLogic, and you can then base the number of NICs you assign to iSCSI on that.
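
If you do go with MPIO on the VMware software iSCSI initiator, something along these lines sets Round Robin on a volume from the ESX/ESXi 4.x command line. This is only a sketch: the naa ID is a placeholder, and you should check your own device IDs and Dell's current EqualLogic guidance before changing path policies.

# list devices and their current path selection policy
esxcli nmp device list

# set Round Robin on one EqualLogic volume (repeat per device)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# commonly suggested EqualLogic tweak: switch paths every few I/Os
# instead of the default 1000
esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 3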

To answer your last questions: if you have HA enabled, all the guests on the dead host will be restarted on the other hosts. Read the VMware HA and DRS documentation. As for vCenter running as a virtual guest, I think HA and DRS will not kick in while vCenter is down, so you may need to manually register the vCenter VM and power it on on another host. I still like vCenter on a physical server; it is hard to troubleshoot when vCenter is down.
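
If it does come to that, recovering a virtual vCenter by hand is just a case of registering and powering on the VM on the surviving host, for example from the ESXi Tech Support Mode shell (the datastore and vmx path below are only examples for illustration):

# register the vCenter VM from the shared datastore on the surviving host
vim-cmd solo/registervm /vmfs/volumes/datastore1/vCenter/vCenter.vmx

# find its VM ID, then power it on
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on <vmid>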

JohnADCO
Expert

Most people with two ESXi hosts do just this (run vCenter as a VM). You can actually have the VMs already added to the other host's inventory so they are quicker to fire up if a host fails.

You just fire it up on a working host if the host with vCenter dies on you.

Now, I like to set up two datacenters and have the vCenter that manages each datacenter running on a different cluster/datacenter. But that's just me.
