We currently have our five hosts configured with 8 pNICs each. We have the following vSwitches:
vSwitch0 = Service Console and VMotion on private network with 1 pNIC
vSwitch1 = iSCSI on private network with 2 pNICs (1 pNIC in standby)
vSwitch2 = Public network for VMs with 4 pNICs
vSwitch3 = Service Console and VMkernel on public network with 1 pNIC
So the question is: does anyone have suggestions on how to improve this network configuration?
Any thoughts or comments are greatly appreciated.
First, what was the thinking behind dedicating 4 NICs to VMs? Was there a specific reason?
How many VMs per host? Which OS? What software is installed? What are the hardware specs?
Without knowing the above, I'd break it down like this.
Dual-NIC each of the below:
1. COS public
2. VMotion + a COS connection if you wish (private)
3. iSCSI (private)
4. VM traffic
That'll use the 8 NICs you have available. Gigabit for everything would be great, too.
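As a sketch, a layout like that can be built from the service console with esxcfg-vswitch. The vmnic numbers and port group names here are hypothetical; adjust them to your hardware:

```
# VMotion + COS switch (private), two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
# repeat the pattern for the COS-public, iSCSI, and VM-traffic switches
```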
If you found this posting to be useful, great. I didn't waste your time.
This was a design by another engineer, so I think the 4 NICs for VMs came from the idea that it would increase bandwidth.
Each host will run about 15-25 VMs on Windows 2003, mostly IIS web servers plus a few SQL databases. Hardware is Dell 2950s with 32 GB of RAM. All NICs are gigabit.
Hi,
iSCSI will most likely be the heavy one for network usage. You have one NIC active and one standby; I would set both active.
Use esxtop as a reference for where your network load is. It will give you some idea of what's needed.
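If it helps, esxtop has both an interactive network view and a batch mode for capturing counters over time (sample interval and iteration count below are just examples):

```
# interactive: run esxtop, then press 'n' for the network view
esxtop
# batch mode: 5-second samples, 60 iterations, saved for later review
esxtop -b -d 5 -n 60 > esxtop-stats.csv
```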
Other than that, it seems to be a fairly typical configuration.
At the least, if the physical NIC that services vSwitch3 goes down, easy management of that host goes with it. To alleviate that, some goofy stuff would have to happen with dual-homing a VirtualCenter VM so it can see both the private and public sides. Also, if the NIC servicing switch 1 dies, VMotion goes away as well; this may fail over to switch 3 (where the VMkernel is listed - not sure if that's what was meant).
In my opinion this isn't very standard at all. I think the most pressing issue is failover and uptime; performance on a gigabit network is tough to flood with Windows boxes on decent hardware.
Our NFS dev farm pushes more VM traffic than NFS data - lots of cached data, I suppose. In fact, the first time I monitored traffic while several VMs were booting, there were only kilobytes of data moving across the wire. I was really surprised.
So will iSCSI bandwidth be aggregated over the two NICs if they are both active?
It doesn't really aggregate as a trunk; the software iSCSI stack does not currently have multipathing.
The NIC teaming algorithm will try to spread the load across the team if there is more than one IP connection. So if you have multiple IPs on the iSCSI target host, it will load both cards, but not necessarily equally.
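The idea behind that spreading can be sketched in a few lines of Python. This is an illustration of IP-hash style teaming, not VMware's actual hash function; the IP addresses are made up:

```python
# Each source/destination IP pair hashes to one uplink, so sessions to
# multiple target IPs can land on both NICs - but not necessarily evenly.
def pick_uplink(src_ip: str, dst_ip: str, num_nics: int = 2) -> int:
    # Sum the octets of both addresses and take the result modulo the
    # number of uplinks (a toy stand-in for the real hash).
    octets = [int(o) for ip in (src_ip, dst_ip) for o in ip.split(".")]
    return sum(octets) % num_nics

# Two target IPs on the iSCSI host end up pinned to different uplinks:
for target in ("10.0.0.10", "10.0.0.11"):
    print(target, "-> NIC", pick_uplink("10.0.0.5", target))
```

Note that a single target IP always hashes to the same uplink, which is why one iSCSI session alone never exceeds one NIC's worth of bandwidth.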
You could achieve the same with active/standby by making sure the hosts are split equally across the NICs, which is a good option as well.
e.g.
host1 active on NIC 1
host2 active on NIC 2
host3 active on NIC 1
...
Hello,
One security concern with this setup: putting either the SC or VMotion on a public network (i.e., available to the outside world, not just an internal administrative network) is a major threat to your entire environment.
VMotion is a cleartext protocol that carries the running memory image of a VM. If your VMs are processing private data, putting VMotion on a public network is asking for that data to be sniffed.
The SC is the gateway to your virtual environment, and therefore has access to just about everything. It would also become an attack point.
Now, this is using the normal definition of 'public'; hopefully you mean your normal internal network. Even so, I would protect both of these networks carefully.
Granted, this all depends on your security policy and how secure you really want to be, but when people use the word 'public', I immediately think of a DMZ and/or something directly attached to the internet - which would scare most security types.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354, As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
The issue we saw was when trying to manage VirtualCenter remotely from the VI Client: we were unable to connect to the VMs or use SSH unless we were on a machine on the private network. Is there another way to configure this that I might be missing? Also, VMotion is only enabled on the private network.
Hello,
It is much easier to place the VC server on the administrative network; VC is just as important to protect as the ESX servers. You can access it remotely by using VC proxying. Another option is to place workstations or even VMs within the secure area and access them using RDP over SSH or some other capability.
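For the RDP-over-SSH option, a minimal sketch (hostnames here are hypothetical): forward a local port through an SSH jump host, then point the RDP client at it:

```
# forward local port 3390 to the admin workstation's RDP port via the jump host
ssh -L 3390:admin-ws.internal:3389 user@jumphost
# then connect your RDP client to localhost:3390
```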
Best regards,
Edward L. Haletky
VMware Communities User Moderator
If you have a Citrix Presentation Server environment, you can publish secured applications and access them via a web browser from anywhere. For example, you can deploy WinSCP, PuTTY, and the VC client as published applications for administrative use only, locked down so that only authorized personnel can reach the VC server and the ESX hosts. You can also place a dedicated Windows XP machine in the secure area to use as a jump server, as Edward mentioned.
If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Regards,
Stefan Nguyen
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
Here's how I have my VMware ESX NICs set up... I also run 8 physical NICs in each host.
VMware Network Setup with 8 NICs
I run 4 vSwitches:
1 for NFS with 3 NICs
1 for Virtual Machines and Service Console with 3 NICs
1 for VMotion with 2 NICs
1 for testing with 0 NICs
Matt Brown
EWU