VMware Cloud Community
pingnithin
Enthusiast

Will a 1Gb network be sufficient?

Hi,

We have an implementation of 5 ESXi servers with an iSCSI SAN (Dell EqualLogic PS 6220). No vCenter. 80 physical machines will be converted to VMs.

There are 4 x 1Gb NICs available for each of these 5 servers. The design for this VMware network configuration is attached to this post. The storage is connected using iSCSI port binding with the Round Robin path policy.
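
For reference, a port-binding setup like the one described usually comes down to a couple of esxcli commands. This is a hedged sketch: the adapter name (vmhba37), the VMkernel interfaces (vmk1/vmk2) and the naa. device ID are placeholders, not values from the actual environment.

```shell
# Bind the two iSCSI VMkernel ports to the software iSCSI adapter
# (vmhba37, vmk1 and vmk2 are placeholder names):
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk2

# Set Round Robin path selection on a volume (device ID is a placeholder):
esxcli storage nmp device set --device=naa.6090a0XXXXXXXXXX --psp=VMW_PSP_RR
```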

In vSwitch0, we have 2 NICs used for VM traffic and the management network. In vSwitch1, we have 2 NICs used for iSCSI traffic alone.

My question is about the network utilization of these servers. Will 1Gb NICs be enough for a host with 20 VMs? Will there be a performance crunch?

We can enable Jumbo Frames if required. Please suggest.

Regards,

Nithin

Nithin Radhakrishnan www.systemadminguide.in
1 Solution

Accepted Solutions
JarryG
Expert

Not sure if those 1Gbit NICs are enough for you; that depends on your VMs' load. But from a security point of view it is not a good idea to put the management port and the VMs on the same switch. If any VM got compromised, it could be used for an attack against ESXi. IMHO the management port should be completely separated from the VMs.

Personally, I would use vSwitch0 for management and iscsi2/3, and vSwitch1 only for VMs. Depending on traffic, you could later move one NIC from vSwitch0 to vSwitch1 or vice versa...
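
Moving an uplink between standard vSwitches later is a two-command operation. A minimal sketch, assuming the NIC being moved is vmnic1 (the vmnic name is a placeholder):

```shell
# Detach vmnic1 from vSwitch0, then attach it to vSwitch1
# (vmnic1 is a placeholder for whichever uplink you move):
esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
```

After the move, check the NIC teaming policy on the affected port groups so the remaining uplinks are still active where you expect them.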

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend being noob with all those points! 😉

7 Replies
rcporto
Leadership

Without knowing the network utilization of the physical servers, we can't ensure that your design will not suffer any impact... did you run any capacity planner?

About Jumbo Frames: I recommend you enable them for the iSCSI traffic.
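
Enabling jumbo frames for iSCSI means raising the MTU on both the vSwitch and the iSCSI VMkernel ports, and the physical switch ports must also allow MTU 9000. A hedged example (vSwitch1, vmk1/vmk2 and the SAN group IP are placeholders):

```shell
# Raise the MTU on the iSCSI vSwitch and its VMkernel interfaces
# (names are placeholders for this environment):
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Verify end-to-end with a non-fragmenting ping to the SAN group IP
# (8972 = 9000 minus IP/ICMP header overhead; IP is a placeholder):
vmkping -d -s 8972 10.0.0.10
```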

---

Richardson Porto
Senior Infrastructure Specialist
LinkedIn: http://linkedin.com/in/richardsonporto
JPM300
Commander

Hello pingnithin,

If you are only putting 20 VMs per host you should be okay, as most server OSes rarely use 100% of a NIC's bandwidth. They commonly use 10-20%, which is why VMware can get so many VMs on a host. Also, even with LACP on your physical hosts, due to the way LACP and other trunking/load-balancing protocols work, each flow is usually still sent down only one pipe, or if it is split, it's never a true 50/50. Nonetheless, if you are unsure what your current network requirements are, you can get in touch with a VMware partner and have them run the VMware Capacity Planner tool for you, or if you want to run your own tool you can look into Microsoft's MAP tool or others that will log the traffic for a week or so to get some numbers on the network utilization.
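
The 10-20% figure above can be turned into a rough back-of-envelope check. This is only a sketch under an assumed worst case where all 20 VMs hit 10% of a 1Gb link at the same moment; real averages are usually well below that:

```shell
# Back-of-envelope: 20 VMs, each assumed to peak at 10% of 1 Gbps
vms=20
nic_mbps=1000
peak_pct=10                         # assumed per-VM high-water mark
demand_mbps=$(( vms * nic_mbps * peak_pct / 100 ))
uplink_mbps=$(( 2 * nic_mbps ))     # two 1Gb uplinks on vSwitch0
echo "Estimated peak demand: ${demand_mbps} Mbps vs ${uplink_mbps} Mbps of uplinks"
```

Even in this pessimistic case the demand only just reaches the two uplinks' capacity, which is why 20 VMs per host is generally workable; they rarely all peak simultaneously.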

The bigger thing I would look into is how you are going to perform your backups once you virtualize. If each host was previously running a 4-link trunk, and it wasn't NFT but LACP, your backup times will slow down considerably because the servers go from 4 x 1Gb NICs down to a single 1Gb NIC for backup purposes. One way around this is to back up the VM as a whole in what's called SAN mode with some backup software. Many vendors like Veeam and Symantec have this option, where the software grabs the VMs directly from the SAN over the iSCSI network instead of going through the virtual network, which helps back up the VMs faster.

I hope this has helped

pingnithin
Enthusiast

Thanks for the good post.

We don't have a distributed switch. Can I still make use of LACP?

Regarding the backup, this is a development environment. Therefore the backup will be taken only on demand.

Nithin Radhakrishnan www.systemadminguide.in
JPM300
Commander

If you are not running ESXi 5.5 with a distributed switch, using the vSphere Web Client to leverage the new LACP abilities that VDS and 5.5 have brought, I would say it's not worth the headache.

Here are a couple of good articles on LACP vs. LBT (VMware's Load-Based Teaming with VDS):

http://longwhiteclouds.com/2012/04/10/etherchannel-and-ip-hash-or-load-based-teaming/

http://frankdenneman.nl/2011/02/24/ip-hash-versus-lbt/

If you are using standard switches, I would just use the default "Route based on originating virtual port ID" and leave it at that. If you move to VDS, I would go with LBT unless you have one VM that uses 80-90% of the bandwidth consistently.

Hope this has helped. 

pingnithin
Enthusiast

Will a 1Gb NIC be sufficient for the VM traffic?

Nithin Radhakrishnan www.systemadminguide.in
JarryG
Expert

You mean for VMs? That depends. I do not know what kind of VMs you are running. For lightly loaded VMs it might be sufficient. But if you are running, e.g., a virtualized storage server with many clients, even a single 10Gb NIC might not be enough...

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend being noob with all those points! 😉