VMware Cloud Community
AnibalR
Contributor

Vmotion and shared SAS storage

Hi,

I'm a bit confused about shared SAS storage and the resources needed for HA/vMotion. With shared SAS connectivity to the storage, do I still need physical NICs for vMotion? ?:|

In the best-case scenario, how many NICs should I use to take full advantage of vSphere 4 Essentials Plus with shared SAS? How is the connection set up in this scenario? Does anyone have a diagram showing how it looks? I need to connect 3 servers running ESXi to this shared SAS storage.

Why is the vMotion network configured over the LAN instead of using the SAS connectivity?

Thank you for your help!!

AnibalR

2 Replies
a_p_
Leadership

I'm a bit confused about shared SAS storage and the resources needed for HA/vMotion. With shared SAS connectivity to the storage, do I still need physical NICs for vMotion? ?:|

Why is the vMotion network configured over the LAN instead of using the SAS connectivity?

vMotion is the process of migrating a virtual machine's running workload (its memory and execution state) from one host to another; it does not touch the virtual disks. This migration is done over the network. The SAS connection is only used for storage access.
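To illustrate that vMotion rides on the LAN rather than the SAS path: on an ESXi 4 host you would create a VMkernel port for vMotion on a LAN-facing vSwitch. A minimal sketch from the Tech Support Mode console (the vSwitch name, port group name, and IP/netmask are just example values, not something specific to your setup):

```shell
# Example only: add a "VMotion" port group to vSwitch0 and attach a
# VMkernel NIC to it (IP and netmask are placeholders for your vMotion subnet)
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"
# vMotion is then enabled on this VMkernel port in the vSphere Client
```

Note that nothing here references the SAS HBA at all; the shared SAS storage only needs to be visible to both hosts so the VM's disks don't have to move.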

In the best-case scenario, how many NICs should I use to take full advantage of vSphere 4 Essentials Plus with shared SAS? How is the connection set up in this scenario? Does anyone have a diagram showing how it looks?

Depending on your virtual machine networking needs, I'd recommend you start with 4 NICs on each host. Use two of them for management and vMotion, and the other two for virtual machine networking. For the management and vMotion networks, configure the two port groups as active/standby (e.g. NIC1 active for management and NIC2 active for vMotion).
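To make that layout concrete, here is a sketch of the two-uplink management/vMotion vSwitch from the ESXi console (the vmnic numbers and names are assumptions for illustration; the per-port-group failover order itself is configured in the vSphere Client, not via these commands):

```shell
# Example only: one vSwitch carrying both Management and VMotion,
# with two physical uplinks for redundancy
esxcfg-vswitch -L vmnic0 vSwitch0       # uplink 1 (intended active for Management)
esxcfg-vswitch -L vmnic1 vSwitch0       # uplink 2 (intended active for VMotion)
esxcfg-vswitch -A "VMotion" vSwitch0    # add the VMotion port group
# Then, in the vSphere Client: vSwitch0 Properties > each port group >
# NIC Teaming > override the failover order (one NIC active, the other standby)
```

The second pair of NICs would go on a separate vSwitch dedicated to virtual machine traffic.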

It will look like this: http://communities.vmware.com/servlet/JiveServlet/downloadImage/9334/Nics.jpg

André

golddiggie
Champion

I would also add that, in addition to the active/standby configuration for the vSwitches, you should use connections from different physical NICs. For example, if you have two onboard NICs, add at least one dual-port (Gb, server-class, Intel of course) card and use one of its ports for the management network and the other for the VM traffic vSwitch. If you already have four onboard NICs, I would still add at least a dual-port (if not quad-port) NIC to the host and do the same. If you only have one onboard NIC, add a pair of dual-port NICs to the host. Basically, build in enough redundancy so that you have NO single point of failure. I do this for every host server I build.

Since I tend to go the route of iSCSI SANs for my virtual environments, I won't go below six pNIC ports in a host. A comfortable minimum is really eight ports, though (giving enough redundancy to avoid a SPoF). Any ports beyond that just eliminate (or reduce) potential network performance issues, especially when the physical switches are properly configured and utilized (also with redundancy).

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.