I am working on the design of our Nexus 1000v vDS for use on HP BL490 G6 servers. The 8 pNICs are assigned as follows:
vmnic0,1: ESXi management, VSM-CTRL-PKT, VSM-MGT
vmnic2,3: vMotion
vmnic4,5: iSCSI, FT, Clustering heartbeats
vmnic6,7: Server and Client VM data traffic
Should I migrate all pNICs to the 1000v vDS, or should I leave vmnic0,1 on a regular vSwitch and migrate all of the others to the vDS? If I did migrate all of the pNICs, at a minimum I'd designate the vmnic0,1 VLANs as system VLANs so traffic could still flow before the VSM is reachable. My inclination is to migrate all the pNICs, but I've seen comments elsewhere on the forums that the VSM-related networks, and possibly the ESX(i) console, are best left off the vDS.
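By "designate as system" I mean marking those VLANs as system VLANs on the uplink port-profile, something along these lines (the profile name and VLAN IDs here are just placeholders, not our actual config):

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-12
  ! mgmt, control, and packet VLANs (hypothetical IDs) marked as system
  system vlan 10-12
  no shutdown
  state enabled
```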
Thoughts?
I think QoS can be handled at the Nexus 1000V level; you can always define a QoS policy with the necessary configuration for the specified VLANs. That should overcome the potential problem you described.
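As a rough sketch of what I mean (the class/policy names and DSCP value are placeholders, and the exact match options depend on your 1000v release, so treat this as untested):

```
! Classify vMotion traffic and mark it so upstream devices could act on it
class-map type qos match-any class-vmotion
  match protocol vmw_vmotion

policy-map type qos mark-vmotion
  class class-vmotion
    set dscp 46

! Apply to the vMotion vmkernel port-profile (hypothetical name)
port-profile type vethernet vmotion
  service-policy type qos input mark-vmotion
```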
Regards,
Craig
vExpert 2009 & 2010
Netapp NCIE, NCDA
The problem is that the HP VC modules don't support QoS tagging, so any tags applied on the 1000v won't be honored upstream.
The HP VC connection doesn't need to support QoS. The 1000v will throttle the outgoing traffic so vMotion traffic can't tie up the NIC's entire bandwidth. Once it egresses the physical NIC, it arrives at the incoming interface already rate-limited.
vMotion is normally a short burst of memory being copied across the link. Especially with 10G interfaces, sharing a vMotion interface with other traffic (including Mgmt) is not an issue. VMware has changed its "Best Practice" guidance for vMotion to cover 10G interfaces, which do NOT require a dedicated NIC.
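A hedged sketch of the kind of throttling I'm describing, using a policer on the vMotion port-profile (the rate, burst, names, and exact police syntax may vary by 1000v release, so verify against your version before using):

```
! Cap vMotion at a fraction of the 10G link so other traffic always has headroom
policy-map type qos limit-vmotion
  class class-default
    police cir 4 gbps bc 200 ms conform transmit violate drop

port-profile type vethernet vmotion
  service-policy type qos input limit-vmotion
```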
Regards,
Robert
Robert,
Thanks, that's excellent information... it made a big difference in my final design.
Derek
Paul Kelly wrote:
(....)
4) I like to separate iSCSI onto a separate switch, be it a vSS or a vDS, as I like to set jumbo frames at the switch level. Doing this while having other port groups on the same switch means your configuration gets quite messy. Having a separate vDS for iSCSI won't cost you anything extra; in fact, if your iSCSI were physically separated onto a different switch, this would also make it more secure.
How do you set a Nexus 1000V to use jumbo frames at the switch level? When I run esxcfg-vswitch -l, I get the following output. I have set up vmkernel NICs on this switch to use an MTU of 9000, but the switch itself still reports 1500:
DVS Name Num Ports Used Ports Configured Ports MTU Uplinks
OPS-N1K-VSM 256 60 256 1500 vmnic9,vmnic5,vmnic8,vmnic1
Keep in mind that I think this will change the MTU for all uplink ports on the switch as well, though I could be wrong about that.
The Edit Settings option is grayed out for a Nexus 1000V DVS.
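From what I've read, the MTU on a 1000v is supposed to be set from the VSM side rather than through vCenter, possibly on the uplink port-profile like this (untested, profile name is hypothetical, and the command placement may differ by release):

```
! Raise MTU on the Ethernet (uplink) port-profile; this would affect all
! uplinks inheriting the profile
port-profile type ethernet system-uplink
  mtu 9000
```

If the release doesn't accept mtu under the port-profile, it may need to be applied per Ethernet interface on the VSM instead.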
Hello RBurns-WIS,
In order to follow best practices, could I place the VSM management interface on a Standard Switch, and place the Data & Control interfaces into Nexus 1000V port profiles?
Thanks
Diego
Yes.
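For example, the VSM's Control and Packet vNICs could map to vethernet port-profiles along these lines, with Management staying on the standard vSwitch (VLAN IDs and names here are hypothetical, so adapt to your environment):

```
port-profile type vethernet n1kv-control
  vmware port-group
  switchport mode access
  switchport access vlan 11     ! hypothetical control VLAN
  system vlan 11                ! keeps VSM-VEM traffic up before the VSM is reachable
  no shutdown
  state enabled
```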
Thanks Logiboy123,