HeathReynolds's Accepted Solutions

The most common install for iSCSI avoids routing traffic between the hosts and the storage, since a router there could reduce performance. If you had VLAN 10 (192.168.1.0/24) iSCSI, VLAN 20 (192.168.2.0/24) ESX MGMT, VLAN 30 (192.168.3.0/24) Guest VMs, and VLAN 40 (192.168.4.0/24) vMotion, a deployment scenario could be something like:

NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby
NIC2 - vSwitch 1 - Guest VM port group (VLAN 30) active
NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active
NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active
NIC5 - vSwitch 1 - Guest VM port group (VLAN 30) active
NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

You would place your storage target on VLAN 10 with an IP of something like 192.168.1.8, and iSCSI traffic would remain on that VLAN. The default gateway configured in ESXi would be the router on VLAN 20, with an IP of something like 192.168.2.1. Hope the scenario helps lay out some options.
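If you prefer the command line over the client, the iSCSI side of that layout can be sketched with esxcli. This is a rough sketch only: the vmnic/vmk numbers and the software iSCSI adapter name (vmhba33) are examples, and will differ per host.

```shell
# vSwitch 2 with the two iSCSI uplinks (uplink names are examples)
esxcli network vswitch standard add --vswitch-name vSwitch2
esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic2
esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic3

# One port group per iSCSI VMkernel interface, tagged for the iSCSI VLAN
esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI-1
esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI-2
esxcli network vswitch standard portgroup set --portgroup-name iSCSI-1 --vlan-id 10
esxcli network vswitch standard portgroup set --portgroup-name iSCSI-2 --vlan-id 10

# VMkernel interfaces on the iSCSI subnet
esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-1
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.1.10 --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk3 --portgroup-name iSCSI-2
esxcli network ip interface ipv4 set --interface-name vmk3 --ipv4 192.168.1.11 --netmask 255.255.255.0 --type static

# Bind both VMkernel ports to the software iSCSI adapter (adapter name is an example)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
```

For port binding to work, each iSCSI port group also needs its failover order overridden so it has exactly one active uplink and no standby uplinks.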
Aaron, it sounds like you have it figured out. You can think of the LAG configuration on the distributed switch as a profile that gets applied to each host. The physical upstream switch is configured with a LAG for each host.
vCloud Director or vCAC driving vCNS can do this if you have vCloud Suite licensing.
You are on the right track. The ESX host only has one default gateway, and that default gateway should be on the MGMT VMkernel interface. Other interfaces like vMotion and storage (NFS and iSCSI) typically shouldn't be routed and don't need a default gateway. In 5.5 VMware introduced multiple TCP/IP stacks and the ability to assign a VMkernel interface to a stack, but this functionality isn't needed for most installs. You would simply place the vMotion interfaces of all of the hosts in your cluster in the same VLAN, and they will all be able to talk to each other without an L3 gateway on the VLAN. Same deal with your storage: put the hosts and the target interface of the storage on the same VLAN.
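Setting and checking that single default gateway from the ESXi shell is a one-liner; the gateway IP below is just an example for a MGMT VLAN:

```shell
# Point the host's default route at the router on the MGMT VLAN (example IP)
esxcli network ip route ipv4 add --gateway 192.168.2.1 --network default

# Confirm the routing table
esxcli network ip route ipv4 list
```

The vMotion and storage VMkernel interfaces simply never get a gateway of their own; their traffic stays layer 2 within their VLANs.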
Here is a multi-NIC vMotion how-to in the 5.5 web client: http://www.heathreynolds.com/2014/02/multi-nic-vmotion-on-esxi-55.html Here is a deck with info on multi-NIC vMotion in 5.0 / 5.1 with the VI client: http://www.heathreynolds.com/2012/08/my-presentation-from-inf-net2227-at.html I would use two gig NICs in that situation. I've run into trouble with VMs with 32GB of RAM under heavy load: the vMotion will normally complete, but processes on the box can hang up. Once we went multi-NIC we had no trouble migrating the same box. Make sure you have a dedicated vMotion VLAN, with no other VMkernel interfaces.
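The write-ups above use the client UI; the same multi-NIC vMotion setup can be sketched with esxcli. Uplink names, vmk numbers, and IPs here are examples. The key idea is two port groups with mirrored active/standby uplinks, one vMotion VMkernel interface in each:

```shell
# Two vMotion port groups on the same vSwitch, with opposite failover orders
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-1
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-2
esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-1 \
    --active-uplinks vmnic2 --standby-uplinks vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-2 \
    --active-uplinks vmnic3 --standby-uplinks vmnic2

# One VMkernel interface per port group, both on the dedicated vMotion subnet
esxcli network ip interface add --interface-name vmk1 --portgroup-name vMotion-1
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.4.10 --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-2
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.4.11 --netmask 255.255.255.0 --type static

# Tag both interfaces for vMotion traffic
esxcli network ip interface tag add -i vmk1 -t VMotion
esxcli network ip interface tag add -i vmk2 -t VMotion
```

With both interfaces tagged, the host will use both links for a single migration, which is what keeps the large-memory VMs from hanging up.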
So you need four NICs to meet your basic needs, and then you can use the other two to address areas you think might need additional bandwidth.

Onboard 1 - Active MGMT, Standby vMotion
Onboard 2 - Active Production Traffic
Mez 1 - ?
Mez 2 - ?
Mez 3 - Active Production Traffic
Mez 4 - Active vMotion, Standby MGMT

You could use the other two mezzanine interfaces for more active production traffic or multi-NIC vMotion; it depends on what the guests need. I like to use multi-NIC vMotion with gig networking if I will have guests with more than 16GB of RAM.
They won't be able to communicate without a separate VLAN on the upstream physical switch. If you have vCloud Suite licensing for vCNS you could create a VXLAN allowing them to communicate, but the simpler solution would be to create a VLAN.
No, the datacenter object is the boundary for the distributed switch. Hopefully in the future VMware will deliver a version of vMotion that works across datacenters, or a distributed switch that spans datacenters.
What version are you running? In 2.1 they went to a free and advanced model; depending on the features you need, you may be able to upgrade and continue without additional licensing. You can probably convert your current license to a VSG license, contact your SE. http://blogs.cisco.com/tag/nexus-1000v/
You could duplicate your production environment with two virtual switches on each host, and just connect each vSwitch with one physical NIC. You won't have any redundancy, but this may be acceptable since this is a lab. If you want redundancy you will need to create a single vSwitch with both physical NICs connected to it. You would create all of your VMK interfaces and guest port groups on this switch, and allow all of the VLANs on both trunks to the Extreme switch. In order to separate your MGMT and vMotion traffic from your guest traffic, you would need to go into the properties for each VMK or port group and select the option to "override switch failover order". I would set vMotion active on physical NIC1 and standby on NIC2, and I would set all other traffic active on NIC2 and standby on NIC1.
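That per-port-group override can also be done from the ESXi shell; port group and uplink names below are examples matching the layout described:

```shell
# vMotion: active on NIC1 (vmnic0), standby on NIC2 (vmnic1)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name vMotion --active-uplinks vmnic0 --standby-uplinks vmnic1

# Management and guest traffic: the reverse order
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name Management --active-uplinks vmnic1 --standby-uplinks vmnic0
```

Under normal conditions vMotion then has a gig link to itself, but either traffic type can fail over to the other NIC if a link drops.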
We run two dual-port CNAs, for a total of four converged 10G connections. Each CNA has one connection to each upstream fabric switch. Each link is used for FCoE, and also carries all VLANs (MGMT, vMotion, NFS, Guests). We run multi-adapter vMotion, so each link is used for vMotion traffic. We tag the traffic with CoS values on the N1kV, and let the UCS fabric do ingress queuing to guarantee each class of traffic a portion of the link, but not limit it to only that portion if the other traffic types aren't consuming all of their allocation. With only a single CNA you won't have storage redundancy for FCoE, even if you can use your other 10G to create network redundancy.
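The CoS marking on the 1000V is a standard MQC-style QoS policy attached to a port profile. A rough sketch follows; the class/policy names and CoS value are illustrative, and the `match protocol` keywords for vSphere traffic types depend on your N1kV release:

```
class-map type qos match-any cl-vmotion
  match protocol vmw_vmotion
policy-map type qos pm-mark-cos
  class cl-vmotion
    set cos 4
port-profile type vethernet vMotion
  service-policy type qos input pm-mark-cos
```

The UCS side is the usual QoS system classes: map each CoS value to a class with a bandwidth weight, and the fabric interconnects enforce the shares only under contention.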
Here is a mostly up-to-date comparison between the networking options: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/solution_overview_c22-526262.pdf The big driver with most people running the 1000v I've talked to is giving visibility back to the network team, and streamlining changes to the virtual network environment. In a large organization with a network operations team, they will create an SVI to route a new VLAN and then create the new VLAN on all of the distribution and access switches in the layer 2 domain; the 1000v just lets them go ahead and create it on the hypervisor using a command set they already know.
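To illustrate that "command set they already know" point: adding a new guest VLAN on the 1000V looks like ordinary NX-OS, and the port profile shows up in vCenter as a port group. The VLAN ID and names here are examples:

```
vlan 30
  name Guest-VMs
port-profile type vethernet Guest-VM-VLAN30
  switchport mode access
  switchport access vlan 30
  vmware port-group
  no shutdown
  state enabled
```

The VLAN still has to exist on the upstream physical switches and the uplink trunks, but the hypervisor-side change needs no vSphere client work at all.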