jondehen's Posts

Thanks for the quick reply scott28tt!  Is there any reason you recommend vSwitch0 over vSwitch1, or is it just personal preference? On the one hand, I like it on vSwitch0 because that's the only place where the management network needs to exist, so there's no need to configure the management VLAN on the physical switch alongside the other VM VLANs. On the other hand, some OCD part of me wants ALL VM traffic on certain vSwitches only, with vSwitch0 holding nothing but a single management port group. It's currently residing on vSwitch0 and I just cannot decide whether to move it or leave it.
Assuming we have the following two vSwitches (simplified for the example; there are more in actuality):

vSwitch0 - 10.1.10.1/24 - Management traffic
vSwitch1 - various 10.1.X.1/24 port groups - VM traffic in different VLANs

Our VCSA sits in the same management network as vSwitch0 (let's say, for example, its IP is 10.1.10.100/24).

Questions:

It makes sense to me to have VCSA in the same network as the management traffic... correct?
Is there a best practice or recommendation for which vSwitch to put VCSA on: vSwitch0 or vSwitch1?

I think VCSA would work equally well on either vSwitch, but since I'm redesigning things I thought I would see if anyone had opinions.  It would be the only VM NOT on a "VM only" vSwitch, so it sticks out as an exception.  Thanks!
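In case it helps anyone landing here later: whichever vSwitch the VCSA ends up on, the port group itself is only a couple of commands from the ESXi shell. A minimal sketch, with a hypothetical port group name and an assumed management VLAN ID of 10:

```shell
# Hypothetical port group name and VLAN ID; adjust to your environment.
# Create a VM port group for the VCSA on vSwitch0:
esxcli network vswitch standard portgroup add \
    --portgroup-name="VCSA Network" --vswitch-name=vSwitch0
# Tag it with the management VLAN (assumed to be VLAN 10 here):
esxcli network vswitch standard portgroup set \
    --portgroup-name="VCSA Network" --vlan-id=10
```

Moving the VCSA between vSwitches later is then just a matter of pointing its vNIC at a different port group, so the choice isn't irreversible.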
I am having a difficult time understanding how to set up 100% redundant networking from an ESXi host to multiple NFS storage devices.  Please see the attached image for this question.

Goal: Be able to lose any single Ethernet link OR any single switch without interruption to the NFS datastores.

Thoughts:

I want to leave the switches UNstacked (so I can't use LACP)
The NASes support ALB and static XOR
Scenario A is ideal, where the switches are totally independent and not connected
Using the default Route Based on Originating Virtual Port, traffic will only flow out of vmnic1 OR vmnic2, but not both at the same time (since I only have one VMkernel port).  This means all NASes will use only vmnic1 or vmnic2 at any given time.
Both switches are Cisco 10G

Problems: Not being able to use vmnic1 and vmnic2 at the same time seems to be a big issue for this design.  The second hurdle is that VMware can't "see" when one of the two NAS links becomes disconnected; it can only detect its own link to the switch.  In scenario A, if all traffic is using vmnic1 and link 3 dies (but switch 1 and link 1 are fine), then VMware will lose connection to NAS1 without failing over, even though a physical path from NAS1 back to the host still exists.  In scenario B, the NAS vendor has recommended against linking the switches together (link 7) due to unknown switch behavior in the event of a link failure, but I'm not seeing a way to avoid this...

Questions:

How would scenario A be possible, where I could lose links on the NASes and still have redundant connections?  Or is link 7 really required?
I don't believe adding a second VMkernel port to the vSwitch and assigning each VMK to its own vmnic would be beneficial in any way (because I believe VMware just picks one of the two VMKs)
I don't think I can use beacon probing, because it will not detect NAS link disconnects (and it really needs a different physical topology).  Right?  I don't think you can specify a list of targets to probe/test.
Would a different load-balancing algorithm be better than the default?  I would ideally like to team the NICs without switch configuration so that I could use 2x the bandwidth, but I don't see any options that allow that or prove an advantage.
Would NFS multipathing (included in NFS 4.1) be a solution here?

Any insight into how to achieve the goal would be GREATLY appreciated!
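One design I've seen suggested for exactly this topology (a sketch under assumptions, not something from this thread): if each NAS interface sits in its own subnet, you can create two VMkernel ports on separate port groups and pin each port group to a different vmnic with a failover override. ESXi then sends each NFS mount out of the VMK whose subnet matches the target IP, so both vmnics carry traffic and a dead NAS link only affects the mounts on that subnet. Port group and uplink names below are hypothetical:

```shell
# Assumed names: NFS-A / NFS-B are port groups on different storage subnets.
# Pin each one to its own uplink, with the other uplink as standby:
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="NFS-A" --active-uplinks=vmnic1 --standby-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="NFS-B" --active-uplinks=vmnic2 --standby-uplinks=vmnic1
```

This still doesn't let ESXi detect a failed NAS-to-switch link (that remains a NAS-side bonding question), but it does spread load across both vmnics without stacking or LACP.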
I would like to change the IP address of the management VMkernel port on an ESXi host easily and without downtime to VMs on a separate vSwitch. The old and new IP addresses are in the same subnet (example below).  It's just a cosmetic update, as the network/VLAN/gateway/etc. will remain the same:

OLD:  10.1.1.10 /24
NEW: 10.1.1.20 /24

Details:

3 ESXi hosts running 6.5
All host management IP addresses are static and in the same network
vCenter installed and running with all three hosts connected
All hosts have a single management VMK in vSwitch0, and every vSwitch0 has 2x NICs (see diagram below)

Question: Is this as easy as editing the IP on the VMkernel adapter from vCenter?  Is there anything else to do besides this?  I would assume making the change through vCenter (instead of from the host itself) will also update vCenter and not cause the host to become disconnected from vCenter.  From my understanding and experience, no VMs (running on a different vSwitch) will be affected.  I know I can log in via the console to change the IP as well, but I don't see why I would if the change is as simple as I think it is.  I have done this before on a standalone ESXi host, but never on one connected to vCenter.  Any advice is helpful!  Thanks in advance!
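For reference, the same change can be made from the ESXi shell (over SSH or the console) if the vCenter route ever misbehaves. A sketch using the example addresses above and assuming the management VMK is vmk0:

```shell
# Assumes vmk0 is the management VMkernel port; addresses from the example above.
esxcli network ip interface ipv4 set \
    --interface-name=vmk0 --ipv4=10.1.1.20 --netmask=255.255.255.0 --type=static
# Verify the new address took effect:
esxcli network ip interface ipv4 get --interface-name=vmk0
```

Since the subnet, VLAN, and gateway are unchanged, only the management session itself should notice the change; VMs on other vSwitches never traverse this VMK.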
André, thank you, I wasn't even aware that ALB existed on my NAS.

Question: Would the attached topology work, then, using two separate switches, one for each of the NAS connections?  If so, I think I would like to use ALB on the NAS instead of NFS multipathing from the ESXi host.  I would assume vmnic1 and vmnic2 would remain set up with the default "Route based on originating virtual port," but I'll have to do more research on this.  Thanks!
Actually, it does support ALB.  It also supports LACP, balance XOR, and Active/Standby.
We would like advice on "best practices" for setting up redundant physical switches that connect our ESXi host(s) to an NFS NAS via 10GbE.  Please see the two attached diagrams.  We cannot decide between stacking the switches or leaving them unstacked and separate (and using NFS multipathing), and I wanted to make sure either scenario is feasible.

STACKED

The primary downside I see to stacking the switches is that we wouldn't be able to do firmware updates without taking the entire cluster offline, because both switches in the stack reboot at the same time.  Firmware updates are rare, but I want to be able to do them without shutting every VM down.  To me this is a significant disadvantage of stacking, and I would like to avoid it for this reason alone.  In a stacked scenario, we would most likely employ LACP on the NAS for redundant links to the switch stack, although I suppose we could still try multipathing with two NAS IPs.  I understand that I would probably use IP hash instead of the default NIC teaming policy if we set up LACP.

UNSTACKED

I believe we would have to use multipathing if we do NOT stack the switches and want to keep a single VMkernel port.  I don't know which vmnic teaming option I would want in this scenario.  My understanding is that round-robin is used with multipathing, so vmnic1 and vmnic2 would choose either of the NAS's two IPs automatically.  This is why the switches would need a link between them: so that either vmnic could reach either NAS IP through either switch.  This setup would let us reboot either switch without taking down the entire cluster.

Can anyone offer insight into either the STACKED or UNSTACKED design, or into NFS multipathing in general?  Am I overlooking anything important, or should either of these scenarios work?

PS: I think both of these models also cover the case where either switch has an actual outage.
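For anyone comparing the two designs at the CLI: with NFS v3 each datastore mounts through a single NAS IP, while NFS 4.1 accepts multiple server IPs for one datastore, which is the true multipathing case. A hedged sketch with made-up IPs and share paths:

```shell
# NFS v3: one IP per datastore, so "multipathing" usually means splitting
# datastores across the two NAS IPs (assumed addresses and shares):
esxcli storage nfs add --host=10.1.20.11 --share=/volume1/ds1 --volume-name=NAS-DS1
esxcli storage nfs add --host=10.1.21.11 --share=/volume1/ds2 --volume-name=NAS-DS2

# NFS v4.1: both NAS IPs behind a single datastore mount:
esxcli storage nfs41 add --hosts=10.1.20.11,10.1.21.11 --share=/volume1/ds1 --volume-name=NAS-DS1
esxcli storage nfs41 list
```

Note that the NAS has to support NFS 4.1 (and session trunking) for the second form to deliver anything; otherwise the v3 split-datastore approach is the usual fallback.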
Please see the attached images.  Example A is a single vSwitch with separate port groups, and example B uses two separate vSwitches.

Are there any advantages to creating a separate vSwitch just for DMZ traffic, over placing DMZ traffic in its own port group and using overrides to assign specific pNICs to each port group?  Assume that proper redundancy is present everywhere, that the same ESXi host serves both production and DMZ traffic, and that the DMZ uplinks are plugged into a physical firewall.  Each port group is a separate VLAN.  Again, if a single vSwitch were used, we would dedicate specific pNICs to each port group via overrides so that the DMZ port group could not share the pNICs of the others.

I suppose I don't see any real difference between a separate vSwitch and port group overrides.  I don't believe one is any more secure than the other, but I'm happy to learn otherwise!  Perhaps this is just preference and whatever is easier to manage?  I can imagine that if I had 10 different DMZ VLANs, extra configuration would be required on a shared vSwitch, versus just putting those port groups on a dedicated vSwitch and not worrying about where each pNIC was connected.  Any articles specific to security would be appreciated!  Thanks!
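For concreteness, the override variant on a shared vSwitch looks like this from the shell (port group and uplink names are hypothetical): each port group gets an explicit active-uplink list, so DMZ frames can only ever leave through the pNICs cabled to the firewall:

```shell
# Assumed names: vmnic0/vmnic1 carry production, vmnic2/vmnic3 go to the
# DMZ firewall. Override the uplink order per port group:
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Production" --active-uplinks=vmnic0,vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="DMZ" --active-uplinks=vmnic2,vmnic3
```

The separate-vSwitch design encodes the same isolation structurally (a port group simply cannot use uplinks outside its vSwitch), which is arguably harder to misconfigure when the number of DMZ port groups grows.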
We have a testing environment.  Today I broke out the vMotion and Provisioning VMKs to their own port groups.  I then assigned each of the Management, vMotion, and Provisioning port groups to their own physical NIC (via port group overrides).  I monitored all three ports for activity and only saw that the management port was used for backups.  Maybe someone will one day find this post helpful.
This is more of a theoretical question than anything.

We currently have a single vSwitch0 that has three port groups:

1. Management (default TCP/IP stack)
2. vMotion
3. VCSA VM (this is where we run VCSA)

All of the above are on the same VLAN.  We do not have the provisioning TCP/IP stack separated out (so I believe that traffic still goes over #1).  vSwitch0 has two 1GbE uplink NICs.  Please see the attached image for clarification.

We want to use a Synology NAS as a backup server, but would like backups to run over 10GbE instead of only 1GbE.  Assuming we add a new 10GbE vmnic to our host, how would we want to configure vSwitch0 so that backups and restores run at 10GbE?  Should we separate the vMotion or Provisioning VMK onto its own vSwitch and add the 10GbE NIC to either (or both), or does the backup traffic run over the default Management TCP/IP stack even if those two are separated out?

I know the answer might be: "it depends on how the backups function."  I am familiar with what the vMotion and Provisioning TCP/IP stacks are used for ( VMkernel Networking Layer ) but would like any comments on this scenario, especially with regard to Synology's Active Backup for Business VM solution.

Thank you kindly, and apologies if I've overlooked anything simple!
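In case it's useful to anyone later: service placement per VMkernel port is controlled by tags, so one way to steer this traffic onto the 10GbE uplink is a new VMK on a port group backed by that NIC, tagged for vMotion and provisioning. A sketch assuming the new port is vmk2:

```shell
# Assumes vmk2 was created on a port group whose active uplink is the 10GbE NIC.
esxcli network ip interface tag add -i vmk2 -t VMotion
esxcli network ip interface tag add -i vmk2 -t vSphereProvisioning
# Confirm which services are bound to the interface:
esxcli network ip interface tag get -i vmk2
```

Whether Active Backup for Business actually rides the provisioning/NFC path still depends on the backup transport mode, which is the "it depends on how the backups function" part.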