HeathReynolds's Posts

vSphere 6.0 supports NFS 3 and NFS 4.1. NFS 4.1 does add support for session-trunking multipathing in vSphere, but at the trade-off of not supporting other features like SIOC, Storage DRS, SRM, etc. There is a good comparison chart here that lays out what is supported with each protocol. https://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc%2FGUID-8A929FE4-1207-4CC5-A086-7016D73C328F.html I'm a block storage guy, so I haven't set it up yet. Totally agree on going active / active if possible, especially if you have a VDS and NIOC. With the standard switch you often get stuck having to use active / standby to separate traffic since you don't have NIOC. When we had a bunch of gig interfaces we could do physical separation, but with fewer 10G interfaces we end up having to manipulate active / standby.
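For reference, mounting an NFS 4.1 datastore with session trunking from the ESXi shell looks roughly like this. The IPs, export path, and datastore name are placeholders, not from the thread:

```
# Mount an NFS 4.1 datastore, passing two server addresses so ESXi
# can establish trunked sessions (multipathing) to the filer.
# Substitute your own addresses, export path, and datastore name.
esxcli storage nfs41 add -H 192.168.10.11,192.168.10.12 -s /exports/ds01 -v nfs41-ds01

# List NFS 4.1 mounts to confirm both server addresses are in use
esxcli storage nfs41 list
```

Note that the same datastore must not be mounted via both NFS 3 and NFS 4.1 from different hosts, since the two protocols use different locking mechanisms.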
The CSR1000v runs the same IOS XE that is used on the ASR routers. There are various versions of IOS XE that support the CSR1000v platform. It is as close as you will get to what you are asking. See if Cisco has launched their Modeling Labs product as well.
The CSR1000v is a virtual copy of IOS XE running in VMware. It's a virtual router with ASR functionality, including OTV.
Take a look at the capabilities of the Nexus 1000v and the CSR1000v.
No problem, I've done this migration for thousands of virtual machines. Create your VDS object, uplinks, and port groups. Add the hosts to the VDS with half of the uplinks, leaving the other half on the 1000v. Migrate the VM and VMkernel networking to the VDS, and then move the remaining uplinks. Delete the 1000v switch object from the 1000v CLI before you break the SVS connection. Don't destroy the VSM until you have deleted the object. Pretty sure I have a blog post on it in the 1000v section of my site.
Create a CSV with your info and use a foreach loop in PowerCLI to create each port group with the New-VDPortgroup cmdlet. This should get you started. http://www.knightusn.com/home/massaddportgroupstovirtualdistributedswitchwithcsv
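A minimal sketch of that approach. The VDS name, CSV path, and column names here are assumptions for illustration, not from the post, and it presumes an active Connect-VIServer session:

```
# Assumed CSV format: Name,VlanId  (e.g. "Web,100")
$vds  = Get-VDSwitch -Name "Prod-VDS"          # hypothetical VDS name
$rows = Import-Csv -Path "C:\portgroups.csv"   # hypothetical path

foreach ($row in $rows) {
    # Create one distributed port group per CSV row with its VLAN tag
    New-VDPortgroup -VDSwitch $vds -Name $row.Name -VlanId $row.VlanId
}
```

Add more columns (number of ports, teaming policy, etc.) to the CSV as needed and map them to the matching New-VDPortgroup parameters.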
One thing to consider is that at least until you go to 6.0 you won't be able to vMotion between VDS objects. If this isn't a requirement then multiple VDSs look workable for you, as long as you don't mind creating VLANs in multiple places.
I think either one is effective as long as your upstream switches support multi-chassis EtherChannel like Cisco VSS or vPC. If you don't have multi-chassis EtherChannel, I would look at either running multiple switches as you planned, or running a single switch and manipulating active / standby / unused to separate traffic.
No, you will create a port group for each VLAN. You can create a third VDS (or use a VSS) for MGMT if you really want, but you are also fine to continue making a port group for each VLAN on the existing switch. You will want to manipulate the active / standby / unused settings for your port groups to separate vMotion traffic from guest and management traffic.

The main drawback to how it is configured now is that you are sending your traffic untagged. You will need to have the upstream ports configured as trunks and then add VLAN tags to your port groups. Your network team should be able to configure a "native VLAN" on the trunk for the same VLAN that is currently assigned. This will allow you to continue sending untagged traffic while you work to migrate your port groups to VLANs.

With 8 uplinks per host you could do:

MGMT - VLAN XXX - Active Uplink 1, Standby Uplink 2, Unused all others
vMotion - VLAN YYY - Active Uplink 2, Standby Uplink 1, Unused all others
Desktops - VLAN DDD - Active Uplinks 3-8, Unused 1, 2

This would provide separation of traffic without needing to create another VDS. You have enough NICs that you have the flexibility to do anything you want, so you definitely have the option of just creating a VDS for MGMT and vMotion and moving these over. You do want to make sure that MGMT and vMotion are on separate subnets and VLANs.

Edit - There are a lot of options for network configurations, and with 10G plus the number of gig interfaces you have you could do any of them. You may want to read Chris Wahl's networking for VMware book so you know all of your options and the trade-offs.
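On the physical side, a trunk with a native VLAN for the migration period would look roughly like this in Cisco IOS. The interface name and VLAN numbers are placeholders:

```
interface GigabitEthernet1/0/1
 description ESXi uplink 1
 switchport mode trunk
 ! untagged frames continue to land in the currently assigned VLAN
 ! while port groups are migrated to tagged VLANs
 switchport trunk native vlan 100
 ! only carry the VLANs the host actually needs
 switchport trunk allowed vlan 100,200,300
```

Once every port group carries a VLAN tag, the native VLAN is no longer doing any work and can be moved to an unused VLAN per common security practice.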
Are you using gig ports or 10G? Is your storage IP or FC? You should be able to trunk to your physical switches and add port groups without adding physical NICs by using VLANs.

My normal VDS config for gig networking with Fibre Channel storage is a single VDS with 4 uplinks:

Uplink 1 - active for MGMT, standby for vMotion
Uplink 2 - active for vMotion, standby for MGMT
Uplinks 3 and 4 - active for guest traffic

Create trunk ports on the physical switches and use a separate VLAN for each traffic type. Make the uplink switch port configs identical for consistency.
With static binding your existing VMs will continue to function on the VDS while vCenter is down. When I was a customer we ran all interfaces on the VDS. Even if your vCenter is virtual you should be OK: as long as you have static binding, it should connect once you bring it back up.
Yes, I've monitored NetFlow with SolarWinds. You configure NetFlow at the VDS level, and then have to turn it on for each port group you want.
In 5.5 they have basically reached feature parity. The decision between the two comes down to who is going to manage networking. If the networking team is going to manage networking, then give them an NX-OS interface they are familiar with and keep them out of vCenter. If the virtualization team is going to manage networking, then do it from vCenter with the VDS. If you want NSX (logical firewalls or network virtualization) you need the VDS.
The VDS will work with one NIC, but you are going to want two for redundancy. I would create your VDS, associate one NIC with it, migrate your VMkernel and virtual machines over, then move the second NIC over to the VDS and delete the VSS. You can have multiple VDS, VSS, or a mix of both, but with only two NICs it would be a bad idea to have multiple, since you would have one NIC to each switch and no redundancy.
So there are a couple of places that you can manage VLAN tags in ESXi. Ethernet frames have VLAN tags inserted, and these tags tell the switch which VLAN the frame is destined for. The two important methods for us are:

EST (External Switch Tagging) - In this case ESXi isn't aware of the VLAN ID. ESXi passes Ethernet frames upstream to the physical switch, and the physical switch tags the frames based on the VLAN the port is assigned. The Cisco configuration for this would look like "switchport access vlan 10". In this case each physical switchport is only carrying a single VLAN.

VST (Virtual Switch Tagging) - In this case ESXi is aware of the VLAN ID. You must assign each VMkernel interface and port group to a VLAN. The virtual switch inserts the VLAN tag into the Ethernet header and then passes the frame to the upstream physical switch. The upstream physical switch is configured as a trunk, which allows multiple VLANs to pass across a single physical connection. This is probably the most common configuration of ESXi.

Some cheap switches don't support VLANs, but any managed switch will. Take a look here for more info: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003806
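The two tagging modes map to different switchport configs on the physical side. A rough Cisco IOS sketch, with interface names and VLAN IDs as placeholders:

```
! EST: the physical switch tags frames; the port carries exactly one VLAN
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 10

! VST: ESXi tags the frames; the port is a trunk carrying several VLANs,
! and each port group / VMK on the host is assigned one of these VLAN IDs
interface GigabitEthernet1/0/2
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```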
The most common install for iSCSI keeps traffic between the hosts and the storage from being routed, since a router there could reduce performance.

If you had VLAN 10 (192.168.1.0/24) iSCSI, VLAN 20 (192.168.2.0/24) ESX MGMT, VLAN 30 (192.168.3.0/24) Guest VMs, and VLAN 40 (192.168.4.0/24) vMotion, a deployment scenario could be something like:

NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby
NIC2 - vSwitch 1 - Guest VM port group (VLAN 30) active
NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active
NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active
NIC5 - vSwitch 1 - Guest VM port group (VLAN 30) active
NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

You would place your storage target on VLAN 10 with an IP of something like 192.168.1.8, and iSCSI traffic would remain on that VLAN. The default gateway configured in ESXi would be the router on VLAN 20, with an IP of something like 192.168.2.1. Hope the scenario helps lay out some options.
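With the two dedicated iSCSI VMkernel ports in a layout like the one above, binding them to the software iSCSI adapter from the ESXi shell looks roughly like this. The adapter name (vmhba33) and vmk numbers are assumptions; check yours with the list commands first:

```
# Bind each iSCSI vmkernel port to the software iSCSI adapter
# so both NICs provide active paths for multipathing
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli iscsi networkportal add -A vmhba33 -n vmk3

# Verify the bindings
esxcli iscsi networkportal list -A vmhba33
```

Port binding requires each iSCSI VMK to have exactly one active uplink with no standby uplinks, which is why the scenario dedicates NIC3 and NIC4 to one VMK each.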
This would maintain redundancy assuming the upstream physical switch supports multi-chassis EtherChannel (like Cisco vPC), but it wouldn't maintain the bandwidth of two iSCSI connections. LACP is only going to hash a given connection to a single link; to take advantage of the multiple links in an LACP EtherChannel you are going to need multiple IP addresses. I haven't tried iSCSI on LACP. I know there are some checks when you bind the iSCSI VMkernel to the virtual storage adapter; I believe one of them is that the VMK is only active on a single link, but I'm not sure. It's been a little while since I set up iSCSI.
So you can't vMotion a VM from the standard switch to the distributed switch. Here is how we cut these over. Move half of the physical network interfaces on each host from your standard switches to the new distributed switch. Now every host has access to both the standard switch networks and the distributed switch networks. Make your port groups for MGMT and vMotion, then move the VMkernel interfaces over. Make your port groups for your VM networks, then move the VMs over. After you have moved all of the VMs you can move the remaining physical NICs over to the distributed switch and delete the standard switch. I would do this over the course of one day, since unless you have a ton of network interfaces in your server you probably won't have network redundancy to your virtual switches during the migration.
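The steps above can be sketched in PowerCLI. Every name here (switch, host, NIC, port groups) is a placeholder, it assumes an active Connect-VIServer session, and parameter details vary somewhat across PowerCLI versions, so treat it as an outline rather than a finished script:

```
$vds = Get-VDSwitch -Name "Prod-VDS"               # hypothetical names throughout
$esx = Get-VMHost -Name "esx01.lab.local"

# 1. Join the host to the VDS and move half of its uplinks over
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
$nic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic1
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic

# 2. Move a VMkernel interface to its new distributed port group
$vmk = Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name vmk0
Set-VMHostNetworkAdapter -VirtualNic $vmk `
    -PortGroup (Get-VDPortgroup -Name "MGMT" -VDSwitch $vds)

# 3. Move VM networking from the old standard port group to the VDS
Get-VM -Location $esx | Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq "VM Network" } |
    Set-NetworkAdapter -Portgroup (Get-VDPortgroup -Name "Guests" -VDSwitch $vds) -Confirm:$false
```

After all VMs and VMKs are moved, repeat step 1 for the remaining NICs and delete the standard switch.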
I haven't tested this on the VDS, but on the VSS, if you had two port groups with the same VLAN ID attached to a single virtual standard switch, it would switch traffic between the two without sending the traffic to the physical network. So PG1 and PG2, both configured to use VLAN 200, would switch locally on the virtual switch.
Question on the CPU ready % metric available in the custom UI. Is this value the AVERAGE of CPU ready % for all vCPUs on the VM, or the SUM of CPU ready % across all vCPUs on the VM? Some odd values led me to question the metric and think it could be the sum; for example, a 4-vCPU VM where each vCPU sits at 5% ready would show 20% if the per-vCPU values are summed. If that is the case, would I need to create a super metric and package to average the % across all vCPUs? We are running 5.8 if it makes a difference.