rickardnobel's Posts

8balakri wrote: Apart from setting the MTU on the Physical/vSwitch/PortGroup, we have to set the MTU on the Guest Os level right.? So, I am looking for some sort of setting to make the MTU of the vNIc to 9000 from the ESXi layer rather than changing the value in the guest operating system.
As noted, the MTU setting on the portgroup only acts as a maximum allowed frame size at the vSwitch level, and it is invisible both to the physical switches and to the guest operating systems. There is no standard way to negotiate the maximum frame size at layer two (it is done at layer four within each TCP session, but that is something else).
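When verifying an MTU end to end with ping or vmkping, the usable payload is the MTU minus the IP and ICMP headers. A minimal sketch of that arithmetic, assuming IPv4 with no options and no VLAN tag:

```python
# Expected ICMP payload when testing a given MTU with ping/vmkping.
# Assumes a 20-byte IPv4 header and an 8-byte ICMP header.

IP_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP echo payload that fits in one unfragmented frame."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(9000))  # 8972
print(max_ping_payload(1500))  # 1472
```

This is why a successful `vmkping -d -s 8972` indicates that jumbo frames with MTU 9000 work along the whole path.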
FlbayIT wrote: Does any changes need to be made to the VM at all?
No, you do not need to make any changes or do any configuration at the VM level.
lebron wrote: 3. I created the ESX Admin Group in the AD, But I cannot see it on the ESX.
Did you create the group with the exact name of "ESX Admins" (not "Admin" as above)? Do you access the ESXi host directly, and not through vCenter Server, when looking at this?
Which guest operating system are you using? Do you actually select to do a guest customization when deploying from the template?
Awurz wrote: So does it really work to have on same vSwitch different port-groups with same VLAN Id?
Yes, you can have several portgroups on the same vSwitch with the same VLAN ID; that is basic VMware networking.
Awurz wrote: => So in case that the physical uplink will fail, all VMs will switch guest OS bonding interface slave, so issue will no occur
If you do some internal guest probing of some external address it might work, but you should be aware that this is not the way it is typically done, or even meant by VMware to be used.
Awurz wrote: but it is strange that if active bond interfaces of VMs are on different vswitch only communication to network gateway works but not between guests.
Was that between two VMs on the same ESXi host, but connected to two different vSwitches? Typically this should be no problem, but the traffic must always leave the ESXi host and go through the physical switch before entering the other vSwitch. If this does not work there might be problems with the physical switch, it could be internal firewalls on the guests, or it could be that the bonding setup does something with the guest MAC addresses which confuses either the vSwitches or the physical switches.
You should be aware that the vNIC (the virtual network card in the guest) will never go offline, no matter what happens to the physical interfaces of the ESXi host.
Tango wrote: I have a Cisco C-Series C240 M3 with a CPU cycle of 2.4Ghz with 8 Cores. I am using Vsphere 5.1 and would like to install an OS as VM which require 2.8Ghz CPU Cycle.
Depending on your requirements, you could give the VM access to more CPU cycles by adding more vCPUs. By assigning for example 2 vCPUs, the guest operating system would be able to use 2 x 2.4 GHz. As long as your application is decently multithreaded it could consume 4.8 GHz of CPU cycles.
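The aggregate capacity calculation above can be sketched as a one-liner; the function name is illustrative only, not a vSphere API, and it assumes the workload can actually keep all vCPUs busy:

```python
# Aggregate CPU capacity a VM can draw, assuming a well-threaded workload.

def aggregate_ghz(vcpus: int, core_ghz: float) -> float:
    """Total cycles available across all vCPUs, in GHz."""
    return vcpus * core_ghz

print(aggregate_ghz(2, 2.4))  # 4.8
```

Note that a single-threaded application can never exceed the speed of one physical core, no matter how many vCPUs are assigned.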
This bonding that you do inside the Linux guest, do you know what kind of method it uses to distribute the outgoing frames? And also, what is the reason for this setup? If I understand it correctly, you have two vSwitches with identical portgroups, but only one uplink on each vSwitch, and then each VM has two vNICs and does internal teaming/bonding? Depending on what you actually try to do, there might be simpler ways to set up the vSphere networking.
You might create a second vmdk for your VM(s), make it thin, and reconfigure the pagefile inside Windows to that new disk/partition. If you have 16 GB of vRAM on the VM then 4 GB should be fine. You might also, if possible, locate all VMs' pagefile disks on another datastore, which could be a thin-provisioned LUN from the SAN side and/or backed by less expensive disks. If you are quite sure the internal Windows pagefile will be used very little, then some slower disk could be very acceptable.
Joris85 wrote: How is this possible? I thought if you create a double link to 1 cisco switch, you need lacp?
If you connect a vSwitch with, for example, two vmnic (physical NIC port) uplinks using the default "Port ID" NIC teaming policy and connect it to a physical switch, then no LACP should or could be used. What is actually happening is that, from the physical switch's point of view, your single vSwitch will appear as two different switches, each with a certain number of MAC addresses connected to it. This effect is created since each VM is "pinned" to an uplink vmnic and all traffic from that VM will always leave through the same interface.
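The pinning behaviour can be illustrated with a simplified model — an assumption for illustration, not VMware's actual algorithm — where each virtual port is mapped to one uplink, so each uplink presents a disjoint set of VM MAC addresses to the physical switch:

```python
# Simplified model of "Route based on originating virtual port ID":
# each VM port is deterministically pinned to one uplink vmnic.

def pin_uplink(port_id: int, uplinks: list[str]) -> str:
    """Map a virtual port ID to an uplink (illustrative round-robin model)."""
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
for port in range(4):
    print(port, pin_uplink(port, uplinks))
```

Because a given VM's MAC address only ever appears behind one uplink at a time, the physical switch never sees the same MAC flapping between ports, which is why no LACP or port-channel configuration is needed.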
The configuration looks fine, I think. It seems like your physical switch is doing some kind of flooding of the frames for some reason. Two things:
1. Are you totally sure the cables are correctly attached? One possible reason is that if the cables are in some way mismatched between the ESXi host and the physical switch, this could be extremely confusing for the switches, where MAC addresses appear "everywhere". Double-check, and if possible enable CDP on your vSwitches and verify on the Cisco CLI.
2. Have you checked the log files of your physical switch? You might get some clues about potential issues like MAC flapping and similar.
Agree with MKguy above. Using a multicast IP and Ethernet address as the default gateway will typically mean that all traffic to the default gateway will be flooded to ALL ports in the LAN, unless the switches have support for (and have enabled) IGMP snooping.
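The reason for the flooding is that a switch cannot learn a multicast MAC address on a port: an Ethernet address is multicast when the least-significant bit of its first octet is 1. A small sketch of that check:

```python
# A MAC address is multicast if the I/G bit (lowest bit of the first
# octet) is set. Switches never learn such addresses in their MAC
# table, so frames to them are flooded unless IGMP snooping constrains it.

def is_multicast_mac(mac: str) -> bool:
    """True if the colon-separated MAC address is a multicast address."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)

print(is_multicast_mac("01:00:5e:7f:00:01"))  # True  (IPv4 multicast range)
print(is_multicast_mac("00:50:56:ab:cd:ef"))  # False (a unicast VMware OUI)
```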
You could do a somewhat similar action with the standard vSwitch, however only at the portgroup level and not per single port as with the Distributed vSwitch.
It should be possible, but be careful not to allow the two Windows machines write access at the same time. Just be very sure to first detach it from the first host and disable the iSCSI service, then re-mask the LUN at the iSCSI target to allow the second server, and then set up the iSCSI initiator on the new Windows machine.
Marius - Roma wrote: What is the best way to dimension and place the Windows page file? For physical Windows Servers many people suggest to allocate a fixed size page file whose size is 1,5 times the RAM: does it make sense for hosted VMs as well? Is there any best practice or guideline to refere to?
As noted by A.P., the 1.5 x RAM rule is really just an arbitrary number someone at Microsoft came up with at some point, which has then entered the best practices without any real valid reason. However, there are some things to consider: Leaving it system managed is the most simple way, but could lead to higher fragmentation of your C: drive. If setting a fixed size, you should really consider how much space you would like Windows to be able to push into the pagefile with internal memory overcommit. Typically, if the Windows machine has a decent amount of RAM, the pagefile usage is quite low. If you add another virtual hard drive to the VM and put pagefile.sys on that, you have a slightly more complex setup, but might get savings if it is possible to exclude that drive in your backup tool and VM replication settings (if used). You could also possibly put all such VM pagefile disks on some kind of thin-provisioned LUN/NFS, since they will consume lots of space but be used very little.
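To make the sizing trade-off concrete, here is a small sketch comparing the legacy 1.5 x RAM rule with a fixed, smaller pagefile; the 25 % fraction and 8 GB cap are illustrative assumptions, not a VMware or Microsoft recommendation:

```python
# Compare the legacy "1.5 x RAM" pagefile rule with a modest fixed size.
# The fraction and cap below are illustrative assumptions only.

def legacy_rule_gb(vram_gb: float) -> float:
    """Old rule of thumb: pagefile = 1.5 x RAM."""
    return 1.5 * vram_gb

def fixed_pagefile_gb(vram_gb: float, fraction: float = 0.25, cap: float = 8.0) -> float:
    """Assumed sizing: a quarter of vRAM, capped at 8 GB."""
    return min(vram_gb * fraction, cap)

print(legacy_rule_gb(16))     # 24.0
print(fixed_pagefile_gb(16))  # 4.0
```

For a 16 GB VM the legacy rule would reserve 24 GB of datastore space for a file that is typically barely used, while a fixed 4 GB pagefile matches the suggestion above.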
iceman10 wrote: We don't have separate subnet for this, same subnet for everything in vSphere. Should I still have two switches to separate the traffic?
If you do not have much traffic at all, you will likely not need to add any additional physical switches. It could however be good for increased redundancy. As for different subnets, it is a good best practice to have separate VLANs for VMs, ESXi management and vMotion (also for NFS and iSCSI, if used). If it is a quite small installation it might not matter too much to use the same VLAN / IP subnet all over; however, you could quite easily change the IP network for vMotion just to get it logically separated from the rest of the traffic.
zenking wrote: Our Win2k8 R2 VMs have the e1000 vnics, which appear to set 9014 MTU without any way to adjust. The vswitches and host nics are set to 9000, but my understanding is that they allow for the header overhead. The 5424 is stuck at 9000 and does not allow for the overhead. vmkping tests top out at 8972 (since the header is added), so the switch has to be the bottleneck unless I'm wrong about the vswitch supporting the header.
The 9014 is just a somewhat odd way of saying 9000. The MTU is 9000, and the 14 extra bytes are what they consider the layer-two overhead (14 bytes at the front of the frame), although they also forgot the 4-byte CRC checksum at the end. Two things, however: first, the 8972 is the expected payload of vmkping and actually shows that your network is working fine. For some details on this, see: http://rickardnobel.se/troubleshoot-jumbo-frames-with-vmkping Second: it does not matter if one part of the network is able to use, say, 9200, and another part only 9000. As long as your infrastructure (vSwitches and physical switches) also supports at least 9000, it will work fine. The hosts will not blindly send frames at their maximum size, but actually perform a simple form of negotiation in the TCP session handshake. This means that your machines will select the lowest common MTU, and the 9200 in your case will not be needed. See this for some details: http://rickardnobel.se/different-jumbo-frames-settings-on-the-same-vlan
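The handshake negotiation mentioned above can be sketched as follows: each side advertises an MSS of roughly its MTU minus 40 bytes of IPv4 and TCP headers, and the effective segment size is the lower of the two. This is a simplified model assuming IPv4 without header options:

```python
# Sketch of effective MSS agreement in the TCP handshake. Each host
# advertises MTU minus the IPv4+TCP header overhead, and the sender
# uses the lower of its own and the peer's advertised MSS.

IP_TCP_OVERHEAD = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

def advertised_mss(mtu: int) -> int:
    """MSS a host advertises in its SYN for a given interface MTU."""
    return mtu - IP_TCP_OVERHEAD

def effective_mss(local_mtu: int, peer_mtu: int) -> int:
    """Segment size actually used: the lower of the two advertised values."""
    return min(advertised_mss(local_mtu), advertised_mss(peer_mtu))

print(effective_mss(9200, 9000))  # 8960
print(effective_mss(1500, 9000))  # 1460
```

So a host configured for 9200 talking to a host configured for 9000 simply settles on 8960-byte segments, which is why mixed jumbo-frame settings on the same VLAN still interoperate.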
Any update on this problem? Since it really should not work like that, it would be interesting to know the reason.
kbensch wrote: We have a load of VM's that have multiple network cards each on it own VLAN. How can I restrict the traffic on the virtualisation layer to a single purpose. For example, I dont want to allow ssh or rdp traffic between servers other than the primary interface. The secondary interface is used for monitoring and the tertiary interface is used for SAN access.
If you want to prevent any SSH or RDP sessions from arriving on a certain interface, you could also configure the local firewall of these computers to block incoming sessions on the other interfaces.