rickardnobel's Accepted Solutions

Marius - Roma wrote: What is the best way to dimension and place the Windows page file? For physical Windows Servers many people suggest allocating a fixed size page file whose size is 1.5 times the RAM: does it make sense for hosted VMs as well? Is there any best practice or guideline to refer to?

As noted by A.P., the 1.5 x RAM figure is really just an arbitrary number someone at Microsoft came up with at some point, and it then entered the best practices without any real valid reason. However, there are some things to consider: Leaving it system managed is the simplest way, but could lead to higher fragmentation of your C: drive. If setting a fixed size you should really consider how much space you would like Windows to be able to push into the pagefile with internal memory overcommit. Typically, if the Windows machine has a decent amount of RAM, the page file usage is quite low. If you add another virtual hard drive to the VM and put pagefile.sys on it, you have a slightly more complex setup, but might get savings if it is possible to exclude that drive in your backup tool and VM replication settings (if used). You could also possibly put all VM guest files on some kind of thin provisioned LUN/NFS, since the page file will consume lots of space but be used very little.
zenking wrote: Our Win2k8 R2 VMs have the e1000 vnics, which appear to set 9014 MTU without any way to adjust. The vswitches and host nics are set to 9000, but my understanding is that they allow for the header overhead. The 5424 is stuck at 9000 and does not allow for the overhead. vmkping tests top out at 8972 (since the header is added), so the switch has to be the bottleneck unless I'm wrong about the vswitch supporting the header.

The 9014 is just a slightly silly way of saying 9000. The MTU is 9000, and the 14 extra bytes are what they think is the layer two overhead (14 bytes at the front of the frame), but they also forgot the 4 byte CRC checksum at the end. Two things, however: the 8972 is the expected payload of vmkping and actually shows that your network is working fine. For some details of this, see: http://rickardnobel.se/troubleshoot-jumbo-frames-with-vmkping

Second: it does not matter if one part of the network is really able to use, say, 9200, and some other part 9000. As long as your infrastructure (vSwitches and physical switches) also supports at least 9000 it will work fine. The hosts will not blindly send frames at their max size, but actually do a simple form of negotiation in the TCP session handshake. This means that your machines will select the lowest common MTU, and the 9200 in your case will not be needed. See this for some details: http://rickardnobel.se/different-jumbo-frames-settings-on-the-same-vlan
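To make the byte arithmetic above concrete, here is a small sketch. The numbers are the standard IPv4, ICMP and Ethernet header sizes, nothing VMware-specific:

```shell
# Why a vmkping payload of 8972 bytes is the maximum on a 9000-byte MTU path:
MTU=9000
IP_HEADER=20       # IPv4 header without options
ICMP_HEADER=8      # ICMP echo request header
echo "max ping payload: $((MTU - IP_HEADER - ICMP_HEADER)) bytes"   # 8972

# The full Ethernet frame on the wire is larger than the MTU:
ETH_HEADER=14      # destination MAC + source MAC + EtherType
CRC=4              # frame check sequence at the end
echo "frame size: $((MTU + ETH_HEADER + CRC)) bytes"                # 9018
```

On an ESXi host the corresponding test is vmkping -d -s 8972 <ip>, where -d sets the don't-fragment bit so the frame has to pass the network at full size.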
gaganvmware wrote: this mac belongs to window vm that has to be migrated between esx servers in cluster and this mac vm was migrated 7 times on 5/13. is this a big deal? is this normal.. right?

If you know that the VM with that MAC address was in fact moved by DRS / vMotion several times that day, and you know the two switch ports mentioned in the log are connected to the two ESXi hosts, then everything is perfectly normal. It might seem a bit unusual to the physical switch that a MAC address changes location several times during the day, but it is nothing more "strange" than a user walking around a large office and connecting to the physical network at different locations, sending some packets, then disconnecting and later connecting at some other physical position in the network. The switches should have no problem with that. The only important thing is that the MAC address must never be learned from different locations at the same time. That would mean you have a network layer two loop, which is very dangerous. In your situation I would say you can safely ignore the switch error log.
TobiasM wrote: Hi Rickard, I can't see any message like "Snapshot consolidate needed" on the summary tab

Ok, that message appears when vCenter notices that there is a mismatch between the snapshot information (*.vmsd file) and the actual snapshots being in use (delta-xxxx.vmdk). Do you have no option to remove/repair the snapshot from the NetApp side?
Paul Salotti wrote: But I've read elsewhere that VMXNET3 is experimental.

The VMXNET3 is not experimental; it has been around since version 4.0 from 2009 or so and is typically recommended for all machines where you want to optimize networking performance.
On at least ESXi 5.1 you could use find /etc -mtime -3 to see files changed in the last three days, for example.
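A small sketch of how the -mtime predicate behaves; this is standard find semantics and runs on any POSIX-like system, not only ESXi (the sandbox directory and file name are just for demonstration):

```shell
# Create a sandbox with one freshly-touched file
tmp=$(mktemp -d)
touch "$tmp/recent.log"

# -mtime -3 matches files modified less than 3*24 hours ago,
# so a file touched just now is always listed.
find "$tmp" -mtime -3 -type f

# -mtime +3 would instead match files *older* than three days.
rm -r "$tmp"
```

Note that ESXi's find is BusyBox-based, so only the common predicates are available there.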
Guardian1234 wrote: - Is this file overwritten at every boot or does it get archived/moved/saved for historical reference, or is it simply added/appended to at every bootup phase?

I did two reboots of an ESXi 5.1 host; the boot.gz was 30506 bytes in the first case and 30416 bytes after the second reboot. A great number of new lines was added, but still the size was a few bytes less. This strongly indicates to me that the file is overwritten at each boot. (Server on persistent storage as well.) You can read the file with zcat /var/log/boot.gz | more.
Check for snapshots on the VM! If there is an old snapshot, the old portgroup remains in the summary.
Rygar wrote: "Native VLAN ID on ESXi/ESX VST Mode is not supported. Do not assign a VLAN to a port group that is the same as the native VLAN ID of the physical switch. Native VLAN packets are not tagged with the VLAN ID on the outgoing traffic toward the ESXi/ESX host. Therefore, if the ESXi/ESX host is set to VST mode, it drops the packets that are lacking a VLAN tag."

The ESXi host will not necessarily drop untagged frames, but they need a portgroup with no VLAN defined (which could be expressed as VLAN 0). This also means that the VM traffic from such a portgroup will enter the physical switch port's "native VLAN" (i.e. the untagged VLAN).

So my question is, is it possible to have Default VLAN 1 (all workstations are on this VLAN) talk to VLAN 1955?

If they really are different VLANs on your physical network then you need to have some kind of routing between the two networks. Or do you see VLAN 1 and VLAN 1955 as the same network and broadcast domain?
jedijeff wrote: I guess I don't understand, that since the drivers are the same LLDP should work? Correct?

That seems reasonable; there should not be anything else to configure on the vSphere side other than enabling the feature. On your physical switches, are you sure LLDP is enabled on the other cluster? And do you see any incoming LLDP from the ESXi hosts on the switch ports?
vadood wrote: Again I want to configure etherchannel and route based on ip hash. But when I open settings for uplink port group, the policies for Teaming and Failover are greyed out and cannot be changed. They apparently inherit configuration from somewhere else but I do not know where!

You will actually have to set the NIC teaming policies on the portgroups, and not on the uplink group as one might expect.
The strongly recommended best practice is to configure a separate VLAN with an IP network used only for iSCSI, possibly even with dedicated layer two switches used only for iSCSI, depending on the type and quality of the switches.

stevehoot wrote: Just to be 100% sure, what I appear to have done is correct, and that ESXI doesn't allow an iSCSI VMK to have a default gateway on it?

It is not really that the storage network cannot have a default gateway, but rather that the ESXi operating system (vmkernel) has a single IP stack with an internal routing table shared across the different functions like management, vMotion, storage and others. This means that you have a common default gateway per host and not per function, and this will most typically mean that the "common" default gateway will be on the management network.
A R wrote: If all VMs are in this single resource pool, and no new VMs are created anywhere else, does this have any potential negative performance implications?

As long as no VMs are created outside of this Resource Pool it should not have any impact at all. If someone did, however, create a single VM outside the pool and set some really high shares value like 20000, then this single VM could be "worth" more than the entire collection of other VMs if there was CPU congestion.
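As an illustration of the shares math: under contention, siblings at the same level split capacity in proportion to their shares. The 20000 figure is from the example above; the pool's 4000 shares is an assumed value, purely for demonstration:

```shell
# Illustrative only: how CPU shares split capacity under contention.
POOL_SHARES=4000     # assumed shares on the resource pool (all VMs together)
VM_SHARES=20000      # the single high-shares VM created outside the pool
TOTAL=$((POOL_SHARES + VM_SHARES))

echo "resource pool (all VMs together): $((100 * POOL_SHARES / TOTAL))%"   # 16%
echo "single outside VM:                $((100 * VM_SHARES / TOTAL))%"     # 83%
```

So one carelessly configured VM sitting next to the pool could claim roughly five times the CPU of everything inside the pool combined during congestion.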
IT_Architect wrote: I already know the thin and thick sizes.

If the sizes are well apart from each other it would be easy to see how much is transferred.
RSEngineer wrote: Rick, totally agree....one thing I am curious about, though, is how often EST mode IS/WAS actually used...

I have seen it sometimes with Management and vMotion sharing the same interface and both on VLAN 1 (native/untagged) on the physical switches, but for VM access it is not common, of course.
Ranjna Aggarwal wrote: Thanks rickardnobel, tell me one more thing: are multiple vSwitches generally used in production environments, or only as few as possible?

I tend to do something very similar to what a.p. described above. A customer I am helping at the moment with a new environment will have something like:

vSwitch0 - for Management and vMotion, using active/standby - two vmks and two vmnics, different VLAN and IP range
vSwitch1 - for iSCSI - two vmks and two vmnics, using active/unused
vSwitch2 - for production VM networking - several portgroups with different VLANs, two or maybe four vmnics - not decided
and possibly: vSwitch3 - for test VMs, where the wish is total physical separation and no risk of the test VM traffic disturbing the production VMs.

So more than 4-5 vSwitches is most often not necessary, but possible. As for the memory overhead, it seems minimal, so it is just a matter of the number of physical vmnic ports and keeping the virtual networking setup flexible, secure and simple.
I cannot see why you shouldn't be able to do what you describe. It is somewhat unusual, but it should work to create several VMK adapters on the same vSwitch and do the same active/unused configuration as you already did, but set another VLAN ID and IP address on the new adapters. Of course the two logical VMK adapters will share the same network link, but you are fully aware of that. As long as it is not stated in some manual or KB article as an unsupported solution you should be fine.
Qwerty256 wrote: did the same, result in screenshot, pretty better isn't it?

So it was indeed the NUMA configuration that caused it. Nice that you got your %RDY back to normal!
johnxgr wrote: hp switch has not any configuration. they are all default and resetted.

Are you sure that the HP switch does not have any configuration? Could you connect through the serial cable or telnet, do a "show run" and paste the result here? When you attach the physical cables, do you get link on the switch? Most HP switches do automatically handle straight / crossed TP cables, so that should not be the issue, but just to be sure.

johnxgr wrote: so i have to configure vswitch and then i can connect esx with hp switch?

The default vSwitch configuration on a new ESXi host should be enough for you to connect to any physical switch. If you are able to reach the host with the vSphere Client when directly attached, that should be enough.
jasonvp wrote:
Rickard Nobel wrote: You can not have your two vmnics (physical NIC ports) connected to two vSwitches and at the same time have any "teaming". You will have to remove one of the vSwitches and recreate the portgroups on the remaining vSwitch. The VLANs will still isolate the different networks.
Thanks for the pointers; I finally had an opportunity to try this out and it's working as expected. I'd assign you the "correct answer" but apparently the forum won't let me since you already have a "helpful answer".

Nice that you got it to work! When doing the actual configuration, with vSwitch IP hash and physical switch LAG config, it can be a bit difficult to do things in the correct order so as not to lose the connection to the ESXi host. You could select this message if you like.