MattiasN81's Accepted Solutions

I had the same problem in one of my labs. It's not the exact same NIC, but when I switched from the ne1000 driver to the e1000e driver I got a huge improvement. Try disabling ne1000 and using e1000e instead with this command, "esxcli system module set --enabled=false --module=ne1000", then reboot the host.
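If it helps, here is roughly how I would do it and verify it from the ESXi shell (assuming your host also carries both modules):

# Check which of the two modules are present and enabled
esxcli system module list | grep -i -E 'ne1000|e1000e'
# Disable the ne1000 driver so the NIC falls back to e1000e
esxcli system module set --enabled=false --module=ne1000
# Reboot for the change to take effect
reboot
# After the reboot, confirm which driver the vmnic is now using
esxcli network nic list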
According to Barracuda Networks they only support VMware Horizon environments, so if anything happens you will have a hard time getting support from both VMware and Barracuda. Still, I have seen HAProxy installations with PSCs that work just fine, so it's trial and error; see what happens.
Actually, you don't need to change it. Just like a physical switch, the vSwitch simply enables jumbo frames when you set it to MTU 9000; it doesn't require you to use jumbos, it allows you to use MTUs from 1500 to 9000. However, the sender and receiver NICs should always match MTU size (some arrays do accept MTU 1500 when jumbos are configured, though). Example:

Good configs:
1. vmnic (MTU 9000) -> vSwitch (MTU 9000) -> storage NIC (MTU 9000)
2. vmnic (MTU 1500) -> vSwitch (MTU 9000) -> storage NIC (MTU 1500)

Bad configs:
1. vmnic (MTU 9000) -> vSwitch (MTU 1500) -> storage NIC (MTU 9000)
2. vmnic (MTU 1500) -> vSwitch (MTU 9000) -> storage NIC (MTU 9000)
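If you want to set it up and verify it end to end, a rough sketch from the ESXi shell (vSwitch1, vmk1 and the target IP are just examples, adjust to your setup):

# Set the vSwitch MTU to 9000
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# Set the vmkernel interface MTU to 9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Test jumbo frames end to end; -d disallows fragmentation,
# 8972 = 9000 minus the 28 bytes of IP/ICMP headers
vmkping -d -s 8972 <storage-array-ip>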
There is no feature in vSphere that can accomplish this. The nearest thing you can use is to restrict certain VMs from crossing over to other NUMA nodes; I'm not sure if this is possible with the free licence, though.
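For reference, a minimal sketch of how NUMA node affinity is usually pinned through the VM's advanced settings (node 0 is just an example value):

# In the VM's .vmx file, or via Edit Settings > Options > Advanced > Configuration Parameters:
numa.nodeAffinity = "0"

This keeps the VM's vCPUs and memory on NUMA node 0, so the scheduler won't move it to other nodes.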
I took a quick look at your laptop specs; it seems that the NIC is only 10/100. VMware removed all compatibility for 10/100 NICs when they released ESXi 5.0, and I don't think anyone bothered to make drivers for those NICs since almost all NICs are gigabit anyway.
The thing is that almost all Turbo Boost CPUs can't run all cores at the highest clock frequency at the same time, due to heat and stability. The maximum (boosted) clock frequency only applies when one or two cores are active. The E5-2620 v3 has a base clock of 2.4 GHz, an all-core boost frequency of at most 2.6 GHz, and a single-core boost of 3.2 GHz. We can take the E7-8890 v4 as an example, Intel's latest monster CPU with 24 cores and a max clock frequency of 3.4 GHz. If that CPU were able to run all cores simultaneously at 3.4 GHz, you would practically need to cool it with liquid nitrogen just to keep it alive.
There are several factors that come into play here. SIOC itself only acts on ESXi workloads, not on other workloads handled by the storage array such as RAID rebuilds, CIFS workloads and so on. In your case, according to the message "An unmanaged I/O workload is detected on a SIOC-enabled datastore", SIOC detected a workload above the specified threshold (25 ms), but because ESXi classified it as a non-ESXi workload, SIOC couldn't do anything with it other than report it.

Here is where the tricky part comes in. In this case it actually was a VM that caused the high latency, so we mere mortals would say "Hey, a VM caused it, so it's sure as hell an ESXi workload." Well, that's not entirely true. What type of workload it is, and how the storage array handles it, determines how SIOC will react to it. That's why it is crucial to have an array/solution that is supported with SIOC.

I can give an example from my own experience with SIOC and an EMC array running an unsupported setup with auto-tiering and FAST Cache. The problem was the same as yours: a VM did some stuff that resulted in high latency. The problem wasn't the VM's workload per se, but when the VM started doing its thing, the storage array did what it was supposed to do: place hot data in the cache, move cold data to disks, and kick off a tiering job. Due to the extremely high workload from the VM, the array couldn't keep up, and the result, from VMware's perspective, was high latency on that datastore. But SIOC couldn't do anything, because it was never the VMs that caused the latency; it was the storage operations in the backend.

I hope this clarifies a little how SIOC operates.
If a failure occurs, the machine will boot up on another host even if you have a passthrough device attached to the VM. You can also enable vMotion support for the USB device so you can migrate the VM live.

Procedure:
1. In the vSphere Client inventory, right-click the virtual machine and select Edit Settings.
2. Click the Hardware tab and click Add.
3. Select USB Device and click Next.
4. (Optional) Select Support vMotion while device is connected.
5. If you do not plan to migrate a virtual machine with USB devices attached, deselect the Support vMotion option.
There is no web client for administering a standalone ESXi host, only the Windows client. However, the vCenter Server Appliance has a built-in deployment tool, so to deploy a vCenter appliance you don't need the C# client, but you do need to be able to install the Client Integration Plugin. If you need a Windows vCenter Server, you have to deploy it the old-fashioned way.
The only supported way is to upgrade your secondary vCenter to 6.0 and then upgrade your VRM on that site to 6.0/6.1.

* vSphere Replication 5.8 only supports vCenter Server 5.1 up to 5.5
* vSphere Replication 6.0/6.1 only supports vCenter Server 6.0
* vCenter Server 6.0 Update 1 only supports vSphere Replication 6.1

I have only seen problems when leaving one VRM on an older version: replication failures, dropped connections between the VRMs, and so on. I don't think VMware even supports mixed VRM versions.
The vStorage APIs are locked out in the hypervisor-only (free) license, so a snapshot-based backup would not work. However, Veeam has the ability to use SSH/SCP to back up (full copy) VMs.
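As a rough illustration of what such an SSH/SCP copy boils down to (host name, datastore and VM folder are hypothetical, and the VM should be powered off so the copied files are consistent):

# Enable SSH on the host first, then pull the VM's folder off the datastore
scp -r root@esxi-host:/vmfs/volumes/datastore1/MyVM /backup/MyVM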
-rw-------    1 root     root      100.0G Jan 25 13:41 Exchange-flat.vmdk
-rw-------    1 root     root         471 Jan 25 13:45 Exchange.vmdk
-rw-------    1 root     root        6.3M Feb  3 13:28 Exchange_1-000001-ctk.vmdk
-rw-------    1 root     root      560.2M Feb  3 13:30 Exchange_1-000001-delta.vmdk
-rw-------    1 root     root         393 Feb  3 13:27 Exchange_1-000001.vmdk
-rw-------    1 root     root        6.3M Jan 30 15:01 Exchange_1-ctk.vmdk
-rw-------    1 root     root      100.0G Jan 30 15:01 Exchange_1-flat.vmdk
-rw-------    1 root     root         535 Jan 30 14:58 Exchange_1.vmdk

It seems that Exchange_1 is the current disk, and Exchange is orphaned:
* Exchange-flat.vmdk hasn't been used since Jan 25.
* Exchange_1 was modified Jan 30, and you have a snapshot on that machine; that's why you have these files: Exchange_1-000001-ctk.vmdk, Exchange_1-000001-delta.vmdk, Exchange_1-000001.vmdk.
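One quick way to double-check which descriptor the VM actually references before cleaning anything up (the datastore path here is just an assumption):

# Check which vmdk the VM's configuration points to
grep -i vmdk /vmfs/volumes/datastore1/Exchange/Exchange.vmx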
Extract the bundle and import the zip file enic-2.1.2.62-esx55-offline_bundle-2340678.zip into VUM. Do the same for the fnic package.
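If you'd rather skip VUM, a minimal sketch of installing the offline bundle straight from the ESXi shell (the datastore path is an assumption; esxcli needs the full path):

# Install the enic offline bundle directly on the host
esxcli software vib install -d /vmfs/volumes/datastore1/enic-2.1.2.62-esx55-offline_bundle-2340678.zip
# Repeat for the fnic bundle, then reboot the host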
You can't enable full SNMP on ESXi because the full SNMP package was removed along with the removal of the "Red Hat" management console; the old ESX installation, however, had the full package. ESXi can send SNMP traps, though.
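For what it's worth, a rough sketch of enabling trap sending from the ESXi shell (the community string and trap receiver are placeholders):

# Set a community string, point traps at your receiver, and enable the agent
esxcli system snmp set --communities public
esxcli system snmp set --targets trap-receiver.example.com@162/public
esxcli system snmp set --enable true
# Send a test trap to verify the receiver gets it
esxcli system snmp test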