RenaudL's Posts

Yes, Vmxnet3 NICs do not work with FT yet. Vmxnet3 support for FT will come in a future release.
We successfully reproduced the issue in-house and identified the problem; a fix will be issued in a future patch or update release. Thanks again.
Thanks for the report, we're looking into it.
RenaudL have you (or anyone else) been able to reproduce this? Is there a bug tracker I can file to or track this at?

I'm pretty busy with other issues at the moment, but be assured that this is in my queue.
Thanks for performing these experiments, that's exactly what I would have suggested. Just to summarize, can you confirm this is what you're noticing?
- With e1000, neither pxe-tftp nor a Linux tftp client/server can communicate.
- With vmxnet3, pxe-tftp doesn't work, but a tftp client/server can.
This is strange... What about running both the tftp client & server in 2 VMs on the same host/vSwitch?
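To take the guest tftp binaries out of the equation entirely, a read request can be hand-built, since TFTP's wire format (RFC 1350) is trivial. A minimal sketch in Python — the host and filename below are hypothetical — that simply checks whether any DATA or ERROR datagram comes back:

```python
# Minimal TFTP read-request (RRQ) probe, per RFC 1350.
# Useful to check whether TFTP packets cross the vSwitch at all,
# independent of any tftp client implementation in the guest.
import socket

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    # RRQ packet layout: opcode 1, filename, NUL, mode, NUL
    return (b"\x00\x01"
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

def probe(host: str, filename: str, timeout: float = 2.0) -> bytes:
    """Send an RRQ to <host>:69 and return the first reply datagram
    (DATA, opcode 3, or ERROR, opcode 5); raises socket.timeout if
    nothing comes back."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(build_rrq(filename), (host, 69))
    data, _addr = s.recvfrom(1024)
    return data

if __name__ == "__main__":
    # Hypothetical server and file name; replace with your own.
    print(build_rrq("pxelinux.0").hex())
```

If the probe times out with both adapters, the packets are probably being dropped before they reach the server, which would point at the vSwitch or host configuration rather than the client.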
- Is an individual VM throttled to 1 Gbps? I get throughput to an individual VM at ~1 Gbps, but if I connect to more than one, all connected via the same 10 Gigabit Ethernet NIC, I get several Gbps. Is there rate limiting going on? (No, I don't have any QoS defined in the vSwitch.)
No, the VMs can push as fast as they have CPU for (unless you use traffic shaping, obviously). The link speed shown in a guest is a dummy value.
- In ESX 3.5 you had to manually configure NetQueue; is it enabled by default in vSphere, or do I need to enable it manually? Can I view the status of the NIC features anywhere?
It should be on by default, I believe.
- Similar to enabling NetQueue, is there a process for enabling Jumbo Frames?
Change the MTU of the vSwitch, change the MTU of the guest, and you're good to go.
- Any other tips or documents for a 10 Gigabit setup?
Not really, except that we recommend using paravirtualized adapters like Enhanced Vmxnet or Vmxnet3 for maximum performance. We can easily drive multiple Gb/s with a single VM in our setups, so there's probably a lot of room for improvement in yours.
I was thinking you could do this when you add a portgroup. Apparently it's only after it's been created. Then you can do esxcfg-vswitch -p VMkernel -m 9000 vSwitch1 to change it for the portgroup. I should have checked that first.

The MTU is not a portgroup property; it's a vSwitch property. In your command, esxcfg-vswitch ignores the portgroup parameter and actually applies the MTU setting to the vSwitch as a whole.
In a vSphere FT cluster, if one of the ESX hosts is LAN-isolated from the other, what will be the cluster behavior? Which server has priority, primary or secondary?

What happens is that both VMs try to take control, as each considers the other one dead. They then race to lock a special file on the shared volume they reside on, an operation which is guaranteed to be atomic. Only one VM wins this race and becomes the primary; the losing VM commits suicide. To summarize, you can't predict which VM survives, but FT guarantees you will never end up in what we call a "split brain" situation, where two primaries of the same VM are running.
Hmmm, it sounds more like a general configuration issue than an issue with the device itself. Did you check all the settings inside and outside the guests?
Disclaimer: I directly worked on Vmxnet3, so I'm probably biased. I would recommend using Vmxnet3. More than just having the latest bells and whistles, its overhead is also smaller than e1000 (and therefore its performance is better) and is future-proof as new virtualization enhancements will continuously be implemented on top of it. The device has been intensively tested for months and the drivers we provide are of the highest quality. I understand the reluctance to use a whole new device, but you won't be disappointed if you give it a try.
Please note that on ESXi, "ping" and "vmkping" are actually the same binary, and both go through the VMkernel stack, so using one or the other yields the same results.
Support for 100 Mbit NICs was indeed dropped.
Hi,

This is a well-known issue with the 3.5 Netflow exporter. The problem lies in the design of ESX's vSwitches, which don't have true/static virtual port identifiers. The exporter therefore uses the portIDs of the relevant ports, but unfortunately these values can't be easily mapped back to the precise user of the virtual port. This is the main reason the feature is only experimental -- we didn't find a way to design it up to VMware's standards because of the protocol's limitations. I'd be happy to take any feedback on how to improve it.
Which virtual adapter are you using? You need an Enhanced Vmxnet adapter to enable Jumbo Frames.
The only limitation is that SCP uses more CPU for encryption of the data. On ESXi, it hurts here more due to limitations of busybox and scheduling.

Dunno whether this is true. The biggest problem -- by far -- with the SCP protocol is its limited transfer window.
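To see why the transfer window dominates: at most one window of data can be in flight per round trip, so throughput is capped at window/RTT no matter how fat the pipe is. The 64 KiB window below is an illustrative figure for older SSH implementations, not a value measured on this setup:

```python
# Why a fixed transfer window caps SCP throughput: only one window of
# data can be outstanding per round trip, so
#   throughput <= window_size / RTT
# regardless of link capacity.

def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput in Mbit/s for a given window and RTT."""
    return window_bytes * 8 / rtt_seconds / 1e6

if __name__ == "__main__":
    window = 64 * 1024  # 64 KiB in-flight limit (illustrative)
    for rtt_ms in (1, 10, 100):
        mbps = max_throughput_mbps(window, rtt_ms / 1000)
        print(f"RTT {rtt_ms:>3} ms -> at most {mbps:.1f} Mbit/s")
```

On a LAN with sub-millisecond RTT the cap is largely invisible, which is why the same SCP setup can feel fine locally yet crawl over a higher-latency link.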
I can certainly see why such options would be enforced, but if this was the case, a physical switch with multiple machines should suffer the same issues, right?

We're talking about the policy of the switch's port, not of the switch as a whole.
I have to concur with Lightbulb: I would take a close look at the frontal switch.
That is the best part. They both connect (one at a time)!! So whichever VM is turned on first gets the IP (whether DHCP or static), and the other is left with local access only. In order for the other VM to get an IP that can route to the outside world, the host has to be rebooted.

Damn, I was suspecting an old bug of ours, but the symptoms don't match... What if you boot up both VMs, then power off the one with networking connectivity? Can the other VM then reach the network? Have you checked for a MAC address collision?
The regular E1000 on the Win2008 Server VM, and 'flexible' on the WinXP VM. I tried it with and without promiscuous mode, without any luck.

Thanks for the information. Which one of these VMs is getting connectivity?