RenaudL's Accepted Solutions

In a vSphere FT cluster, if one ESX host is LAN-isolated from the other, what will the cluster behavior be? Which server has priority, primary or secondary?

What happens is that both VMs try to take control, as they both consider the other one dead. They then race to lock a special file on the shared volume they reside on, an operation which is guaranteed to be atomic. Only one VM wins this race and becomes the primary; the losing VM commits suicide. To summarize: you can't predict which VM survives, but FT guarantees you will never end up in what we call a "split brain" situation, where two primaries of the same VM are running.
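The winner-takes-all race described above can be sketched with POSIX atomic file creation. This is only an illustration of the idea: the real mechanism relies on VMFS on-disk locking, not a plain lock file, and the names here are made up for the example.

```python
import os
import tempfile

def try_become_primary(lock_path):
    """Attempt to take the primary role by atomically creating a lock file.

    O_CREAT | O_EXCL makes creation atomic: exactly one contender can
    succeed, so two instances can never both become primary.
    """
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True   # we won the race: become primary
    except FileExistsError:
        return False  # we lost the race: shut down

# Two isolated VM instances race for the same lock on a shared volume.
lock = os.path.join(tempfile.mkdtemp(), "vm.lck")
results = [try_become_primary(lock), try_become_primary(lock)]
# Exactly one contender wins, so a split brain is impossible.
```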
Disclaimer: I worked directly on Vmxnet3, so I'm probably biased. I would recommend using Vmxnet3. Beyond having the latest bells and whistles, it also has lower overhead than e1000 (and therefore better performance), and it is future-proof: new virtualization enhancements will continuously be implemented on top of it. The device has been intensively tested for months, and the drivers we provide are of the highest quality. I understand the reluctance to use a whole new device, but you won't be disappointed if you give it a try.
Hi, this is a well-known issue with the 3.5 NetFlow exporter. The problem lies in the design of ESX's vSwitches, which don't have true/static virtual port identifiers. The exporter therefore uses the portIDs of the relevant ports, but unfortunately these values can't easily be mapped back to the precise user of the virtual port. This is the main reason the feature is only experimental: because of the protocol's limitations, we couldn't find a way to design it up to VMware's standards. I'd be happy to take any feedback on how to improve it.
This is strange; we made sure ESX 3.5 wouldn't crash in this situation. I remember running the experiment by looping switches and observing ESX handle the storm without any major issue (we actually have a built-in mechanism to detect such storms). Do not hesitate to contact VMware support about this.
Try this: given a physical interface named "ce0" that will be associated with VLAN 500, the formula to create the interface would be: ce + (VLAN number * 1000 + instance number). So in the example above, you would use an interface named ce500000 to tell the host to process 802.1Q-tagged Ethernet frames destined for VLAN 500.

That's exactly what he's trying to do. The command is good; he's trying to reach VLAN 91 on vmxnet1. Unfortunately, our current vmxnet driver, which uses GLDv2, doesn't support VLANs. An upgraded driver is in the pipe, but you'll have to wait a bit for it. In the meantime you could use an e1000 adapter; it'll do the job just fine here.
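The naming formula above is mechanical enough to express as a tiny helper. A minimal sketch (the function name is made up; only the driver-name + VLAN * 1000 + instance arithmetic comes from the post):

```python
def vlan_interface(driver, vlan, instance):
    """Build a Solaris-style VLAN interface name:
    driver name followed by (VLAN number * 1000 + instance number)."""
    return f"{driver}{vlan * 1000 + instance}"

# VLAN 500 on ce, instance 0 -> "ce500000", as in the example above.
name = vlan_interface("ce", 500, 0)
```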
Same thing with NIC chipsets?

Same thing with NIC chipsets. ESX doesn't care about the precise models of the underlying NICs in a team: they are driven in their own separate layer, and ESX interacts with them through a completely generic interface. So as long as a device is capable of sending/receiving Ethernet frames, it can be plugged into a team. This is different from Windows, where load balancing is (usually) implemented at the driver layer and thus requires homogeneous NICs. The only minor issue you can expect from mixing NICs in a team is a slight variation in performance. Because some NICs might be faster than others, the networking performance of your VMs might vary depending on the teaming decision ESX makes while forwarding frames. I've seen people team Gigabit NICs and 100 Mbit/s NICs together (and it should work fine as far as network connectivity is concerned) and then wonder why their throughput was so bumpy.

Also, since I have ya, does the software iSCSI initiator support Gigabit connections in ESX 3.5? We were also told that 3.02 only did 100Mb.

I'm no iSCSI expert, but I don't see why we would have any limitation on Gigabit links.
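The "bumpy throughput" effect of mixing NIC speeds in a team can be sketched as follows. This is purely illustrative: ESX's real teaming logic lives inside the vmkernel, and the uplink names, speeds, and port-ID mapping here are assumptions, not the actual algorithm.

```python
# A team mixing a Gigabit uplink and a 100 Mbit/s uplink (speeds in Mbit/s).
uplinks = [("vmnic0", 1000), ("vmnic1", 100)]

def uplink_for(port_id):
    """Pick an uplink for a VM from its virtual port ID.

    A simple modulo stands in for a port-ID based teaming policy:
    each virtual port is pinned to one member of the team.
    """
    return uplinks[port_id % len(uplinks)]

# Two VMs on adjacent virtual ports land on different uplinks, so one
# of them is capped at 100 Mbit/s while the other gets the full Gig.
nic_a = uplink_for(16)
nic_b = uplink_for(17)
```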
The virtual NICs of the guest OSes will always appear to be 1G NICs. Don't worry, though: this doesn't influence the maximum speed you are able to reach at all. It is only a hint, and in no way indicates the speed of the underlying ESX pNICs plugged into the server.
AFAIK, even though we do not support it, NFS over jumbo frames seems to work fine. I wasn't involved in that decision at all, so I don't know the precise reasons for its lack of support in ESX 3.5. It may just be insufficient QA, or scary data-corruption issues; I don't know. I know some of the adventurous people around here have tried it; maybe they can share their experience.
You may call it play, but this PC and its twin were members of a two-node Win2K3 cluster (shared storage was connected through iSCSI) for more than 2 years. Though it's hard to tell now whether the BCM5703 was really stressed or not.

I guess you are implying this setup used jumbo frames all this time... Weird.

Then the cluster was replaced with newer nodes and I got the old ones to play with ESX. Anyway, the short answer is "forget about it!", right?

Basically, yes: we're just following Broadcom's recommendations here (they're the chipmaker, after all), but I don't really know the specifics of the problems that may arise when using jumbo frames. The BCM5703 is an old chipset; it is unlikely we'll ever change anything about it now.

Another question: would it be a problem to use another dual NIC, "Intel Corporation 8254NXX Gigabit Ethernet Controller (rev 03)", on a vSwitch to provide iSCSI for a few guest OSes (Win2K3) with jumbo frames enabled? For both "play" and "not-heavily-loaded production"? Thank you!

Maybe. Joke aside, I don't know every NIC's supported features, so the best thing to do here would be to try it yourself and see how it goes: if "esxcfg-vswitch -m XXXX" succeeds, then you will be good to go.
It isn't possible (yet).
Only when I add the E1000 NIC does the network work, but not with the enhanced vmxnet.

You mean "change the virtual NIC device to be an E1000 instead of vmxnet", right?

So what else do I need to do to activate the enhanced network?

Can you post the output of "lspci -v" in your VM when you choose vmxnet? Can you also provide the last few lines of "dmesg" right after loading the vmxnet driver? Thanks.
http://communities.vmware.com/thread/115083?tstart=0 The patch is in the pipe.
Something else that may help you, or at the very least impact your thinking: VM traffic is only kept internal to the vSwitch if it's on the same port group. As soon as it needs to traverse port groups, it's placed on the wire (yes, even if it's in the same VLAN).

Wrong: VM traffic is kept internal to the vSwitch even if the conversation occurs across 2 separate port groups. The traffic is put on the wire if the communicating VMs are located on 2 different vSwitches, though, because there is no interconnection between vSwitches on ESX. I don't know why, but a lot of you guys seem to assert the opposite. Is there a buggy doc somewhere we should be made aware of?
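The forwarding rule being corrected here boils down to one comparison. A minimal sketch of that rule as stated in the answer (VM attributes and names are made up for illustration; real vSwitch forwarding also considers VLAN tags and MAC tables):

```python
def delivered_internally(src_vm, dst_vm):
    """Frames stay inside the vSwitch iff both VMs share the same
    vSwitch; being on different port groups does NOT force the
    traffic onto the wire."""
    return src_vm["vswitch"] == dst_vm["vswitch"]

vm1 = {"vswitch": "vSwitch0", "portgroup": "PG-A"}
vm2 = {"vswitch": "vSwitch0", "portgroup": "PG-B"}  # different port group
vm3 = {"vswitch": "vSwitch1", "portgroup": "PG-A"}  # different vSwitch
```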
So even though I have load balancing set to Route Based on the Originating Port ID at my vSwitch, it's not enabled for any of my VMs in the VM Network port group because the port group has the Load Balancing checkbox unchecked?

Hmmm, my bad: you do have load balancing enabled. The unchecked checkbox only means you are not overriding the vSwitch's settings, and are thus inheriting them. So what you see appearing greyed out is actually your current setting for the port group. If you ever want to apply a different policy on a port group, you just have to check the checkbox and force your desired policy for the particular port group whose properties you're editing. I'm not that familiar with VC; I usually prefer command-line tools, so bear with my tiny mistake. Hope this helped.
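The inheritance behavior described above is a simple override-or-inherit rule. A sketch, with hypothetical names (this is not the VC API, just the logic):

```python
def effective_policy(vswitch_policy, portgroup_override=None):
    """A port group inherits the vSwitch's policy unless the override
    checkbox is ticked, i.e. a policy is forced at the port-group level."""
    if portgroup_override is not None:
        return portgroup_override   # checkbox checked: forced policy
    return vswitch_policy           # checkbox unchecked: inherited

# Unchecked checkbox: the greyed-out value IS the effective setting.
inherited = effective_policy("route-by-port-id")
# Checked checkbox: the port group forces its own policy.
forced = effective_policy("route-by-port-id", "route-by-src-mac")
```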
This currently isn't possible, sorry. The list of supported nics is clearly defined in the HCL.
Is there a way of throttling the amount of network bandwidth that an individual VM can consume? I see I can do this at the port group or vSwitch level, but does that shape the traffic for individual streams leaving the group/switch, or does it treat all the traffic from the VMs connected to it as one stream, therefore in my case flooding out lower-I/O VMs?

I don't quite follow your thoughts here, but let me try to explain shaping in a simpler way. When setting shaping parameters, you are shaping traffic going from a VM to the virtual switch. That's it. This means:

- internal flows (VM-to-VM) will get shaped, because traffic from the source VM is shaped;
- external flows (VM-to-uplink) will get shaped, because traffic from the source VM is shaped.

Shaping never occurs on the uplinks: they always perform at their maximum capacity.
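The per-VM shaping described above is commonly implemented as a token bucket. This is a generic sketch of that mechanism, not ESX's actual implementation; the rate and burst numbers are arbitrary.

```python
class TokenBucket:
    """Minimal token bucket: tokens refill at `rate` units per second up
    to `burst`; a frame may leave the VM only if enough tokens remain."""

    def __init__(self, rate, burst):
        self.rate = rate        # average bytes/sec allowed out of the VM
        self.burst = burst      # max bytes that may leave in one burst
        self.tokens = burst     # bucket starts full
        self.last = 0.0         # timestamp of the last decision

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True         # frame forwarded toward the vSwitch
        return False            # frame dropped/delayed by the shaper

# One bucket per VM: only traffic *leaving* this VM is throttled,
# which is why both internal and external flows end up shaped.
tb = TokenBucket(rate=1000, burst=1500)
```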
So when the is used, ESX will balance each VM's NIC onto either nic0 or nic1, correct?

Correct.

So when the term is used, we are talking about the VM's MAC, correct?

Well, whatever MAC is put in the "source" field of the Ethernet header, which will likely be the MAC address the vnic is using.
These are parentheses, not chevrons. An even simpler solution is to directly type "uptime".