In theory it should be, but all the factors need to be considered. When two VMs on the same vSwitch communicate, the traffic is handled by the host CPU. Depending on the configuration of the ESX server and the VMs running on it, the performance could be good or it could be bad. If the server is heavily loaded with VMs and has high CPU utilization, you can bet it will affect network performance. There are a lot of factors at play and each situation is unique, so you really have to look at the performance of the overall server. A common mistake that causes high CPU ready times, which will affect performance, is giving all VMs multiple vCPUs just because you can. This is a great way to waste resources and create contention on the server, which in turn hurts performance between guests communicating over the vSwitch. There are too many scenarios to cover them all, but you get the idea.
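Since CPU ready time is the symptom called out above, here is a minimal sketch of how the raw cpu.ready.summation counter (milliseconds of ready time per sampling interval, as reported by vCenter/esxtop) can be turned into a ready percentage. The 20-second interval, the per-vCPU normalization, and the example numbers are assumptions for illustration, not figures from this thread:

```python
# Minimal sketch: convert a VM's cpu.ready.summation sample (milliseconds of
# ready time accumulated over a sampling interval) into a ready percentage.
# Assumed here: a 20-second sampling interval and made-up example values.

def cpu_ready_percent(ready_ms: float, interval_s: int = 20, num_vcpus: int = 1) -> float:
    """Percent of the interval the VM spent ready to run but waiting for a physical CPU."""
    return (ready_ms / (interval_s * 1000.0 * num_vcpus)) * 100.0

if __name__ == "__main__":
    # e.g. a 2-vCPU VM that accumulated 4,000 ms of ready time in one 20 s sample
    print(f"ready ~= {cpu_ready_percent(4000, interval_s=20, num_vcpus=2):.1f}% per vCPU")
```

A rule of thumb often quoted in the communities is that sustained ready time above roughly 5% per vCPU is worth investigating, which is exactly what handing every VM extra vCPUs tends to produce.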
If you found this helpful, remember to assign points.
I didn't really read through the thread you linked to before I posted my response, but I just finished going through it and that's some interesting stuff..... I have to say there are some die-hard people out there with a lot of time on their hands testing this stuff..... if anyone has spare time, please share, I could use some.....
That most likely depends on the amount of money you are willing to spend:
New Server: $12K
New SAN: $100K
New Core Switch: $40K
Spare Time: "Priceless"
In theory, yes, the internal-only switch should be faster; however, it is a software device that is sharing processor time with the guests, the Hypervisor, and the console.
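One rough way to see that processor-time cost for yourself is to push traffic between two guests on the internal-only vSwitch while watching the host in esxtop. The sketch below uses plain Python sockets rather than a proper tool like iperf, and the port and peer IP are placeholders for whatever your test VMs use:

```python
import socket
import sys
import time

TEST_PORT = 5001          # placeholder port, pick anything open between the test VMs
CHUNK = b"\x00" * 65536   # 64 KiB per send

def serve() -> None:
    # Run inside the receiving guest: accept one connection and discard the data.
    with socket.create_server(("", TEST_PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass

def send(peer_ip: str, seconds: float = 10.0) -> None:
    # Run inside the sending guest: push data for `seconds` and report throughput.
    sent = 0
    with socket.create_connection((peer_ip, TEST_PORT)) as sock:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
    print(f"~{sent * 8 / seconds / 1e6:.0f} Mbit/s sustained")

if __name__ == "__main__":
    # usage: python vswitch_test.py serve
    #        python vswitch_test.py send 192.168.100.11   (placeholder peer IP)
    if sys.argv[1] == "serve":
        serve()
    else:
        send(sys.argv[2])
```

If host CPU use climbs along with the throughput while this runs, you are seeing exactly the "software device sharing processor time" effect described above.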
Tom Howarth
VMware Communities User Moderator