I recently inherited two ESX servers (2.5.2 and 3.0.1), and it looks like they were set up with one virtual switch, with the SC (service console) and the VMs all going through that same cable. They say they are having a lot of performance problems - could that be related to the traffic not being separated?
How many VMs are on here?
Is it a 1 Gb network?
Maybe you can check the network statistics (see the sketch below).
In my test lab I have ESX with one NIC for everything - it works.
The network is usually one of the last bottlenecks, IMHO.
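If it helps, on ESX 3.x something like this from the service console should show the link state and live traffic (the 2.5.x tooling differs a bit, so treat this as a sketch):

    esxcfg-nics -l    # list physical NICs with their negotiated speed and duplex (ESX 3.x)
    esxtop            # then press 'n' to switch to the per-port network view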
Ohh, it is really bad: 10/100 at half duplex.
21 VMs.
So I guess that could be the problem here - can you see any network statistics?
The network usage is averaging 77 kbps.
Well, I've never seen 21 VMs all running over one 100 Mbit NIC.
Have you checked your /var/log/vmkernel for auto-negotiation problems?
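Something like this from the service console should turn up renegotiation messages (exact wording varies by NIC driver; the path shown is the ESX 3.x default):

    grep -i duplex /var/log/vmkernel            # duplex mismatch / renegotiation messages
    grep -i link /var/log/vmkernel | tail -20   # recent link up/down events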
Yes, I did check it - thanks for the replies. It became a no-brainer, and we moved to gigabit switches.
Even just forcing 100 full duplex, rather than buying new switches, would probably have solved most of the performance problems immediately. Half duplex?!? The link can only do one thing at a time - send or receive!
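For anyone finding this thread later: on ESX 3.x you can hard-set speed and duplex from the service console. A rough example (vmnic0 is just a placeholder name - and the switch port must be set to match, or you will create a duplex mismatch):

    esxcfg-nics -s 100 -d full vmnic0    # force vmnic0 to 100 Mbit, full duplex
    esxcfg-nics -l                       # verify the new setting took effect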