Hi, my ESX host (with VMotion + HA) has 4 NICs. Currently I have my console connected to one vSwitch with 1 NIC, and a second vSwitch with 3 NICs for the VMs.
What's the best practice to get a redundant console connection?
1 --> Create just one vSwitch for the console and the VMs?
2 --> Add a second NIC to the console vSwitch and leave 2 NICs for the VMs?
3 --> Create a second console linked to the second vSwitch?
4 --> Connect the console to 2 vSwitches at the same time (if that's even possible)?
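(To be clear about what I mean by option 2, it would look roughly like this from the service console; just a sketch with example names, assuming ESX 3.x esxcfg tools, vSwitch0 = console vSwitch, vSwitch1 = VM vSwitch, and vmnic3 as the NIC that gets moved:)

    esxcfg-vswitch -U vmnic3 vSwitch1    # unlink one uplink from the VM vSwitch
    esxcfg-vswitch -L vmnic3 vSwitch0    # link it to the console vSwitch as a second uplink
    esxcfg-vswitch -l                    # verify both vSwitches and their uplinks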
Regards
I kept my config simple:
SC - vSwitch0: vmnic0
VMotion - vSwitch1: vmnic1
VM Networks - vSwitch2: vmnic2, vmnic3
Although for redundancy you could team NICs on all the vSwitches and make sure they are connected to different physical switches, assuming you have the infrastructure to support this.
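From the COS that layout would be built roughly like this (a sketch only; the IP is an example, and a default ESX 3.x install already creates vSwitch0, the Service Console port group and vswif0):

    # vSwitch0: Service Console (usually created by the installer already)
    esxcfg-vswitch -L vmnic0 vSwitch0
    # vSwitch1: VMotion
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vmknic -a -i 192.168.2.10 -n 255.255.255.0 "VMotion"   # example IP; enable VMotion on it in the VI Client
    # vSwitch2: VM networks with two uplinks
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    esxcfg-vswitch -A "VM Network" vSwitch2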
Thanks, but in your case you have no redundancy on the service console. That means if a problem occurs with your vmnic0, HA stops your VMs...
Yes, I know, it's not my ideal config. I try to team the SC NICs when using HA if I have the infrastructure to support it.
Why not create a vSwitch with the SC and VMotion together so it has 2 NICs?
How many VMs do you have on your vSwitch?
Hello,
On the couple of limited-NIC machines I have, we do this:
SC and virtual machines: vSwitch0, pnic1, pnic2, pnic3
VMotion: vSwitch1, pnic4
- We don't, but with the above layout you could set up your VM port group(s) to use pnic1 and pnic2 with pnic3 only as standby, then set up the SC to use pnic3 as active and the other one or two as standby.
I don't like mixing any other traffic with VMotion.
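Roughly, in esxcfg terms (a sketch; I'm assuming pnic1-4 map to vmnic0-3 and that vSwitch0 already carries the Service Console, and the IP is an example):

    # vSwitch0: SC plus VMs, three uplinks
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch0
    esxcfg-vswitch -L vmnic2 vSwitch0
    esxcfg-vswitch -A "VM Network" vSwitch0
    # vSwitch1: dedicated VMotion
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 "VMotion"
    # The active/standby order (VMs active on the first two uplinks, SC active on
    # the third) is set per port group in the VI Client NIC Teaming tab;
    # there is no simple esxcfg one-liner for it in ESX 3.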
What is the throughput of a gigabit card? What's the average number of VMs per gigabit NIC?
Regards
Typically I would have 12 VMs using the two physical NICs assigned to my VM vSwitch.
The number of VMs per NIC is very dependent on your network load; there is no hard and fast rule.
If you can, measure your current throughput in the physical world and try to extrapolate to give you an estimate to work with.
We have servers where we get 16 virtual machines on one pNIC and are not stressing the pipe.
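As a back-of-the-envelope example (the numbers are made up; measure your own averages first):

    # Assume ~50 Mbit/s measured average per server and ~800 Mbit/s of
    # usable capacity on a GigE uplink
    echo $((800 / 50))    # -> 16 VMs per gigabit uplink before the pipe gets stressed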
My practice, normally using HP & Dell systems with 2 built-in Broadcoms:
bcm0, vswif (COS)
bcm1, vmotion
I usually add 2x Intel Pro/1000MT Quad Port NICs and:
- NIC1, Port 1 + NIC2, Port 1 are bound to vSwitch1 (you can add more if needed)
- NIC1, Port 2 + NIC2, Port 2 are bound to vSwitch2
The point is to have a port from each pNIC on your vSwitch to remove single points of failure.
Additionally, I bind an extra port from the Intel to vswif (COS) on the advice of a very respected VMware tech. I've heard that binding 2 different types of NICs into a vSwitch can cause problems, but I have yet to see any.
I also leave a "test" vSwitch with a single port for crazy developer VMs and/or troubleshooting. This configuration has always worked well. 10 NICs per host seems high, but you'd be amazed how fast they get used up, and the cost is really not that high.
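The binding pattern, spelled out as a sketch (the vmnic numbers are only examples of "one port per quad card on each vSwitch"):

    esxcfg-vswitch -L vmnic2 vSwitch1    # quad card 1, port 1
    esxcfg-vswitch -L vmnic6 vSwitch1    # quad card 2, port 1
    esxcfg-vswitch -L vmnic3 vSwitch2    # quad card 1, port 2
    esxcfg-vswitch -L vmnic7 vSwitch2    # quad card 2, port 2
    # Losing a whole quad card then never takes down either vSwitch.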
Ummm.. this is the opposite of performance.
If you have a car and you put nice tires on the rear and crappy tires on the front, and it's front-wheel drive, it makes no sense.
Same thing with NICs. We don't use the internal NICs at all, and you are putting 2 critical things on internal NICs, which don't have nearly the performance and which sap the CPU during use.
I would at least make the Broadcoms the VM switches (because, believe it or not, the VMs aren't as high a priority as the service console and VMotion). Intels are designed for speed and CPU offloading, so the system doesn't get hit while they sustain high throughput.
You will find that your Broadcoms are not well suited for high traffic; they will be sporadic in performance, with lots of peaks and valleys, versus the Intel, which will perform much better.
If you want to use the internal NICs, at least make them failover for your VM switches, but don't use them for ESX high-priority ports like console, kernel, or VMotion. That is a mistake.
No debate, gigabit is REQUIRED for VMotion, so if that is in your plan you must use it.
Help me understand how the two 1 Gbps Broadcoms that both HP and Dell build into their upper-end systems aren't "performance" NICs...
How does a COS NIC need more bandwidth than a NIC port carrying 5-6 VMs?
Depends on the card. Intel makes the best add-in NICs.
Internal NICs aren't really that bad, but I definitely see a difference in performance, as much as a 30% improvement.
Not only that, but the internal NICs are part of the system board, and they will take CPU.
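You can check which vmnics are the onboard Broadcoms and which are the add-in Intels from the COS:

    esxcfg-nics -l    # lists each vmnic with its driver, speed and description;
                      # tg3/bnx2 drivers are typically the onboard Broadcoms, e1000 the Intel add-ins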
If you have 4 ports, one is the SC/VMotion/Kernel port (you aren't going to overload this one port, especially if it's a gig port). The rest you make VM switches, 2 on 1 switch, and maybe make 1 a failover; it's not really that complicated.
I have 4 segments, so I have 1 for SC/VMotion/Kernel and VMs (the first 3 are very low priority, since they don't really happen that often), and this is a low-traffic segment anyway.
The other 3 NICs are 2 on 1 segment and 1 on another; this way ALL of my ESX servers are on 3 different segments.
Also, let's break this down. We are talking about how many VMs, 15, maybe 20?
Even on high-use segments, we have 24-port switches on a 10/100 segment, and we never had a performance issue, even when these were very basic, non-commercial switches, so the VMs won't need lots of bandwidth anyway.
The ESX server, however, NEEDS a 1 gig port on a highly available, high-performance switch (NIC port) for its own uses.
In 25 years of support, I have had 1, just 1, NIC go bad, and as it turned out, the NIC wasn't bad: someone had manhandled the cable, and the cable wouldn't quite stay in the port...
I have worked on more machines than I can count, and in all that time, the NETWORK hasn't been any of my problems. ZERO NIC hardware failures.
Drivers are another issue...
Broadcom isn't Intel. Broadcoms suck for performance; google Broadcom if you don't believe me.
Test them yourself. Put an Intel on a 1 gig port, put a Broadcom on a 1 gig port.
Do an FTP transfer and watch the time it takes; Intel will win hands down EVERY time, and by a MAJOR margin.
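A quick and dirty way to compare from the COS (the file, user and host are just placeholders):

    time scp /vmimages/testfile.iso user@somehost:/tmp/    # run it once over the Intel uplink,
                                                           # then once over the Broadcom, compare the wall-clock time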
HP / Dell include onboard video too; do you think it's ready for games? Uh... no.
It's there for simple connectivity, not HIGH-performance connectivity; that's why they give you options for add-in boards.
They are in the business of making money; they aren't going to give you high-end NICs on a system board. They want to keep the machines as affordable as possible.
Intel is performance, Broadcom is not. Plain and simple.
"How does a COS NIC need more bandwidth than a NIC port carrying 5-6 VMs? "
Because that's where you are transferring files and doing VMotion; why not make these the higher-performing ports?
I want to know that when I connect to a machine, it will be over my BEST port. Also, I share my COS ports with my VM switch; since I'm not using the COS all the time, there isn't a real reason to make them dedicated either...
That's why I can leave the internal NICs off.
Hello,
There is a very good reason to make your SC port dedicated instead of sharing it with your VMs, and that is purely security. If you can gain access to the SC, you have gained access to everything, even with ESX v3.
The recommendation from a security perspective has always been to make the SC vSwitch dedicated to administration. If there are administration VMs, then they too can share this vSwitch.
With just 4 pNICs I would do the following (a rough sketch follows the list):
1 for SC (dedicated to an administrative network shared with VC)
1 for vMotion (dedicated to its OWN VLAN/Physical Switch)
2 for VM Network (firewalled at the very least but available to the rest of the organization/DMZ/etc. Treat like a normal network)
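A sketch of that 4-NIC split in esxcfg terms (the vmnic numbers, IP and VLAN ID are only examples):

    esxcfg-vswitch -L vmnic0 vSwitch0                 # SC alone on the admin vSwitch
    esxcfg-vswitch -a vSwitch1                        # dedicated VMotion vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -A "VMotion" vSwitch1
    esxcfg-vswitch -v 100 -p "VMotion" vSwitch1       # put VMotion on its own VLAN
    esxcfg-vmknic -a -i 10.0.100.11 -n 255.255.255.0 "VMotion"
    esxcfg-vswitch -a vSwitch2                        # VM traffic with two uplinks
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    esxcfg-vswitch -A "VM Network" vSwitch2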
Your SC should have high bandwidth (GigE at least), as it is where you do most of your transfers, backups, and other administrivia. There is a religious debate over Broadcom vs. Intel; use what works for you and what you have available. In some cases you are stuck with what the system provides (blades, built-ins, etc.).
The only caveat is that if you want to do NIC teaming, the NICs should be the same type and model. That way there are no possible issues.
You really do want your SC to be as secure as possible as it is literally the door to the virtual data center. A non-dedicated vSwitch gives a hacker even more attack points. You want to limit attack points to those you can easily monitor.
Best regards,
Edward
"There is a very good reason to make your SC port dedicated instead of sharing with your VMs and that is for purely security reasons. If you can gain access to the SC you have gained access to everything even with ESX v3. "
Well, every environment is different; we have a development segment, which is restricted. Plus, maybe in a paranoid universe you would keep this separate, but unless your ESX is on the Internet, there won't be a problem with security. I don't think anyone on here has ESX servers that are publicly accessible, and if they do, that would be a one-off thing, not general use.
Generally speaking, people won't give OUT the IP address of the SC anyway, only access to a VC console server or a VM, not the main connection to the server.
Besides, as I have done so many times in other posts, and as a long-time Windows/Microsoft bigot, I have said repeatedly and on many occasions: since Microsoft has all the security bulletins and Windows has so many vulnerabilities, Linux is bulletproof!
There isn't a problem with hacking and security, and if there is, then maybe we need to rethink using Linux, huh? Hahaha... Yeah, if you answer this, then you give a point to Microsoft users everywhere... Linux is way more secure... at least that's what EVERYONE keeps insisting.
Keeping a NIC separate is bad practice, just like keeping one NIC on standby is bad practice; the only thing you should keep on standby is a hard drive. But everyone has a different method of doing things, and I know we will not all agree on one way to do things.
I gave an opinion, and you are giving yours. We can all appreciate the differences, as this gives us more perspective, but I wouldn't ever follow your methodology, because it's wasting resources. My method works fine, and since we can't afford to have stuff just sitting and waiting in case something happens, that makes zero sense to me.
Especially since I researched the BEST hardware available and the drivers, and keep up with patches and drivers, there should NOT be a problem. If there is, that's what a 4-hour response time from Dell is for: fix the problem before it becomes critical.
"Your SC should have a high bandwidth (GigE at least) as it is where you do most of your transfers, backups, and other administrivia. There is a religious debate over broadcom vs intel. Use which works for you and what you have available. In some cases you are stuck with what the system provides (Blades, builtins, etc) "
There is no debate: Broadcom is included as a cheap solution to get a simple server up and running. Intel is way better for performance; the debate is cost vs. performance. If you don't think you need it, don't buy the add-in cards.
EVERY machine has add-in slots, and if it doesn't, I question the technical expertise behind buying the machine in the first place. Buying a machine without expansion slots is like buying a TV without external connections: you are very limited in what you can do.
"The only caveat is that if you want to do NIC teaming, the NICs should be the same type and model. That way there is no possible issues. "
That's why I suggested FAILOVER and not TEAMING. You only use the internal NICs as a LAST resort. That's the way I build ALL my machines; we have seen the numbers, and numbers do not lie.
You buy cheap, you get cheap. You buy good, you pay more, but you get what you are willing to PAY for.
"You really do want your SC to be as secure as possible as it is literally the door to the virtual data center. A non-dedicated vSwitch gives a hacker even more attack points. You want to limit attack points to those you can easily monitor"
That's why we have firewalls and restricted access...
\- "How does a COS NIC need more bandwidth than a NIC port carrying 5-6 VMs? "
because that's where you are transferring files, doing Vmotion, why not make these the higher performing ports? -
So it's the COS NIC doing the VMotion, and not the VMotion NIC doing VMotion?
I googled Broadcom; their lack-of-performance links just don't jump out at you. Servers have video built in, but servers aren't designed for high-performance video, hence their video cards are not high-performance.
This post was supposed to be about various recommendations for the original poster, not an argument over Intel vs. Broadcom. Besides, I have yet to see a single Fortune 500 shop that doesn't use built-in Broadcoms on their VI servers.
All NICs use some CPU unless you use TOE, but ESX does not yet support that.
If a server has 4 NICs, I set it up like this:
NIC 1, teamed with NIC 3: service console active / VMkernel standby
NIC 2, teamed with NIC 4: VM network, if possible etherchannel
NIC 3, teamed with NIC 1: VMkernel active / service console standby
NIC 4, teamed with NIC 2: VM network, if possible etherchannel
Most of the time a 4-NIC setup has at least 2 different physical NICs (onboard / PCI), and I always use one onboard and one PCI NIC in each team.
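In esxcfg terms it looks something like this (a sketch; vmnic0/vmnic2 stand in for NIC 1/NIC 3, vmnic1/vmnic3 for NIC 2/NIC 4, and the IP is an example):

    # vSwitch0: Service Console + VMkernel sharing two uplinks
    esxcfg-vswitch -L vmnic0 vSwitch0         # "NIC 1"
    esxcfg-vswitch -L vmnic2 vSwitch0         # "NIC 3"
    esxcfg-vswitch -A "VMkernel" vSwitch0
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMkernel"
    # vSwitch1: VM network team
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1         # "NIC 2"
    esxcfg-vswitch -L vmnic3 vSwitch1         # "NIC 4"
    esxcfg-vswitch -A "VM Network" vSwitch1
    # The per-port-group active/standby order, and the IP-hash load balancing an
    # etherchannel needs on the physical switch side, are set in the VI Client.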
