Hello,
I am working on my first cluster design and would love some input from the more experienced folks here on how I am doing so far. The link is to a .jpg exported from Visio.
What I am trying to accomplish is:
1. Give plenty of bandwidth to the 30-60 VMs we will have.
2. Split the NICs across 2 Cisco 3750 switches for resiliency.
I am sure there are other things I should be thinking about as well. That is part of the reason for starting this thread.
Eventually I would like to post a completed design for people to use if they have a similar situation.
Cheers,
R0v3r
Give the HA cluster bucket loads of RAM first of all, then a minimum of 6 NICs per host (before any iSCSI or NFS considerations), with dual pathing and redundancy wherever you can - that should be about right.
Hi,
Good to see that you intend to use two switches in your design; HA environments especially should have redundancy, in my opinion. Furthermore, why do you have different network connectivity on different hosts? It is always best practice to connect all ESX hosts in exactly the same manner. I would suggest using a single vSwitch in your design, teaming all available NICs together, all with dot1q trunks assigned to them. Then specify preferred paths over the various links, and make sure to at least give VMotion its own link in the preferred paths. After that you could simply add or remove NICs as required. This gives you maximum flexibility and availability.
There are thousands of other perfectly valid configs that will work very well too. It's just the setup I personally like best...
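For what it is worth, a single-vSwitch setup along those lines might be put together from the service console roughly as follows (ESX 3.x esxcfg syntax; the vmnic names, port group names and VLAN IDs are only placeholders, not anything from your design):

  # vSwitch0 already exists on a default ESX install, so just team the remaining pNICs onto it
  esxcfg-vswitch -L vmnic1 vSwitch0
  esxcfg-vswitch -L vmnic2 vSwitch0
  # ...and so on for the rest of the vmnics (esxcfg-nics -l shows what is present)

  # Port groups as dot1q-tagged VLANs on the trunk (VLAN IDs are placeholders)
  esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
  esxcfg-vswitch -A "VMotion" vSwitch0
  esxcfg-vswitch -v 20 -p "VMotion" vSwitch0
  esxcfg-vswitch -A "VM Network" vSwitch0
  esxcfg-vswitch -v 30 -p "VM Network" vSwitch0

  # Check the result
  esxcfg-vswitch -l

The preferred paths (the active/standby NIC order per port group, with VMotion getting its own active link) are then set on each port group's NIC Teaming policy in the VI Client.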
Thanks for the comments. I should have finished the diagram before I posted it. I cleaned it up this morning to better show what I am attempting to accomplish.
Each server has:
- 10 NICs (2 quad-port PCIe cards plus the built-in dual port)
- 2 VMotion connections (1 to pSwitch A and 1 to pSwitch B)
- 2 Service Console connections (1 to pSwitch A and 1 to pSwitch B)
I spread the VM Network connections across the 3 physical I/O devices (the built-in ports plus the two PCIe cards) for resiliency, although I do not yet know enough about the server design to say whether the PCIe cards should be separated further (for example, putting one on the riser card).
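To make that concrete, the VMotion and Service Console pieces might be created along these lines from the ESX service console (a rough sketch only; the vmnic numbers, IP addresses and port group names are made-up placeholders, so adjust to the real hardware):

  # Check which vmnic maps to which physical port first
  esxcfg-nics -l

  # Service Console vSwitch: one uplink to pSwitch A, one to pSwitch B
  # (vSwitch0 and vswif0 usually exist already from the install; shown for completeness)
  esxcfg-vswitch -a vSwitch0
  esxcfg-vswitch -A "Service Console" vSwitch0
  esxcfg-vswitch -L vmnic0 vSwitch0      # built-in port 1 -> pSwitch A
  esxcfg-vswitch -L vmnic2 vSwitch0      # quad card 1 -> pSwitch B
  esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.10.11 -n 255.255.255.0

  # VMotion vSwitch: again one uplink to each physical switch, from different devices
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -A "VMotion" vSwitch1
  esxcfg-vswitch -L vmnic1 vSwitch1      # built-in port 2 -> pSwitch B
  esxcfg-vswitch -L vmnic6 vSwitch1      # quad card 2 -> pSwitch A
  esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "VMotion"
  # VMotion itself is then enabled on that VMkernel port through the VI Client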
Hello,
> 10 NICs (2 quad-port PCIe cards plus the built-in dual port)
> 2 VMotion connections (1 to pSwitch A and 1 to pSwitch B)
> 2 Service Console connections (1 to pSwitch A and 1 to pSwitch B)
Use different vSwitches for VMotion and the SC for added security and performance.
> I spread the VM Network connections across the 3 physical I/O devices (the built-in ports plus the two PCIe cards) for resiliency, although I do not yet know enough about the server design to say whether the PCIe cards should be separated further (for example, putting one on the riser card).
This works out quite well from a security and performance perspective; you will end up with 5 vSwitches as a result, and that is a good thing. Using just one vSwitch would waste pNICs, since having more than 2 NICs in a team for load balancing causes issues, and the remaining NICs would be there purely for redundancy.
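As a rough sketch of the VM Network side of that 5-vSwitch layout (placeholder vmnic numbers again, and assuming the Service Console and VMotion vSwitches have already taken one built-in port and one quad-card port each), it could look something like:

  # The remaining six pNICs become three VM Network vSwitches, each with one
  # uplink to pSwitch A and one to pSwitch B, taken from different quad cards
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -A "VM Network 1" vSwitch2
  esxcfg-vswitch -L vmnic3 vSwitch2      # quad card 1 -> pSwitch A
  esxcfg-vswitch -L vmnic7 vSwitch2      # quad card 2 -> pSwitch B

  esxcfg-vswitch -a vSwitch3
  esxcfg-vswitch -A "VM Network 2" vSwitch3
  esxcfg-vswitch -L vmnic4 vSwitch3
  esxcfg-vswitch -L vmnic8 vSwitch3

  esxcfg-vswitch -a vSwitch4
  esxcfg-vswitch -A "VM Network 3" vSwitch4
  esxcfg-vswitch -L vmnic5 vSwitch4
  esxcfg-vswitch -L vmnic9 vSwitch4

Together with the Service Console and VMotion vSwitches, that gives the five vSwitches mentioned above.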
I like your network layout: secure, high performance, and clean.
Best regards,
Edward L. Haletky, author of the forthcoming 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074