I will have 4 blades in my Dell M1000 chassis, and some of my guests need access to our DMZ, which means they have to be physically connected through a port on our PIX firewall. Can I (should I?) connect each blade to my DMZ switch, create a distributed DMZ vSwitch, uplink that vSwitch to the DMZ network card, and connect my DMZ guests to it? Would this give my DMZ guests the ability to vMotion across my cluster, or will I be forced to dedicate one blade as a DMZ blade for my DMZ guests?
Also, I'd welcome any best-practice recommendations.
BTW, none of the equipment has been delivered yet, or I would have just tried this scenario myself.
For network security reasons, we keep a separate cluster dedicated to the DMZ, with every ESX host's Service Console (SC), VMkernel, and VM networking all on DMZ networks. But if your security policy allows mixing the traffic, you can configure a vSwitch with the DMZ vmnics attached for the specific VMs that need DMZ connectivity. The key rule is that the SC and VMkernel interfaces must never share LAN connectivity with a guest OS.
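As a rough sketch, a DMZ-only vSwitch like that can be built from the Service Console with esxcfg-vswitch; the names here (vSwitchDMZ, the "DMZ" portgroup, uplink vmnic2, VLAN 100) are illustrative assumptions, not required values:

```shell
# Create a new vSwitch dedicated to DMZ traffic
esxcfg-vswitch -a vSwitchDMZ

# Attach the physical NIC that is cabled to the DMZ switch / PIX port
# (vmnic2 is an assumption -- use whichever NIC you cable to the DMZ)
esxcfg-vswitch -L vmnic2 vSwitchDMZ

# Add a VM portgroup for the DMZ guests to connect to
esxcfg-vswitch -A "DMZ" vSwitchDMZ

# Optional: tag the portgroup with a VLAN ID if your DMZ is trunked
esxcfg-vswitch -v 100 -p "DMZ" vSwitchDMZ
```

Note that no SC or VMkernel portgroup is placed on this vSwitch, which is what keeps the management interfaces off the DMZ.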
Also, in this scenario you should be able to vMotion between the ESX hosts of the cluster, provided an identical vSwitch/portgroup is configured on the other ESX hosts as well. During vMotion the VMkernel interface on one host communicates with the VMkernel interface on the other, and your SC and VMkernel would remain on the normal production networks.
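A quick way to confirm the configuration matches on each host is to list the networking layout from the Service Console (the "DMZ" portgroup name below is the same illustrative assumption as above):

```shell
# List all vSwitches on this host, with their portgroups,
# VLAN IDs, and attached uplinks. Run this on every host in
# the cluster and compare the output.
esxcfg-vswitch -l
```

vMotion will only preserve a guest's DMZ connectivity if the portgroup label (e.g. "DMZ") is spelled identically on every host; portgroup names are case-sensitive.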
(Preparing for VCP 4)