MikeWright1971
Contributor

VMware Infrastructure Design

Hiya

After meetings with our network team I have come up with this design. They proposed I use 4 switches, as shown in the diagram. But since the storage network is isolated from the VM network, how would I connect an iSCSI device directly to a VM? All our storage devices go on the storage network.

Is this the way to go or is something wrong?

warrendesign1.png

cheers Mike

5 Replies
bayupw
Leadership

Just to clarify, are you using 2x 1G NICs & 4x 10G NICs?

Do the VMs need to access the iSCSI storage directly, or will it just be ESXi accessing the iSCSI storage as a datastore?

If the VMs need direct access to the storage, you can create a 2nd virtual network adapter on each VM and connect it to the iSCSI network.

When using a 2nd NIC, you might want to secure the iSCSI network to make sure it is not used as a backdoor network and that VMs cannot reach other VMs over the iSCSI network.

Bayu Wibowo | VCIX6-DCV/NV
Author of VMware NSX Cookbook http://bit.ly/NSXCookbook
https://github.com/bayupw/PowerNSX-Scripts
https://nz.linkedin.com/in/bayupw | twitter @bayupw
MikeWright1971
Contributor

Hiya,

There was a mistake in the last design: the VM VLAN isn't a trunk (just a single VLAN).

We are going to use VST for the tagging. All except the MGMT & VM VLANs are on the private network.

The VMs will see the other VLANs on the corporate network via routing at the switches.

But this storage network is isolated, and our other storage arrays in the company (on a separate VLAN) are not routable from the server network. The existing physical servers use an additional NIC to connect directly to the existing storage arrays.

As a result, VMs in this design won't be able to access the storage array, as it is isolated on another vSwitch, and they won't be able to connect to the company's existing storage arrays, as those aren't routable.


PS: We are only going to have 3 hosts and 25 VMs max.

Modified design I sent you to show a single VLAN for VMs: warrendesign1.png

The design I prefer, with 2 switches: the storage would be placed on the same subnet (VLAN) as the existing storage.

And the VMs would be able to reach the storage for direct iSCSI if needed, as it would be routable on the company network. myfinal.png

Yes, I'm using a mixture of 1Gb & 10Gb NICs. What are your thoughts?

Cheers Mike

bayupw
Leadership

You can still keep that setup, create 2 NICs on the VMs (just like the existing physical servers with an additional NIC), and use VST.

I have added a simple VM-to-PortGroup-to-vSwitch/Distributed Switch mapping below.

With separate uplinks, it is usually easier and more manageable to create separate vSS/vDS: one for the routed/corporate network (green in the diagram below) and one for the isolated VLANs (blue in the diagram below). You can even create another vSS/vDS for further isolated VLANs, separating vMotion from FT+iSCSI, as in my example below with three vSS/vDS.

You can then create a PortGroup with VLAN 5 on green vSS/vDS and a PortGroup with VLAN 5 on blue vSS/vDS.

The VM will have 2 NICs (no VLAN tagging on the VM), one connected to VLAN 5 and one connected to VLAN 4.

The VM will have 2 IP addresses: one for the routed network and one for the isolated network on VLAN 4. The latter is directly connected to the iSCSI network, so there is no need to add static routes.
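On the ESXi host, this layout comes down to VST-tagged port groups on two standard switches. A minimal esxcli sketch, assuming hypothetical names (vSwitch0 = routed "green" switch, vSwitch1 = isolated "blue" switch, VLAN 5 routed, VLAN 4 iSCSI, per the example above):

```shell
# Routed port group, VLAN 5, on the "green" vSwitch
esxcli network vswitch standard portgroup add \
    --portgroup-name=VM-Routed --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set \
    --portgroup-name=VM-Routed --vlan-id=5

# Isolated iSCSI port group, VLAN 4, on the "blue" vSwitch
esxcli network vswitch standard portgroup add \
    --portgroup-name=VM-iSCSI --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set \
    --portgroup-name=VM-iSCSI --vlan-id=4
```

The VM's first vNIC then connects to VM-Routed and its second vNIC to VM-iSCSI (via the vSphere Client); the tagging happens at the port group, so the guest OS sees untagged frames on both.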

networkdesign3.PNG

This way your iSCSI traffic stays isolated, which reduces the security and performance/latency risks on iSCSI.

iSCSI storage networks: Full separation or not? - TechRepublic

"Dedicated VLAN and fully routed — This configuration would assign a dedicated VLAN over existing network gear, and it would be routed as any other VLAN in the environment.

This configuration would permit other traffic to potentially access the network, and it may increase the risk of latency"

Bayu Wibowo | VCIX6-DCV/NV
Author of VMware NSX Cookbook http://bit.ly/NSXCookbook
https://github.com/bayupw/PowerNSX-Scripts
https://nz.linkedin.com/in/bayupw | twitter @bayupw
MikeWright1971
Contributor

Hiya

Thanks for your help!

Is this supported on Standard Switches? We are only using Essentials Plus, which doesn't support distributed switches.

Another question: we are temporarily going to use a Dell PS Series array for the SAN. It has 2 NICs per controller (there are 2 controllers). We therefore have 4 NICs, in redundant pairs, i.e. if eth0 on controller 1 fails it fails over to eth0 on controller 2. They are 1Gb NICs and I'd like to maximise bandwidth if possible (only 2 can be active at a time).

The diagram below shows how Dell would configure the SAN network. We would use MPIO from the ESXi host with port binding on two uplinks, but the recommendation for this is active/standby, which means we would only have one active connection at a time. How would we achieve the 2 active connections shown in the diagram below? santop.png
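One point worth noting on the active/standby guidance: for iSCSI port binding the usual recommendation is per VMkernel port, not per host. Each bound VMkernel port gets exactly one active uplink (the others set to unused), and with two such ports bound to the software iSCSI adapter, MPIO keeps two connections active at once. A sketch with hypothetical names (vmk1/vmk2, vmnic2/vmnic3, vmhba33, and a placeholder device ID):

```shell
# Pin each iSCSI port group to a single active uplink
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

# Bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Use Round Robin so both paths carry I/O (repeat per device;
# the naa. ID below is a placeholder for your LUN)
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
```

With this layout both 1Gb uplinks carry storage traffic simultaneously, matching the two-active-connection diagram, while failover within each redundant pair is still handled by the array's controllers.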

cheers Mike

MikeWright1971
Contributor

Hiya Bayu,

Sorry, I misread your post. You do refer to vSS (vSphere Standard Switch).

I have tested adding an additional vNIC to another port group on a different vSwitch, and this works.

I think I've sorted the issue with the storage.

cheers Mike
