VMware Cloud Community
dickieblack
Contributor

Sanity check for network plan please

Hi All,

I am working with fixed hardware and zero budget to create a production virtual environment and iSCSI storage solution. (After that, I'm aiming to sort out the middle east peace process and world poverty). As I'm finishing writing this, it seems to have gotten quite long and complicated, so please bear with me. Or give up now 🙂

My limited hardware (no budget or option to upgrade) is as follows:

one 4-socket Dell R900 with 4 x 1Gb LOM ports and 2 x dual-port Intel 1Gb NICs as the primary host, running ESXi 4.1;

one dual-socket Dell 1950 with 2 x 1Gb LOM ports and 1 x dual-port Intel 1Gb NIC as a backup host, running ESXi 4.1;

two Dell 2900s with 2 x 1Gb LOM ports and 1 x dual-port Intel 1Gb NIC each, for an iSCSI SAN using the free version of NexentaStor.

What I am hoping to do is run the R900 as the primary host, with the backup only to be used for vMotion-ing critical guests during maintenance. Since the primary has exactly double the pNIC count of the backup, I expect to lose a lot of the network redundancy on the backup host while it is in use, but I also expect this to be infrequent. Hope this makes sense so far.

My network structure is as follows:

vSwitch1: 2 production LANs, physically connected to a set of stacked layer 3 switches (stack A), separated by VLANs

vSwitch2: DMZ, physically connected to a hardware firewall

vSwitch3: storage LAN, physically connected to a set of stacked layer 3 switches (stack B)

I wish to use iSCSI to connect to my datastores, use vMotion, and also have NexentaStor present the LTO-3 tape drive in each of the Dell 2900s as an iSCSI target (a separate issue, but the documentation says it can do this) and connect these to a Windows 2008 guest to back up the guests and user data to tape using Symantec Backup Exec.

To me, this sounds like a lot of potential storage traffic, especially if the backups are going to be dumping about 1.3TB from the SAN to tape across iSCSI. Physically, the storage network comprises 4 Netgear GS724TS stacked smart switches. Hopefully that is sufficient, since again there is no budget for upgrades.
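As a rough sanity check on that backup window, a back-of-envelope calculation (the ~110 MB/s effective rate on a dedicated 1Gb path is my assumption, and LTO-3's ~80 MB/s native drive speed may well be the real bottleneck rather than the network):

```shell
# Back-of-envelope backup window for ~1.3 TB over a single 1 Gb iSCSI path.
# 110 MB/s is an assumed effective rate, not a measured one.
DATA_MB=$(( 1300 * 1000 ))     # 1.3 TB expressed in MB
RATE_MBS=110                   # assumed effective throughput in MB/s
SECS=$(( DATA_MB / RATE_MBS ))
echo "~${SECS} s, roughly $(( SECS / 3600 )) hours"
```

At those numbers a full dump fits comfortably in an overnight window, but any contention on the storage switches (or the tape drive running slower than the wire) stretches it.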

My queries are to do with the different storage traffic and if/how it should be segregated.

  • Should I use one vSwitch for all storage traffic? Or one for vMotion, one for datastore iSCSI and one for backup iSCSI?

Most of what I have read suggests separate vSwitches. However, my thought was to use one vSwitch with several vmkernel ports and VLANs, so that I could set up different active-standby groups amongst the pNICs and get the best redundancy. Are there disadvantages to this?

For instance, if I use only the Intel pNICs for storage, on the primary host I could have datastore iSCSI active on pNICs 0 and 1, with 2 (and 3) as standbys. vMotion could use pNIC 2 as active with 3 as standby, and tape iSCSI could use pNIC 3 as active with 2 as standby. On the backup host, I could have datastore iSCSI use pNIC 0 as active with pNIC 1 as standby, and have vMotion and tape iSCSI use pNIC 1 as active with pNIC 0 as standby.
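For reference, a sketch of that single-vSwitch layout using the ESXi 4.1 console commands. The vmnic names, VLAN IDs and addresses below are made up for illustration; the per-port-group active/standby order itself is set in the vSphere Client (port group > NIC Teaming tab), as esxcfg-vswitch cannot set it, and vMotion is likewise enabled on its vmkernel port in the client:

```shell
# One storage vSwitch with all four Intel pNICs attached (names assumed).
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -L vmnic6 vSwitch3
esxcfg-vswitch -L vmnic7 vSwitch3

# Separate port groups and VLANs per traffic type (VLAN IDs hypothetical).
esxcfg-vswitch -A "iSCSI-Datastore" vSwitch3
esxcfg-vswitch -v 20 -p "iSCSI-Datastore" vSwitch3
esxcfg-vswitch -A "vMotion" vSwitch3
esxcfg-vswitch -v 30 -p "vMotion" vSwitch3
esxcfg-vswitch -A "iSCSI-Tape" vSwitch3
esxcfg-vswitch -v 40 -p "iSCSI-Tape" vSwitch3

# One vmkernel port per port group (addresses hypothetical).
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 "iSCSI-Datastore"
esxcfg-vmknic -a -i 10.0.30.11 -n 255.255.255.0 "vMotion"
esxcfg-vmknic -a -i 10.0.40.11 -n 255.255.255.0 "iSCSI-Tape"
```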

  • Is this arrangement sensible? Backup to tape will run off-peak and vMotions would be limited to planned maintenance, so there should be enough bandwidth for those two networks to use the same pNIC if the backup host is being used.

  • Also, might I be better off splitting the 4 stacked switches into two pairs to separate some of the storage traffic and prevent contention at the pSwitch level?

Finally, I am aware of the glaring omission in this post - the management network! Due to the particular security arrangements of the production LANs, the management traffic runs on the same subnet AND VLAN as the data traffic. This is unlikely to change. The production LAN only grew to two subnets/VLANs because we now use VoIP and the supplier insisted we segregate the voice traffic.

Thanks for getting this far! Any suggestions or comments are greatly appreciated,

Richard

4 Replies
Josh26
Virtuoso

Richard Black wrote:


  • Should I use one vSwitch for all storage traffic? Or one for vMotion, one for datastore iSCSI and one for backup iSCSI?

Finally, I am aware of the glaring omission in this post - the management network!

Hi,

On the first issue, vMotion traffic is not storage traffic. I would heavily recommend against placing it on the same vswitch/pnic as your iSCSI.

On the second, it may not be "best practice", but in reality sharing management with data traffic will be fine. You can use a separate VLAN to provide network security without adding new hardware.

dickieblack
Contributor

Hi Josh26,

Thanks for that. As our data network at this location is relatively small, the only IP addresses which would constitute a management network are the 4 server remote access cards, the switch management IP and the vmkernel ports of the two servers!

As for using a separate vSwitch for vMotion and storage, I see your point. The reason I was looking at using the same vSwitch was to allow redundancy on the backup host in case of pNIC failure. However, this may actually be irrelevant as on this host the dual port Intel pNIC I will be using seems much more likely to suffer a total failure than a single port failure.

I do want to keep vMotion traffic off the production network though (the only other currently physically separate network), so I will be using the same switch stack for the physical connections of all three of these networks.

Would you definitely recommend against sharing a vSwitch for the vMotion network with the second iSCSI network (not the first one) which will only be used during backup to tape operations?

Richard

AndreTheGiant
Immortal

With good switches, you can also put vMotion on the iSCSI switches.

But use a different VLAN and different NIC for iSCSI and vMotion.

That said, IMHO the better solution is to have vMotion (and FT) on the core switches (not the iSCSI ones) and use a VLAN to isolate the network.
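That alternative is a very small change on the host side - something like the following, assuming the production LANs hang off vSwitch1 (the vSwitch name, VLAN ID and address are placeholders for illustration):

```shell
# vMotion as a VLAN-tagged port group on the core-switch uplinks,
# leaving the iSCSI vSwitch for storage traffic only.
esxcfg-vswitch -A "vMotion" vSwitch1
esxcfg-vswitch -v 30 -p "vMotion" vSwitch1
esxcfg-vmknic -a -i 10.0.30.11 -n 255.255.255.0 "vMotion"
```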

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
dickieblack
Contributor

Andre,

Thanks very much for this suggestion. I didn't get a notification of the post though, so sorry for the delay in replying.

The production network is mostly out of my hands, although if I can get pointers to some VMware documents which give likely bandwidth usage for vMotion (and FT, if we start to use it), I might be able to get those running on the production network switches. These are a brand new Alcatel-Lucent stack, compared to what I will be using for storage, which is a Netgear GS724TS stack. If not, vMotion will be on a separate VLAN to iSCSI.

Do you have any thoughts on separating the two iSCSI functions of backup-to-tape and datastore access? I have been advised that backing up to an iSCSI tape drive could impact the datastore iSCSI connections if they share the same network. Is this correct? Would separating them into two VLANs on the same vSwitch and pSwitch be enough?

Thanks,

Richard
