VMware Cloud Community
halewoodj201110
Contributor

VMware vSphere Hypervisor (ESXi) 5.0 Networking Question

Hi All,

This is my first post here, so please be gentle!

Basically, I have a question relating to possible ways to set up VMware ESXi 5.0 on two machines that I'm putting in a datacentre. The machines run the free version of ESXi, so advanced features such as distributed switching are sadly out of the question.

Both machines have identical hardware with multiple NICs, but one is used to host services, and the other is basically used as a storage device for backups. I have two uplinks from the datacentre switch, and ideally want to make the best use of them (i.e. the combined bandwidth of both links available to the main server, with the second server still having access to one of the uplinks).

So, would any of the following be suitable?

1) Connect one uplink directly into the first server, and have the other uplink go into a very basic layer 2 switch, which is in turn connected to the two servers.

2) Connect one uplink directly to each machine, and have a cross-over cable run between the two machines that is connected to additional NICs on the same vSwitch as the uplink on each machine.

Any ideas and opinions regarding this are much appreciated!

Thanks,

Jon

4 Replies
iw123
Commander

Two connections aren't really enough to do this properly. Your best solution really would be to connect each machine individually.

To get the benefits of both links for both machines you would need another switch in there - one that is capable of EtherChannel, which it doesn't sound as though your basic one will be able to do. This doesn't take into account any redundancy either, as there are single points of failure throughout.
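
For reference, if you ever did put an EtherChannel-capable switch in front of the hosts, the vSwitch side of that pairing is the "Route based on IP hash" teaming policy. Here's a rough sketch of that setting using the pyVmomi Python SDK - the hostname and credentials are placeholders, and note the free license typically makes the API read-only, so in practice you'd set this in the vSphere Client:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi1.example.com", user="root", pwd="secret", sslContext=ctx)

# Standalone host: rootFolder -> Datacenter -> ComputeResource -> HostSystem.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# Reuse the existing vSwitch spec and switch teaming to IP hash; this only
# works correctly with a matching static EtherChannel on the physical switch.
vsw = next(v for v in netsys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vsw.spec
spec.policy.nicTeaming.policy = "loadbalance_ip"
netsys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)

Disconnect(si)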

*Please don't forget to award points for "helpful" and/or "correct" answers
wdroush1
Hot Shot

halewoodj wrote:

So, would any of the following be suitable?

1) Connect one uplink directly into the first server, and have the other uplink go into a very basic layer 2 switch, which is in turn connected to the two servers.

2) Connect one uplink directly to each machine, and have a cross-over cable run between the two machines that is connected to additional NICs on the same vSwitch as the uplink on each machine.

I'd go with #2, though it is going to make virtual networking a bit of a mess; with #1 you don't want the first server going down and cutting off the second server (bleh). What would be best is a switch with link aggregation, but as stated before, your basic switch may not support that.

If you don't care at all about server 1 cutting off server 2, I'd go with #1 due to its simplicity in network setup (with #2 you'll have to have your own subnet across the crossover link, plus dual NICs and specific configs on all your VMs, which is kind of a pain).
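
If you do go with #2, the per-VM half of that overhead is just adding a second NIC backed by the internal port group. A rough pyVmomi sketch of that step - the "Internal" port group name is a placeholder, and with the free license the API is read-only, so this is really what you'd click through in the vSphere Client for each VM:

from pyVmomi import vim

def add_internal_nic(vm, portgroup_name="Internal"):
    """Attach a second NIC, backed by the internal port group, to a VM."""
    nic = vim.vm.device.VirtualE1000()  # widely supported emulated NIC
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.backing.deviceName = portgroup_name
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, allowGuestControl=True, connected=True)

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = nic
    # vm is a vim.VirtualMachine looked up from an authenticated connection.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Each VM then needs an address on a private subnet for the crossover side (say 10.0.0.0/24), separate from its datacenter-facing address.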

Though to be honest, I'd probably bind each one to just one port and keep it that way until you get a better setup, or buy a dual-uplink router or something. I doubt you're planning on consuming 1Gbps in the datacenter, right?

halewoodj201110
Contributor

So with this, you are suggesting that the "direct" uplink should be connected to a different vSwitch to the "crossover"/shared uplink? i.e. each machine has two "public" vSwitches, presumably to avoid introducing a loop into the topology?

And no, I'm unlikely to be using 1Gbps in the datacentre, but it seems daft to have the second uplink sitting there doing nothing when it's included in the agreement with the DC. :)

With regard to the single point of failure that was mentioned, this is not an issue, as the services these two servers provide are protected from failure at the application level (the config and data are mirrored elsewhere).

Thanks,

Jon

wdroush1
Hot Shot
Accepted Solution

halewoodj wrote:

So with this, you are suggesting that the "direct" uplink should be connected to a different vSwitch to the "crossover"/shared uplink? i.e. each machine has two "public" vSwitches, presumably to avoid introducing a loop into the topology?

Correct: you'll have an "external" vSwitch to the datacenter and an "internal" one for the crossover. You'll treat them as separate networks, and you'll have to configure all the VMs to talk to each other over the internal network (this is the "annoying management overhead" part, where VMs have direct datacenter access AND internal networks).
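
On each host that works out to one extra vSwitch bound to the crossover NIC, plus a port group for the VMs. A rough pyVmomi sketch - the vmnic numbering and names are placeholders, and as before the free license makes the API read-only, so the vSphere Client equivalent is: add a vSwitch, bind it to the crossover NIC, add a port group:

from pyVmomi import vim

def create_internal_switch(host, nic="vmnic1", pg_name="Internal"):
    """Create the crossover-facing vSwitch and a VM port group on it."""
    netsys = host.configManager.networkSystem

    vsw_spec = vim.host.VirtualSwitch.Specification()
    vsw_spec.numPorts = 128
    # Bind the vSwitch to the physical NIC the crossover cable plugs into.
    vsw_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[nic])
    netsys.AddVirtualSwitch(vswitchName="vSwitchInternal", spec=vsw_spec)

    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = pg_name
    pg_spec.vswitchName = "vSwitchInternal"
    pg_spec.vlanId = 0
    pg_spec.policy = vim.host.NetworkPolicy()
    netsys.AddPortGroup(portgrp=pg_spec)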

And no, I'm unlikely to be using 1Gbps in the datacentre, but it seems daft to have the second uplink sitting there doing nothing when it's included in the agreement with the DC. :)

Yeah, I still think the best way to use that second uplink would be a dual-WAN firewall; you'll simplify the setup greatly, and you can get one fairly cheaply.
