rpmello
Enthusiast

iSCSI/VMKernel Default Gateway in ESX 3.5


I am in the process of doing some iSCSI testing so I can add additional storage to VMs. While trying to create an iSCSI connection, I found that in 3.5 VMotion and iSCSI cannot be on the same VLAN. That's fine... but the issue is that there is only one place to set the default gateway for the VMkernel. If I put VMotion on VLAN 1 and iSCSI on VLAN 2, and the VMkernel default gateway is on VLAN 1, how will iSCSI work at all?

There must be a very simple solution to this I am overlooking, but it is evading me right now.

Thanks.


Accepted Solutions
Texiwill
Leadership

Hello,

Consider iSCSI to be more point-to-point. Give the iSCSI VMkernel device an IP that does not overlap vMotion but is in the same IP range as the iSCSI server, and when you specify the iSCSI server, specify it by IP. So, for example, the vMotion IP might be 172.16.3.4 with a gateway of 172.16.3.1, while your iSCSI IP is 192.168.132.4 and your iSCSI server is 192.168.132.1. Because the iSCSI target sits on the VMkernel device's own subnet, the traffic will go out the appropriate interface without touching the default gateway. If, however, there is overlap, the traffic will try to go out the vMotion network.

If you have to route iSCSI, there are some other issues. After setting up your iSCSI VMkernel portgroup, go to the service console (SC) and run 'esxcfg-route -l' to list your routes. You would also use esxcfg-route to add the route to the gateway/router/firewall for the iSCSI portgroup if necessary.
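The "point-to-point" reasoning above can be sketched with Python's standard `ipaddress` module. This is a simplified model of the routing decision, not ESX's actual stack; the function name and the off-subnet target address are illustrative:

```python
import ipaddress

def next_hop(vmk_ip, vmk_netmask, target_ip, default_gateway):
    """Simplified model: a packet to target_ip either goes directly on the
    link (target is in the VMkernel port's own subnet) or via the single
    default gateway."""
    interface = ipaddress.ip_interface(f"{vmk_ip}/{vmk_netmask}")
    target = ipaddress.ip_address(target_ip)
    if target in interface.network:
        return "direct"  # same subnet: point-to-point, gateway never consulted
    return f"via {default_gateway}"  # off-subnet: must be routed

# Texiwill's example: the iSCSI target is on the iSCSI vmk's own subnet,
# so the gateway (which lives on the vMotion subnet) is irrelevant.
print(next_hop("192.168.132.4", "255.255.255.0", "192.168.132.1", "172.16.3.1"))
# → direct

# A target outside both subnets would need the gateway, i.e. routed iSCSI.
print(next_hop("192.168.132.4", "255.255.255.0", "10.0.0.5", "172.16.3.1"))
# → via 172.16.3.1
```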


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

--
Edward L. Haletky
vExpert XIII: 2009-2021,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill


7 Replies
rpmello
Enthusiast

That makes sense... I am not much of a networking expert, so I had not considered that the iSCSI traffic would not need to be routed at all in my environment.

Thanks.

Erik_Zandboer
Expert

Wait a minute!

Do not try to put both VMotion and iSCSI on the same VLAN, and especially not on the same pNICs!! VMotion will have a serious impact on iSCSI performance, since a VMotion generates a lot of network load while it runs.

The solution: keep them apart. Please, always DO make separate networks for iSCSI and VMotion. You can accomplish this by creating two vSwitches, each containing a VMkernel network. The one used for VMotion should have the "use for VMotion" option set; the other should not. Once again, NEVER mix them.

You might still wonder how to go about the single gateway. The solution is simple: for both VMotion and iSCSI, you never want to leave your subnet. Leaving your subnet means routing traffic, something you do not want for iSCSI or VMotion. And since the gateway is used only for inter-subnet communication, you basically do not even need one. The only traffic you MIGHT ever route is iSCSI; VMotion is always local, so use one single subnet for all ESX servers. So the simplest solution: set the gateway to match the iSCSI network, and configure VMotion to use another subnet and other pNICs. This effectively keeps them apart, without any issue from having only one gateway.
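The two-subnet plan described above is easy to sanity-check with the stdlib `ipaddress` module. This sketch uses hypothetical subnets and a hypothetical gateway address purely for illustration, reusing the earlier example ranges:

```python
import ipaddress

# Hypothetical plan: vMotion and iSCSI on separate, non-overlapping subnets.
vmotion = ipaddress.ip_network("172.16.3.0/24")
iscsi = ipaddress.ip_network("192.168.132.0/24")

# If this fails, traffic for one network could leak onto the other.
assert not vmotion.overlaps(iscsi), "vMotion and iSCSI subnets must not overlap"

# Point the single VMkernel gateway at the iSCSI side (illustrative IP):
# only iSCSI can then ever be routed, and vMotion stays strictly local.
gateway = ipaddress.ip_address("192.168.132.254")
print("gateway lives on iSCSI subnet:", gateway in iscsi)
print("gateway lives on vMotion subnet:", gateway in vmotion)
```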

Visit my blog at http://www.vmdamentals.com
ucfvmware
Contributor

Not sure if you are familiar with ESX 3.5, but it does not even allow you to put iSCSI and VMotion on the same VLAN. I did say I was not doing this in my original post though...

Erik_Zandboer
Expert

Hi,

I was referring to the reply by Texiwill. He recommends using a single subnet for both, and I would not second that unless you are able to separate the traffic physically. But why go that way when you can neatly split it up into two separate subnets/VLANs?

I am very familiar with ESX 3.5; it DOES allow you to use a single VMkernel network for both VMotion and iSCSI without issues, whether all in access mode or all in one single VLAN. When using the iSCSI software initiator, you must remember to add a second service console port to that vSwitch, because the iSCSI initiator runs from a service console (another reason not to route iSCSI traffic).

Visit my blog at http://www.vmdamentals.com
Texiwill
Leadership

Hello,

I absolutely do not recommend a single subnet for both. I stated they should NOT overlap, and I explained that if they do overlap, everything will go out the vMotion network regardless of any other setting. It is much safer to use two distinct networks and NOT overlap iSCSI/vMotion.

When using iSCSI without iSCSI HBAs and you have vMotion capability, you really want at least 8 pNICs for full redundancy, security, and capability. That implies at least 4 networks that DO NOT overlap in any way, else you end up with some pretty interesting networking headaches. In addition, the SC port must participate somehow in the iSCSI network, whether that is by using another SC port, using a router, etc. There are many ways to dress this particular penguin. Since the authentication traffic over the SC is so low bandwidth, I would rather set up a single route for that specific traffic via a firewall, but others will place the SC on the iSCSI network, and others will use a router...


Best regards,

Edward L. Haletky

mbell98
Contributor

This is a very interesting thread and brings up some questions. First it seems that there are a couple of different issues being addressed in the thread. The first is the question of whether VMotion/iSCSI/Service Console should be on the same subnet. The second is whether they should have different physical paths. It seems to me that the question of utilization would only have an impact if you're talking about separate physical paths. If I've got a single trunked Ethernet connection, I can put everything on separate subnets but still be limited by the Ethernet connection.

Here is my own setup: I have four ESX servers (three are blades) with two NICs each and both of those NICs are in vSwitch0. vSwitch0 is set to load balance (ip hash) across both of those NICs. My VMkernel and service console are both on the same subnet and my iSCSI SAN is on a different subnet through a layer 3 switch. From the blade chassis switch, I have bonded GigE connections to the core switch that my SAN ports are connected to.

Ultimately, the two physical NICs on the servers and the bonded GigE connections between the blade chassis and core switch would be the limiting factor for utilization no matter which subnet/VLAN my VMkernel and service console are placed on.
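The bandwidth-ceiling point above is worth making concrete. With "route based on IP hash" teaming, the uplink is chosen deterministically from the source/destination IP pair, so any single flow (for example, one iSCSI session between one host and one target) always lands on the same pNIC and can never use more than one link's bandwidth. The function below is a simplified sketch of that idea; VMware's actual hash is internal to ESX and may differ:

```python
import ipaddress

def ip_hash_uplink(src_ip, dst_ip, num_uplinks):
    """Simplified IP-hash teaming: pick an uplink index from the
    source/destination IP pair. The exact hash ESX uses may differ;
    the key property is that it is deterministic per IP pair."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % num_uplinks

# The same IP pair always maps to the same pNIC, so one iSCSI session
# never spreads across both NICs in the team.
a = ip_hash_uplink("10.0.0.4", "10.0.1.10", 2)
b = ip_hash_uplink("10.0.0.4", "10.0.1.10", 2)
print(a == b)  # True: a single flow is pinned to one uplink
```

Different IP pairs can hash to different uplinks, which is why IP-hash teaming balances well across many flows but does nothing for a single large one.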

What I do not understand is exactly why the VMkernel and service console being on the same subnet is a problem. All I have seen is "don't do it," but that doesn't explain exactly what problems it would cause. Any further explanation would be much appreciated!
