VMware Cloud Community
beriapl
Contributor

Change MTU to JumboFrame for vSan

Hello,

I've got a small question about changing the MTU on a live 10-node vSAN cluster.

On the network side I've already prepared (changed) the MTU for vMotion. I've also added a separate interface for vSAN witness traffic (since it uses different settings, I kept MTU 1500 for the witness traffic).

Now I need to change the vSAN vmkernel interface from MTU 1500 to 9000. I have a small doubt: what if, in the middle of that change, 5 hosts already have MTU 9000 while the other half are still on MTU 1500 - is that asking for trouble? Or is vSAN intelligent enough not to send 9000-byte frames until all nodes in the cluster have jumbo frames set on their vmkernel interfaces?

I can't migrate the VMs off the vSAN cluster.
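For reference, the current MTU of every vmkernel interface on a host can be listed from the ESXi shell; this is a sketch of a read-only check (the interface name `vmk2` for vSAN is an assumption - yours may differ):

```shell
# List all vmkernel interfaces on this host, including their MTU
# and enabled services - a quick way to see which vmks are still at 1500.
esxcli network ip interface list
```

Running this on each host before and after the change makes it easy to track which half of the cluster is on which MTU.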


3 Replies
CyberNils
Hot Shot

I am pretty sure you will get into trouble. I have always shut down every VM while doing this, and if I recall correctly there were some network outages between the nodes. You will also have to increase the MTU on your vDS, which also causes a 20-40 second outage.

Maybe this KB can help you?

https://kb.vmware.com/s/article/76162
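To confirm that the vDS-level MTU change actually took effect on each host, you can query the distributed switch from the ESXi shell; a minimal read-only sketch:

```shell
# Show the distributed switches this host participates in,
# including the configured MTU and their uplinks.
esxcli network vswitch dvs vmware list
```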

 



Nils Kristiansen
https://cybernils.net/
beriapl
Contributor

Thanks for the quick response

Well, I've changed vDS MTU to 9000 already. Nothing major happened during that change.

I actually read that KB earlier - but to me it is "almost" the same thing - just creating a new interface with a different MTU. So there will again be a moment when I've swapped 5 hosts to the new vmk while the other half still remains on the old MTU.

TheBobkin
Champion

@beriapl 

"I am pretty sure you will get in trouble."
This. While it may not cause a cluster partition (that would only occur on a leave/rejoin attempt), performance could plummet and the VMs wouldn't like that.

 

"Well, I've changed vDS MTU to 9000 already. Nothing major happened during that change."
Yes, that is completely expected: the traffic is still operating at 1500 MTU because the vmk is the end-point and thus dictates the packet size (though of course everything in between the vmks on each node does need to be correctly configured at 9000).
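You can verify that the path between two vmks actually carries jumbo frames end-to-end with vmkping; a sketch, where `vmk2` and the peer IP `192.168.50.12` are placeholders for your own vSAN vmk and a neighbouring node's vSAN address:

```shell
# -d forbids fragmentation, -s sets the ICMP payload size.
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header), the largest
# payload that fits in a single 9000-byte frame.
vmkping -I vmk2 -d -s 8972 192.168.50.12
```

If this fails while a plain `vmkping` succeeds, something in between (vDS, physical switch ports) is still at 1500.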

 

"just creating a new interface with a different MTU. So there will be again the same moment when I will swap 5 hosts to new vmk while another half will still remain on old MTU."
I think @CyberNils was maybe thinking along the same lines as what I was going to suggest:
1. Add a new vsan-tagged vmk to all nodes in the cluster, in a new subnet and with 9000 MTU - it *shouldn't* switch over to these automatically, i.e. it will still use the original ones.
2. Place a node in Maintenance Mode, untag vsan-traffic on the original vmk, and validate that the node doesn't become partitioned from the cluster (i.e. that it is using the new vmk to communicate with the other nodes).
3. Proceed like this until all nodes are using only the new vmks, then remove the old ones.
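The steps above can be sketched with esxcli for one host (untested; the vmk numbers, portgroup name, IP, and netmask are assumptions - adapt them to your environment, and note that on a vDS the vmk is usually added via the vSphere Client rather than with a standard portgroup name as shown here):

```shell
# 1. Add the new vmk with MTU 9000 in the new subnet and tag it for vSAN.
esxcli network ip interface add -i vmk3 -p "vSAN-9000-PG" -m 9000
esxcli network ip interface ipv4 set -i vmk3 -t static -I 192.168.60.11 -N 255.255.255.0
esxcli vsan network ip add -i vmk3

# 2. With the host in Maintenance Mode, untag vSAN traffic on the old vmk.
esxcli vsan network remove -i vmk2

# 3. Once every node only uses the new vmks, remove the old interfaces.
esxcli network ip interface remove -i vmk2
```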

 

That being said, network changes are not something that should be done on the fly; just because something is technically feasible doesn't make it a good idea. If you can get a short downtime, or a quiet time of day/week to do this, then do that - there is just too much scope for error, and/or after-the-fact realisations of things that were missed or of incorrect assumptions about the configuration. Trust this coming from someone who helps clean up the aftermath of such changes more frequently than would be preferable.