VMware Cloud Community
GMarsh
Contributor

SAN allocation; split or keep one large LUN?

We are designing a new cluster and the conversation turned to the SAN connections. The idea suggested is to have one LUN strictly for VM OSes and another LUN for data, i.e., file share data, SQL databases and log files, etc.

There is an argument from another engineer that we should have it all on one LUN, because splitting them will make one LUN very busy with data writes while the other sits mostly idle.

Which do you think is the better model or best practice? Split the LUNs and designate one for OS and the other for data? Or one large LUN, and just create the .vmdk files on it?

23 Replies
msemon1
Expert

I should have been more specific. We have three clusters

Production

Test and Development

Email servers

So the largest cluster is actually only ten hosts. We do not split out VMDKs between datastores, which is good for backup; however, we are looking at DR for these servers. With SRM, do you need to split the VMDKs across storage, or can you leave them together?

Thanks,

Mike

Ken_Cline
Champion

I should have been more specific. We have three clusters

Production

Test and Development

Email servers

So the largest cluster is actually only ten.

OK, but you said that all hosts see all LUNs, so from a storage point of view, you effectively have one cluster.

We do not split out VMDKs between datastores, which is good for backup; however, we are looking at DR for these servers. With SRM, do you need to split the VMDKs across storage, or can you leave them together?

With SRM you rely on your storage vendor's replication technology. Since most replication is done at the LUN level, splitting a VM's .vmdks across multiple LUNs adds quite a bit of complexity and the potential for the LUNs to be out of sync (since they're independent replication units). If you plan to use SRM (or replication in general), I would recommend keeping the .vmdks together on the same LUN.

Ken Cline

VMware vExpert 2009

VMware Communities User Moderator

Blogging at: http://KensVirtualReality.wordpress.com/
markdean
Enthusiast

Well, yes and no on whether size matters for a LUN. Remember, a SCSI reservation locks the LUN, not the disk file, so while it is locked, no other host can access it. If you have only a few VMs with large disk files, then a large LUN size is not going to be a problem. But if you have a bunch of 10 or 20 GB disk files on a 1 TB LUN, you may start experiencing some I/O problems from excessive reservations, depending on what you are doing.

To your point, it is only one thing to consider, but you certainly don't want to "throw it out" as "pointless"; it will bite you if you make the LUN too large with a large number of VMs with small disk files. This is especially true when powering on a large number of VDI VMs (or any VMs, for that matter) at one time.
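To illustrate that point with a rough back-of-the-envelope sketch (hypothetical arithmetic, not anything from VMware's tooling): the number of VMs sharing a single reservation lock grows directly with LUN size when the disk files are small.

```python
def vms_per_lun(lun_gb, vmdk_gb, headroom_gb=0):
    """Rough count of VMs whose disk files fit on one VMFS LUN.

    Every VM on a VMFS volume shares that LUN's SCSI reservation,
    so a higher count means more hosts potentially contending for
    the same lock during metadata operations (power-on, snapshot
    creation, and so on).
    """
    usable_gb = lun_gb - headroom_gb
    return max(usable_gb // vmdk_gb, 0)

# A 1 TB LUN packed with 20 GB disk files puts ~51 VMs behind one
# lock; the same disks spread over 200 GB LUNs share it only 10 ways.
print(vms_per_lun(1024, 20))  # 51
print(vms_per_lun(200, 20))   # 10
```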

Mark Dean

VM Computing

Brucealeg
Enthusiast

Ok,

I took some notes and compared them to what I know and what I think I know. It left me with some questions.

1) I get that a bunch of 10 GB WinXP VMDKs on a large LUN might be bad for I/O. I do it in 100 GB increments to limit the number of LUNs and keep manageability sane. I assume the SCSI locks come into play if you have VMotioned VDI VMs to different hosts but they remain on the same LUN, i.e., two hosts accessing the same LUN. My feeling was SCSI locking would be minimal using 10 VMs x 10 GB = 100 GB, plus 10 GB for snapshots etc., on a 110 GB LUN. Does that seem logical?

2) Swap files. In ESX 2.x you had a swap volume, and in 3.x you can have a dedicated swap area or keep the swap with the VM on its LUN. Is there a best practice for this? I assumed keeping swap on local disks might be good, as that I/O capacity is mostly unused.

3) The debate on how many VMs to a LUN. Again, I assume the most SCSI locks will occur when VMs sharing a LUN are on different hosts. In our 2.x days we kept every VMDK on its own LUN, and it was a management nightmare; plus we had heap memory issues with ESX 2 managing that much storage. We'd run out of heap memory long before system memory, CPU, I/O, etc. Is that an issue at all in 3.x? And is the consensus now that, at the very least, a 1:1 VM-to-LUN mapping is probably bad in this day and age? I assume we want at least one VM (plus all its VMDKs) per LUN? I always had concerns that one VMDK per LUN meant VMs were having to go down different paths to get to their data, and this might be bad. I have considered making 200-400 GB LUNs and adding multiple VMs depending on workload, but I want to be careful of performance and SCSI lock issues.

I know I've digressed, as some of this was already discussed. I get the performance side of I/O (the storage group's total capacity carved into LUNs and used by VMs) and the need to balance VM I/O against what the resource can deliver. I guess I am more concerned with SAN limits, SCSI locks, and the behind-the-scenes things that will cause issues or drops in performance.
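For what it's worth, the sizing arithmetic from question 1 can be written out explicitly (a minimal sketch; the flat 10 GB snapshot reserve is just the rule of thumb used above, not a VMware recommendation):

```python
def lun_size_gb(num_vms, vmdk_gb, snapshot_reserve_gb=10):
    """LUN size as the sum of the VMDK sizes plus a flat reserve
    for snapshots and other overhead."""
    return num_vms * vmdk_gb + snapshot_reserve_gb

# 10 VMs x 10 GB each, plus 10 GB for snapshots -> 110 GB LUN
print(lun_size_gb(10, 10))  # 110
```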

Thanks for the discussion, great thread.
