VMware Horizon Community
defrogger
Enthusiast

View Storage Tiering

Hello, I'm a little new to VDI. VDI is already in place and I'm taking things over. There isn't much deployed yet, and we want to start using storage tiers.

What we have

vCenter 5.5

ESXi 5.5 (5 hosts)

VMware View 6

Storage

NetApp array - there are three LUNs of flash SSD storage, 1 TB each; two LUNs of 15k SAS drives, 500 GB each; and three LUNs of SATA storage, 1 TB each. We also have plenty of SATA storage we can add if need be, and might be able to add more SAS if needed.

So I've been going over things, and they're not using the SAS or SATA storage. They're only using the SSD storage, which concerns me since we plan on adding more pools and SSD is at a premium.

We are using linked clones, and currently there are two pools. Each pool is a floating pool, so there is no need to keep user data, and they refresh at night. One pool is only 15 desktops and the other is 120 desktops.

The two pools were proof-of-concept pools, and they are actually being used for production now.

Going forward, the plan is to add more pools, possibly another 400 VMs. As of right now the new pools would also be floating, so no need to keep user data.

I want to do storage tiering, as there is no budget to buy more SSD storage.

I know I can split the storage so that the replicas are on SSD storage and the OS disks and other files are on slower storage.

So now some questions

1. My understanding is that replicas should be on Tier 1 (SSD) storage, and the OS disks and other files can be on slower storage. But should that be 15k SAS storage, or can it be slower SATA storage? Hopefully SATA.

2. Let's say I end up with 5 pools. I'm assuming I can have more than one replica on the same LUN? Or should I be splitting up my LUNs per replica? That would mean I need to carve out my LUNs differently.

3. There is a limit of roughly 128 VMs per LUN. How does this work when introducing storage tiering? I'm assuming it doesn't matter: the replica would be on the SSD LUN, the rest would go on the SATA or SAS LUN, and I could have up to 128 VMs on the SATA/SAS LUN?

4. The 128 VMs per LUN means I might need more LUNs? Currently I do have 5, but I'm not sure if we will use the SAS LUNs, so I would need to add a couple more SATA LUNs?

5. So best practice says 128 VMs per LUN. Currently my LUNs are all from the same RAID volume. Does this matter? Is VMware saying to split the LUNs up because it assumes the LUNs will be on separate RAID volumes, or just because of the VMFS file system itself?
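To put rough numbers behind questions 3 and 4, here is a back-of-envelope sketch (Python, purely illustrative). The per-clone delta size and free-space headroom are assumptions to replace with figures from your own environment; only the 1 TB SATA LUN size and the 128-VM guideline come from the post above.

```python
import math

VM_PER_LUN_GUIDELINE = 128   # View linked-clone guideline, not a hard limit
SATA_LUN_GB = 1024           # 1 TB SATA LUNs from the post
CLONE_DELTA_GB = 6           # hypothetical average delta + swap per linked clone

def clones_per_lun(lun_gb, delta_gb, headroom=0.8):
    """Clones one LUN can hold: the lower of the capacity bound
    (with ~20% free-space headroom) and the 128-VM guideline."""
    capacity_bound = int(lun_gb * headroom // delta_gb)
    return min(capacity_bound, VM_PER_LUN_GUIDELINE)

def luns_needed(total_desktops, lun_gb, delta_gb):
    """LUNs required to spread the linked-clone disks across."""
    return math.ceil(total_desktops / clones_per_lun(lun_gb, delta_gb))

print(clones_per_lun(SATA_LUN_GB, CLONE_DELTA_GB))    # 128 (guideline binds here)
print(luns_needed(400, SATA_LUN_GB, CLONE_DELTA_GB))  # 4
```

With these assumed numbers the 128-VM guideline is the binding limit, not capacity; if your clone deltas grow larger, the capacity bound takes over and you need more LUNs.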

I'm going to be reading up a lot more on VDI, but I figured I'd go to the forums first to see what people think.

Thanks

Accepted Solution
JackMac4
Enthusiast

You don't need to split the LUNs per replica. I would put all your replicas on SSD, and yes, they can share a LUN across many pools.

You'll just want to have multiple LUNs that the pool(s) can balance over. Create as many LUNs as you need to spread things out, but really think about spindles and IOPS. 128 is a guideline, not a hard rule. It's all about having enough IOPS to serve the requests you'll get during peak use.

I would go with SAS if you can, but SATA is acceptable - it just all affects user experience. Consider using SATA for task workers or people that don't generate a lot of storage use. Trial and error and user acceptance would be good to go through here if you haven't yet.
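The point above, that 128 is a guideline while IOPS is the real constraint, can be sketched like this (Python; the backend-IOPS budget and per-desktop IOPS figures are made-up placeholders, not measured values):

```python
GUIDELINE = 128  # View linked-clone guideline from the thread

def vms_per_lun(lun_backend_iops, iops_per_desktop):
    """VM count one LUN can serve: whichever binds first,
    the IOPS budget or the 128-VM guideline."""
    return min(lun_backend_iops // iops_per_desktop, GUIDELINE)

# e.g. a LUN backed by spindles worth ~1900 IOPS, task workers at ~15 IOPS each:
print(vms_per_lun(1900, 15))   # 126 -> IOPS binds just below the guideline
print(vms_per_lun(4300, 15))   # 128 -> guideline binds
```

On slow spindles the IOPS budget usually runs out before the 128-VM guideline does, which is why measuring real per-desktop IOPS during peak use matters more than the rule of thumb.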

---- Jack McMichael | Sr. Systems Engineer VMware End User Computing Contact me on Twitter @jackwmc4

3 Replies
defrogger
Enthusiast

Thanks for the reply. I've been doing a lot of reading over the last couple of days, and I'm starting to get a better understanding of how all this works. Like you said, I'm going to have to figure out what applications are going to be on the desktops to see if we can get away with SATA. Hopefully we can.

Just to confirm something, and from what I read this is correct: let's say I have 5 pools of linked clones with roughly 400 desktops. I would then end up with 5 replicas.

So the plan would be to use storage tiering and put the replicas on one SSD LUN. Then I would have 4 LUNs that I can spread the linked-clone disks on. So if a LUN can handle 128 VMs, then 4 LUNs should do it.

So I only need to spread the linked-clone disks across 4 LUNs, and the replicas I can keep on one SSD LUN. Is that correct?
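For what it's worth, a one-line sanity check of that arithmetic (assuming the 128-VM-per-LUN guideline holds for this workload):

```python
import math

# 400 linked clones spread at the 128-VM-per-LUN guideline,
# with all replicas kept on a separate SSD LUN.
desktops, per_lun = 400, 128
print(math.ceil(desktops / per_lun))  # 4 linked-clone LUNs
```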

Thanks Again

defrogger
Enthusiast

Forgot to ask one more question. I'm not too good at figuring out IOPS and such. I can tell you we have NetApp shelves with 24 drives in them, so plenty of spindles.

Currently the LUNs are all on the same aggregate (RAID group), which is on the same shelf.

Should I be using LUNs from different aggregates, or does that really not matter?

Granted, I'm not sure I have the option at the moment to use LUNs from another aggregate; I'd have to look into it. Actually, I do know the SAS LUNs are on different aggregates than the SATA, but in the end we would prefer to use the SAS for something else.
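As a starting point on the IOPS question, here is a very rough per-shelf estimate (Python). The per-spindle numbers are generic rules of thumb, not NetApp specifications, and they ignore RAID write penalties and controller caching, so treat the output as an upper bound to validate against real monitoring:

```python
# Commonly cited per-spindle IOPS rules of thumb (assumptions, vary by
# drive model and workload):
PER_SPINDLE_IOPS = {"sas_15k": 180, "sata_7k2": 80}

def shelf_iops(drive_type, spindles=24):
    """Raw aggregate IOPS a shelf of N spindles can roughly serve."""
    return PER_SPINDLE_IOPS[drive_type] * spindles

# The 24-drive shelves mentioned above:
print(shelf_iops("sas_15k"))   # 4320
print(shelf_iops("sata_7k2"))  # 1920
```

Dividing the raw figure by an estimated steady-state IOPS per desktop gives a first guess at how many clones each tier can carry, which is exactly the kind of number to confirm with measurements before committing to SATA.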


Thanks
