VMware Cloud Community
vmproteau
Enthusiast

EVA SAN Fibre Channel Disk - Theoretical Question

I wouldn't be too concerned myself, but a client is asking.

Which of these disk groups would have better performance? Assume all drives within a group hold identical data.

  • 16 × 300 GB 15K Fibre Channel drives

  • 16 × 450 GB 15K Fibre Channel drives

How would performance compare under identical disk operations? I'd assume the larger drive would technically be a bit slower, but I was curious whether anyone knew for sure.


Accepted Solutions
RParker
Immortal

Neither. It isn't really about the Fibre Channel interface; underneath, these are still SCSI-class drive mechanisms. Both are 15K RPM, and contrary to popular belief there is no inherent speed difference between, say, a 36 GB and a 400 GB SAS/SCSI drive of the same class. Where they do differ is generation: the 36 GB is older technology, so of course the 400 GB will be faster, but only because of more cache, newer electronics, and so on.

But in your case, neither of those drives is going to be faster. Even if there is (and I use this loosely) a slight performance gain, it is extremely insignificant, and you aren't going to be using them as individual drives anyway. They will be part of a RAID set, so you definitely will not see a difference in performance.

Go with the larger drives, though; you will get more capacity for the price in the long run. Divide up the space to give a mixture of capacity and performance: don't dedicate an entire aggregate/drive cluster to VMFS alone. Carve part of it into general SAN storage (Unix, NTFS) volumes and share the group with your VMFS LUNs.

If you have, say, ten of the 450 GB drives in RAID 4 or RAID 5, you can have roughly 4 TB of usable space. Make 2 TB of VMFS volumes and 2 TB of general SAN storage; that way you aren't consuming all the spindles to handle one heavy load. Try to balance the drives as much as possible to get the most out of them.
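To make that arithmetic concrete, here is a minimal sketch in Python; the 450 GB drive size, single-parity RAID 5, and the even VMFS/SAN split are assumptions taken from the numbers above, not EVA-specific figures:

# Back-of-the-envelope capacity math for the split described above.
# Assumptions: 10 x 450 GB drives, RAID 5 (one drive's worth of parity),
# usable space divided evenly between VMFS and general SAN volumes.

DRIVE_GB = 450
DRIVES = 10
PARITY_DRIVES = 1  # RAID 5 gives up one drive's capacity to parity

raw_gb = DRIVE_GB * DRIVES                       # 4500 GB raw
usable_gb = DRIVE_GB * (DRIVES - PARITY_DRIVES)  # 4050 GB after parity
vmfs_gb = usable_gb / 2                          # half for VMFS datastores
san_gb = usable_gb - vmfs_gb                     # half for NTFS/Unix volumes

print(f"Raw: {raw_gb} GB, usable: {usable_gb} GB")
print(f"VMFS: {vmfs_gb:.0f} GB, SAN: {san_gb:.0f} GB")

That works out to about 4 TB usable from ten 450 GB spindles, which matches the rough figure above.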

3 Replies
mcowger
Immortal

In theory, larger drives of the same generation have slightly better effective seek times (the same data fits in less physical space, so the heads travel shorter distances), but once you are looking at 15K drives that difference is minimal.

The bigger risk is that larger drives take longer to rebuild, so your exposure window after a disk failure is longer.
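A quick sketch of that exposure window; the flat 50 MB/s rebuild rate is an assumed illustrative figure, since real array rebuild/releveling rates vary with controller load:

# Rough rebuild-window comparison for the two drive sizes.
# Assumption: a flat sustained rebuild rate of 50 MB/s (illustrative only;
# actual rates depend on the array, RAID level, and foreground load).

REBUILD_MB_PER_S = 50

for capacity_gb in (300, 450):
    seconds = capacity_gb * 1024 / REBUILD_MB_PER_S
    print(f"{capacity_gb} GB drive: ~{seconds / 3600:.1f} h exposed during rebuild")

Whatever the real rate turns out to be, the 450 GB drive's window is 1.5× the 300 GB drive's, which is the point that matters here.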

--Matt VCDX #52 blog.cowger.us
Dave_Mishchenko
Immortal

If you look at this Seagate document, you'll see that the specs for the 300 GB and 450 GB models are pretty much identical (seek times, latency, transfer rate, cache); the larger-capacity model simply has a greater platter count. http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/cheetah/15K.6/FC/100465943a.pdf The potential downside of the larger drives is that, down the road, they may end up carrying more I/O load simply because they have more space to fill.
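One way to quantify that last point is IOPS density, i.e. how many IOPS each GB of capacity can count on. The per-spindle IOPS figure below is a common rule-of-thumb for 15K drives, not a number from the Seagate datasheet:

# IOPS density: same spindle count, more capacity => fewer IOPS per GB
# once the extra space fills up with active data.
# Assumption: ~180 IOPS per 15K spindle (a ballpark, not a spec).

IOPS_PER_SPINDLE = 180
SPINDLES = 16

for capacity_gb in (300, 450):
    total_iops = IOPS_PER_SPINDLE * SPINDLES
    density = total_iops / (capacity_gb * SPINDLES)
    print(f"{capacity_gb} GB group: {total_iops} IOPS total, {density:.2f} IOPS/GB")

Both groups deliver the same total IOPS, but the 450 GB group spreads them over 50% more capacity, so each GB of data gets proportionally fewer.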