VMware Cloud Community
Lifes_Gamble
Contributor

ESX NetApp SATA Config

We're getting started on our VMware VI3 implementation next week. We plan on using a NetApp system to host our VMs via iSCSI. I was wondering if anyone has a quick answer for me regarding a recommended config. Here's a quick summary:

There are 4 ESX systems that will connect back to the storage. All have 2 hardware-mirrored drives for ESX 3.5 and 8 NICs (gig ports). The NetApp disks they will be connecting to via iSCSI are 500 GB 7,200 RPM SATA drives, and I just came across some info in user forums indicating that SATA drives increase overall latency and will cause ESX not to function reliably. Fibre Channel disks are not an option at this time. I have at least 2 full shelves to use in this setup with 2 NetApp controllers that are clustered. We're not adding any high-transaction systems to the virtualized environment (e.g., SQL, Exchange, etc.) at this time.

I'd like to hear what the recommended aggregate size, LUNs, etc. for the NetApp config should be. We plan on hosting roughly 10 or so servers initially with the ability to add another 5 to 10 later. I want to make sure it is set up optimally using the SATA disks. If or when we do plan to virtualize transaction-based systems we'll definitely purchase FC disks, but for now I have to use what we have.

15 Replies
Troy_Clavell
Immortal

Here is a whitepaper I found... hope it helps.

http://media.netapp.com/documents/tr_3428.pdf

kjb007
Immortal

Since the aggregate doesn't define the RAID groups, I prefer to use a large aggregate, and as few aggregates as I can. Some NetApps allow a larger number of drives per aggregate than others. Then, inside the aggregate, I will create multiple RAID groups. Since you're using 500 GB drives, I would not create large RAID groups, because in case of a failure, your group will take longer to rebuild. One good thing about NetApp is that you have double parity available (RAID 4 or RAID-DP; I prefer RAID-DP, which is comparable to RAID 6), so you can afford multiple drive failures.

I would say put no more than 5 TB per RAID group. Since you only have a few shelves, create 1 aggregate, then create 1 or 2 RAID groups per shelf (that should be ~14 drives per shelf), so that in case of a failure or hardware maintenance, your RAID group, and ultimately your LUNs, don't span more than one shelf.

Then, create your LUNs out of those RAID groups.
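If it helps, here is a rough Python sketch of that per-shelf math (purely illustrative; it assumes 14 x 500 GB drives per shelf and RAID-DP's two parity disks per group, and ignores spares and WAFL overhead):

```python
# Rough RAID group sizing check for a 14-drive shelf of 500 GB disks (illustrative only).
DRIVE_TB = 0.5           # 500 GB drives
PARITY_DISKS = 2         # RAID-DP uses two parity disks per RAID group
MAX_GROUP_TB = 5.0       # "no more than 5 TB per RAID group" guideline above

def group_data_tb(disks_in_group: int) -> float:
    """Data capacity of one RAID-DP group, ignoring spares and WAFL overhead."""
    return (disks_in_group - PARITY_DISKS) * DRIVE_TB

for groups_per_shelf in (1, 2):
    disks = 14 // groups_per_shelf
    tb = group_data_tb(disks)
    verdict = "within" if tb <= MAX_GROUP_TB else "over"
    print(f"{groups_per_shelf} group(s) per shelf: {disks} disks/group, "
          f"{tb:.1f} TB data per group ({verdict} the 5 TB guideline)")
```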

Hope that helps,

-KjB


vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
mike_laspina
Champion

Hi,

I did a fair amount of research on NetApp, Aggregates, WAFL fragmentation, RAID 5 and SCSI vs SATA.

The maximum array (RAID group) size is 14 disks using RAID-DP, and you would aggregate two of those for best performance.

I found that the disk array size is an important factor. The larger a single array is, the more I/O you can push on individual LUNs.

This is due to the NVRAM and the burst-based read/write block algorithm. WAFL will queue up the disk work and perform it in blocks of optimized disk I/O.

This will degrade slowly as fragmentation occurs, so you need to defragment the system regularly (it is high maintenance).

With SATA you will need to use RAID-DP because the reliability of SATA is not that great. The drive itself will not die, but data integrity is an issue with cheaper media (surface error rates!).

If you use RAID 5 you would need to constantly scrub the drives for media errors to prevent data loss in the event of a whole-disk failure.

If you plan to use the Snap features of the FAS products, you will want to factor in that 40-50% of your storage will need to be reserved so you don't oversubscribe the filer's capacity.

SATA lost the battle in my case, but as long as your IOPS are reasonable, everything will be fine because the NVRAM design soaks up the shortfall in disk performance.

Lastly, I did some elementary performance calculations so I could see where I stood when using SATA vs. SCSI.

Ball-park per-disk IOPS:

SCSI 10K: 260
SCSI 15K: 340
SATA 7.5K: 90
SATA 10K: 145

Read Func = IOPS * (n-1)/2
Write Func = IOPS * (n-1)/3
(where n = array disk count)

RAID 5 estimates, Read/Write IOPS by array disk count:

Array disk count          3           4           7           14
RAID 5 - SCSI 10K     260/173     390/260     780/520     1690/1127
RAID 5 - SCSI 15K     340/227     510/340     1020/680    2210/1473
RAID 5 - SATA 7.5K     90/60      135/90      270/180      585/390
RAID 5 - SATA 10K     145/97      218/145     435/290      943/628
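For what it's worth, the same ball-park math in a short Python sketch (the per-disk IOPS figures and the (n-1)/2 and (n-1)/3 factors are the rough assumptions above, not measured values):

```python
# Ball-park RAID 5 read/write IOPS from per-disk IOPS (rough estimates only).
PER_DISK_IOPS = {"SCSI 10K": 260, "SCSI 15K": 340, "SATA 7.5K": 90, "SATA 10K": 145}
DISK_COUNTS = (3, 4, 7, 14)

def read_iops(disk_iops: int, n: int) -> float:
    return disk_iops * (n - 1) / 2    # Read Func from above

def write_iops(disk_iops: int, n: int) -> float:
    return disk_iops * (n - 1) / 3    # Write Func from above

print("Array disk count:   " + "  ".join(f"{n:>9}" for n in DISK_COUNTS))
for drive, iops in PER_DISK_IOPS.items():
    row = "  ".join(f"{read_iops(iops, n):4.0f}/{write_iops(iops, n):4.0f}" for n in DISK_COUNTS)
    print(f"RAID 5 - {drive:10s} {row}")
```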

http://blog.laspina.ca/ vExpert 2009
Lifes_Gamble
Contributor

So, for best performance I need to create a large aggregate with 1 RAID group containing 14 disks, and it's recommended that my LUNs don't span more than one shelf. Correct? Any benefit to creating 2 RAID groups per shelf?

I now have a question on the number of luns...

How many LUNs should I create, and at what size? I read in another forum that 500 GB LUNs are best for VMs (can't find the link to the post), but that seems like an awful lot.

Here's an example of what I was thinking, considering your comments:

Server A: 1 LUN for OS (30 GB); 1 LUN for data storage

Server B: 1 LUN for OS (30 GB); 1 LUN for data storage

4 LUNs total for 2 servers... So if I kept this same config pattern, I would need 20 LUNs for my 10 VMs? Thanks for your help, everyone.

Rumple
Virtuoso

You don't create a LUN per disk in ESX... you create a LUN to store all the disk files (hence the 500 GB size). Most times we run a 10 or 15 GB OS disk and somewhere around a 20 GB data disk (90% of my VMs use 2 x 10 GB disk files, and that's plenty).

If you are using NFS, then the 300-500 GB recommendation doesn't apply, as NFS uses file-level locking, not LUN-level locking. Typically with FC or iSCSI you'd try to stay in the range of 10-15 VMs per LUN (think 2 disks/VM), so that when you start or snapshot a VM the impact is minor. If you have 50 or 60 VMs on a massive LUN and something gets a lock on it, you might have performance issues.
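As a rough illustration of that sizing (the per-VM disk sizes and the 10-15 VMs per LUN range are just the figures from this thread; the per-VM overhead allowance is my own assumption):

```python
# Rough VMFS LUN sizing from the per-VM figures mentioned above (illustrative only).
OS_DISK_GB = 15          # 10-15 GB OS disk
DATA_DISK_GB = 20        # ~20 GB data disk
OVERHEAD_GB = 10         # assumed per-VM allowance for VM swap, snapshots, logs

def lun_size_gb(vms_per_lun: int) -> int:
    return vms_per_lun * (OS_DISK_GB + DATA_DISK_GB + OVERHEAD_GB)

for vms in (10, 15):
    print(f"{vms} VMs per LUN -> ~{lun_size_gb(vms)} GB LUN")
# Roughly 450-675 GB, the same order as the LUN sizes discussed in this thread.
```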

kjb007
Immortal

The benefit of having 2 RAID groups per shelf is increased redundancy. In that scenario, however, using RAID-DP, you will be losing 4 disks to parity instead of 2. The difference, using your disks, is the loss of 1 TB, but you get twice the redundancy per shelf. Remember also that with a larger RAID group size, it takes a longer time to rebuild the RAID group if a disk or two fail. That recovery time leaves you in a vulnerable state, in which another disk failure will destroy your data.

So:

With 2 RAID groups, you have a shorter recovery time and less vulnerable time in case of a disk failure, but you will also lose some storage.

With 1 RAID group, you will have an increased recovery time and increased vulnerable time in case of a failure, but added storage and possibly added IOPS.

Ultimately, you will have to decide which is more important and more relevant in your environment.
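To put numbers on that trade-off, a quick Python comparison (assumes one 14-drive shelf of 500 GB disks with RAID-DP per group, and ignores spares and WAFL overhead):

```python
# Capacity cost of 1 vs 2 RAID-DP groups on a 14 x 500 GB shelf (illustrative only).
SHELF_DISKS = 14
DRIVE_TB = 0.5
PARITY_PER_GROUP = 2      # RAID-DP: two parity disks per RAID group

for groups in (1, 2):
    parity = groups * PARITY_PER_GROUP
    usable_tb = (SHELF_DISKS - parity) * DRIVE_TB
    print(f"{groups} RAID group(s): {parity} parity disks, {usable_tb:.1f} TB for data")
# 1 group: 2 parity disks, 6.0 TB; 2 groups: 4 parity disks, 5.0 TB -> the 1 TB difference above.
```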

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
mike_laspina
Champion

Hi,

The optimum resource (space) utilization on NetApp is RAID-DP. That configuration will give you a balance of fault tolerance and performance. If you have more than 2 of 14 drives failing, something is very wrong with the product; that can happen, but it would show up during your 30-day burn-in before production use. If you use more RAID groups, you will waste more space and will not have optimum performance across the array. The probability of failure will be lower with smaller arrays, but I don't think that's an issue.

I have never witnessed today's disk products having a failure rate higher than ~10% before the MTBF, so if you had 2 of 14 drives go at the same time, that would be a 14% failure rate, which is very abnormal.

If you want to use the snapshot capability on RDMs, then you would use a LUN per VM, but generally NFS on NetApp would be better for that function. So VMFS is what the other posts are referring to for LUN size.

LUN size is somewhat variable in that the application can skew it. Generally, today's OSes will be fine on 500-750 GB LUNs. I have 500 GB LUNs, and it works out to 15-20 VMs per LUN, which is fine.

You want to be careful not to have too much I/O on a single LUN, as it will cause SCSI reservation problems.

http://blog.laspina.ca/ vExpert 2009
urgrue
Contributor

I had horrible experiences with iSCSI. I fought with it for about a year before I gave up and switched to NFS. One problem was extreme slowness, and the second, bigger problem was how iSCSI and VMFS react to that: they just die, and you have to fix it all by hand. NFS is FAR better at recovering from errors.

For one thing, DO NOT use the software initiator. It completely bombs under higher loads. Also, please realize that RAID-DP on SATA disks is a lot slower than you may expect, particularly in I/O, so you really can't trust the performance of a few benchmarks. The problems will only surface when you've got lots of VMs, i.e., high I/O.

Second, don't use iSCSI if you can use NFS. There's nothing better about iSCSI, and a LOT of things are better with NFS, especially on NetApp.

http://viroptics.blogspot.com/2007/11/why-vmware-over-netapp-nfs.html

http://storagefoo.blogspot.com/2007/09/vmware-over-nfs.html

If you want details on the kinds of problems I've had you can read about some of them here:

http://communities.vmware.com/thread/114466

http://communities.vmware.com/thread/84540

http://communities.vmware.com/thread/85309

http://communities.vmware.com/thread/83454

I know iSCSI works great for lots of people, but given how well NFS works and how much simpler and more flexible it is, I can't personally think of any reason to use iSCSI.

mike_laspina
Champion

Everything on SATA will be slower. And unless you can constantly scrub the media, you really need to use RAID-DP to protect the data on it.

With SATA you will need to use RAID-DP because the reliability of SATA is not that great. The drive itself will not die, but data integrity is an issue with cheaper media (surface error rates!).

NFS is a good option; I've used it myself.

I would not use a larger RAID group with NFS; 7 drives per group would be better, and do not put swaps on it.

http://blog.laspina.ca/ vExpert 2009
urgrue
Contributor

"I would not use a larger RAID group with NFS; 7 drives per group would be better, and do not put swaps on it."

I have heard this "don't put swaps on it" mentioned many times, but never a real explanation of why. Would you happen to know?

mike_laspina
Champion

There are two major reasons for this, and the first one is critical.

1. NetApp will fragment very quickly when you combine snapshots and temporary working files in the same store point as VM file sets.

2. Swap files, when active, will generate serious amounts of I/O, which will hammer a SATA RAID-DP array.

You would want to create a separate store for swap files.

http://blog.laspina.ca/ vExpert 2009
mmurrin
Contributor

It sounds like we have a very similar setup to the original poster's. We just purchased a FAS2020 with 2 SATA shelves. Most of the VMs we plan on putting onto the SAN are not very I/O intensive at all, except for the file servers. Right now we have one physical file server for our whole company, about 300 users. I am thinking of breaking the file server up into 2 servers and having the data for the servers on different aggregates. Does this sound like a good plan?

Also, it was never mentioned to me when purchasing the FAS2020 that it would need to be defragmented regularly. How do you go about doing that, or do you know of any documentation on defragmenting a NetApp filer? We were just told it was going to be low maintenance and that once we were done with the initial setup it would just run.

mike_laspina
Champion

Salespeople rarely tell you the downside of any product. It's not a showstopper. It can be more complex to correct a fragmentation problem in some configurations if the RAID groups are larger, because in some cases the temporary I/O storage needs to be isolated off to a separate RAID group. The wafl scan relocate function is the defrag mechanism, and you can schedule an online defrag from the GUI; I don't have my hands on a NetApp system currently, so have a look around for it. For the most part, a low-I/O system will not have any issues with fragmentation, and it does not usually become an issue until space usage is above 60%.

http://blog.laspina.ca/ vExpert 2009
mmurrin
Contributor

Thanks for your reply; it is very helpful.

jsykora
Enthusiast

I've got a FAS2020 with a set of 12 internal 300 GB SAS drives and no shelves for now.

Have you considered purchasing a CIFS license for your NetApp and moving your file server roles there? Lots of benefits from what I've seen. I know of a fellow using one of his FAS 3020 heads with 5 SATA trays to serve up file server roles quite well. I'll be migrating all my file server roles over to CIFS on the NetApp soon also.
