admin
Immortal

I/O performance of vSphere

I am a senior performance engineer at VMware. I have spent quite a bit of time on performance analysis of the ESX storage stack, starting from ESX 3.0, and we at VMware are constantly working on improving the I/O performance of ESX. I have run a few experiments to drive extremely high I/O load on a single instance of ESX. With ESX 3.5, I obtained 100,000 IOPS with an I/O load representative of real applications, until I ran out of hardware. The performance envelope was pushed further with vSphere, when we achieved 350,000 IOPS in an experiment done at EMC labs. I wrote a post highlighting the results, with some details on the experiments, on VMware's performance blog, VROOM.

Having a well-configured I/O system is critical for good application performance. Very often I hear questions from customers on storage performance, the choice of virtual disk format (VMFS vs. RDM), best practices, and so on. The answer is simple: follow the same best practices you would follow in the native world when designing an I/O infrastructure for your application. ESX provides excellent I/O performance and can support even extreme I/O demands from applications, as the results discussed in the blog posts indicate. As for VMFS vs. RDM: you can expect similar performance from both, though RDM can help in certain scenarios that are purely non-performance related.

We can discuss more in this thread. Feel free to post your questions, comments on the blog posts, or any I/O-related issues here, and I will try my best to respond. Maybe someone who has already faced a similar situation will jump in with a solution that even we at VMware wouldn't have thought of!

Chethan

dconvery
Champion

Jas -

Give it up! What's the caveat for Linux? I have some customers with sizeable Linux environments too. Also, is mbrscan a freeware tool? I don't work for a NetApp partner right now.

Duncan -

That's what I thought. If your data is on an aligned drive, there is probably no reason to worry about the system drive, but I wanted some other opinions as well.

Dave Convery, VCDX-DCV #20

VMware vExpert 2009

http://www.tech-tap.com ** http://twitter.com/dconvery

"Careful. We don't want to learn from this." - Bill Watterson, "Calvin and Hobbes"
jasonboche
Immortal

Jas -

Give it up! What's the caveat for Linux? I have some customers with sizeable Linux environments too. Also, is mbrscan a freeware tool? I don't work for a NetApp partner right now.

Sometimes (most of the time) after running NetApp's MBRALIGN on Linux VMs, the VM is no longer bootable: it hangs immediately at a GRUB prompt. There is a repair procedure that involves booting from a Knoppix CD and running a few commands to repair the boot loader of the affected Linux VM; I've been through it dozens of times. Once the boot loader is repaired, the Linux VM will boot and its partitions will be aligned. I have seen rare cases where the fix does not actually work and you have no choice but to revert to the backed-up .vmdk files that mbralign automatically creates for you. At that point, you can try the align process again or give up. The align process takes a while, and the time varies with the size of each .vmdk file, of course. On average, I see a 50 GB .vmdk file aligned in half an hour or less.

- Once the Knoppix CD has booted, from the 'boot>' prompt type 'knoppix 2' and hit RETURN.

- From the command line, type 'grub' to get to the GRUB prompt.

- Run "find /boot/grub/stage1" and note all of the drives it finds (e.g., "(hd0,0)").

- From the GRUB prompt, for each drive, run the following:

grub> find /boot/grub/stage1
 (hd0,0)
 (hd1,0)
 (hd2,0)
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
 succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
 Done.
grub>

- You can batch-process multiple boot drives: just attach all of the drives you wish to fix to a dedicated Knoppix appliance that boots from the Knoppix CD.

You should be able to download the NetApp tools. Just create yourself a now.netapp.com account and download the mbrscan and mbralign tools:

http://communities.netapp.com/docs/DOC-2563

Don't forget that MBRALIGN creates backups of your .vmdk files, which will chew up double the amount of storage you use, so go back and delete those backup files once you've determined the alignment is a success.

Duncan -

That's what I thought. If your data is on an aligned drive, there is probably no reason to worry about the system drive, but I wanted some other opinions as well.

I align all drives as a best practice: not as a performance benefit for the individual VM, but as a performance benefit for the storage array that all the VMs point back to. Alignment ensures that an I/O which would otherwise straddle an array block boundary, and therefore cost two back-end I/Os, is satisfied by a single I/O. Multiply that saving by the hundreds or thousands of I/Os each VM issues and you start to see a little performance increase for the VM. Now multiply that saving by X number of VMs on each LUN, disk group, etc., and you can see how aligning C: drives collectively improves the maximum performance you can squeeze out of that LUN, disk group, or storage array. This is an example of where the value of the savings is greater than the sum of its parts.
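If you want to check whether a guest's partitions are aligned before bothering with mbrscan/mbralign, here is a minimal sketch for a Linux guest (the device name /dev/sda and the 64 KB boundary are examples only; adjust for your array's stripe size):

# List partitions with start offsets in sectors (512 bytes each).
fdisk -lu /dev/sda

# A partition starting at sector 63 is the classic misaligned default:
# 63 x 512 = 32,256 bytes, not a multiple of 65,536 (64 KB).
# An aligned partition starts at sector 128 (64 KB) or 2048 (1 MB).

# Quick arithmetic check of a start sector:
expr 63 \* 512 % 65536     # prints 32256 -> misaligned
expr 2048 \* 512 % 65536   # prints 0     -> aligned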


Jason Boche, vExpert

boche.net - VMware Virtualization Evangelist | http://www.boche.net/blog/

VMware Communities User Moderator | http://www.vmware.com/communities/content/community_terms/

Minneapolis Area VMware User Group Leader | http://communities.vmware.com/community/vmug/us-central/minneapolis

VCDX3 #34, VCDX4, VCDX5, VCAP4-DCA #14, VCAP4-DCD #35, VCAP5-DCD, VCPx4, vEXPERTx4, MCSEx3, MCSAx2, MCP, CCAx2, A+
dconvery
Champion

VERY NICE! Thanks Jas. I always forget about Knoppix (I have the latest DVD image) and always revert to DSL or a RHEL rescue CD.

Dave Convery, VCDX-DCV #20

VMware vExpert 2009

http://www.tech-tap.com ** http://twitter.com/dconvery

"Careful. We don't want to learn from this." - Bill Watterson, "Calvin and Hobbes"
hstagner
VMware Employee

I have aligned the system partition (and created a template from an aligned VM) in the past as well; I have even written an article on it. Then it occurred to me, after reading the VMware whitepaper again, that I must be missing something. Why does this question (how to align C: drives) keep coming up when the VMware whitepaper on partition alignment has this note:

Note: Aligning the boot disk in the virtual machine is neither recommended nor required. Align only the data disks in the virtual machine. The following sections discuss how to align guest operating system partitions in Linux and Windows environments.

What am I missing, guys? I found another thread (can't find it at the moment) claiming that aligned C: drives may cause problems with VSS on certain arrays. Is this true? Does anyone know which arrays have this issue? Should the best practice going forward be to not align the C: drive at all, because of the note in the VMware whitepaper? I know I have more questions than answers; I am just trying to get to the bottom of this.

Don't forget to use the buttons on the side to award points if you found this useful (you'll get points too).

Regards,

Harley Stagner
VCP3/4, VCAP-DCD4/5, VCDX3/4/5
Website: http://www.harleystagner.com | Twitter: hstagner
LucasAlbers
Expert

Windows 2008 automatically aligns drives, starting them at a 1,024 KB (1 MB) offset.

Aligning Windows 2003 was tedious: set the alignment with diskpart, format the partition, then install Windows onto the existing, pre-aligned file system (see the sketch below).
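For anyone who has to do this by hand, a minimal sketch of that procedure, run from a second VM or WinPE against the blank disk (disk number 1, drive letter E:, and the 1,024 KB offset are examples only; diskpart's align parameter takes a value in KB):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> create partition primary align=1024
DISKPART> assign letter=E
DISKPART> exit
C:\> format E: /FS:NTFS /Q

You can verify the offset afterwards from inside the guest:

C:\> wmic partition get BlockSize, StartingOffset, Name, Index

StartingOffset should be evenly divisible by your array's boundary, e.g., 65536 for 64 KB.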

cxo
Contributor

The instructions for MBRALIGN note that there may be issues on bootup with Linux and Solaris VMs running GRUB. The instructions are quite clear on how to remedy this for Linux (as documented here as well), but they leave out the analogous step-by-step instructions for Solaris.

So, the question I have is: has anyone used MBRALIGN on a Solaris 10 VM, and what were the caveats and solutions to such issues?

Charlie

za_mkh
Contributor

Great discussion, especially as we are busy implementing our VMware infrastructure on our new SAN.

I do have a question/solution. Since it is a tedious process to align the system drive on a Windows 2003 (etc.) box, would the following scenario be valid?

1) In an existing VM, add a new VMDK (say 15 GB) and align this disk as per normal; I don't format it.

2) I then use this VMDK as the system drive for a new VM and configure/format as necessary. I then convert this VM to a template, e.g., a Windows 2003 Standard template.

3) Would all subsequent VMs cloned from this new template also have their drives aligned, or would I need to go down the Knoppix route to solve this?

If the above solution does work, it could be an easy fix for new VMs, but we would still have to go through the pain for the existing VMs.

hstagner
VMware Employee

Hello za_mkh,

I did write an article on this in January.

Basically, I used a WinPE disk to align a system partition, then I created a template. Every VM cloned from that template will be aligned.

However, now I question whether this should be done at all. As I said above, this note is in the disk alignment whitepaper from VMware:

Note: Aligning the boot disk in the virtual machine is neither recommended nor required. Align only the data disks in the virtual machine.

I hope this helps.

Don't forget to use the buttons on the side to award points if you found this useful (you'll get points too).

Regards,

Harley Stagner
VCP3/4, VCAP-DCD4/5, VCDX3/4/5
Website: http://www.harleystagner.com | Twitter: hstagner
za_mkh
Contributor

Thanks Harley,

That makes sense, and it means our workload will be made easier. Since we are re-architecting our virtual infrastructure, this comes at a good time!

Many thanks once again

za_mkh

dconvery
Champion

So... since VI3 only aligns VMFS volumes created via the VI Client, how about ESX 4? It requires a VMFS volume at install time for the COS VM. Is that volume aligned?

Dave Convery, VCDX-DCV #20

VMware vExpert 2009

http://www.tech-tap.com ** http://twitter.com/dconvery

"Careful. We don't want to learn from this." - Bill Watterson, "Calvin and Hobbes"
smalldust
Contributor

Hi,

I noticed that in the blog article named "350,000 I/O operations per Second, One vSphere Host", you said:

"Instead only 30 EFDs housed in three CX4-960 arrays provided enough storage bandwidth for vSphere to drive just above 350,000 I/O requests per second."

I am wondering why you needed three arrays, and whether there is a way to calculate how many storage arrays are required for a given IOPS target.

Is there any data available on the EMC website?

Thank you.

admin
Immortal

To support the number of I/O operations we generated, we required a large amount of I/O bandwidth, which meant multiple I/O paths. Since we didn't have FC switches at our disposal, we had to use direct links. We kept adding I/O paths until we hit the 350,000 number. At that stage we had twelve 4 Gbps FC links, and with 4 FC ports per array, that meant three arrays.
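As a rough back-of-the-envelope check (the 8 KB I/O size here is purely an assumption for illustration; the blog post has the actual I/O mix, and a 4 Gbps FC link carries roughly 400 MB/s of payload after 8b/10b encoding):

12 links x ~400 MB/s = ~4,800 MB/s of aggregate path bandwidth
350,000 IOPS x 8 KB  = ~2,800 MB/s of I/O payload

So twelve direct 4 Gbps links comfortably cover the payload, and at 4 FC ports per CX4-960 that works out to three arrays.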

I am sure EMC has sizing guidelines for designing a storage infrastructure based on I/O requirements. You can check the EMC website or talk to a local EMC rep.

Chethan
