We are currently testing ESXi on our IBM HS21 blades. Most of it seems to run fine; the only big problem is the write performance to the local disks.
The same server performed very well under openSUSE 10.2.
ESXi is the latest software version.
The internal 146 GB SAS hard drives are configured as RAID 1 via the integrated LSI 1064e RAID controller.
I did the following to test performance:
1. Copy a big file via scp to the ESXi host: 3.6 MB/s
2. Read the same file back: 15.5 MB/s
The same test under openSUSE yields 29.1 MB/s and 26 MB/s.
Of course scp is not the best tool to measure performance, but it can at least show whether a disk is fast or not.
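To take scp (and its encryption overhead) out of the picture, a plain dd run from the console gives a rougher but more direct number. This is only a sketch: the datastore path is a placeholder, and the busybox dd shipped with ESXi may not support conv=fsync (if not, drop it and run sync after the write).

```shell
#!/bin/sh
# Rough sequential write/read check with dd.
# On ESXi you would set TESTDIR to a datastore path,
# e.g. /vmfs/volumes/<datastore-name> (placeholder).
TESTDIR=${TESTDIR:-/tmp}
FILE="$TESTDIR/ddtest.bin"

# Write 64 MB of zeros and flush to disk; GNU dd reports the elapsed rate.
# Note: busybox dd on ESXi may lack conv=fsync -- use a trailing sync instead.
dd if=/dev/zero of="$FILE" bs=1M count=64 conv=fsync 2>&1

# Read the file back.
dd if="$FILE" of=/dev/null bs=1M 2>&1

rm -f "$FILE"
```

With a 64 MB file the cache distorts the read number less than with small files, but a file larger than RAM would be even more honest.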
I assume many users run ESXi on HS21 blades, so I'd like to know how the system performs on your installations.
Thanks and greetings,
Why are you writing to the same disk the operating system is on in the first place? You're writing into a 32 MB hypervisor with scp, and you wonder why it's slow? Blades use external storage for almost everything. I don't know anyone using them for virtualization who keeps anything on the local disk except the ESX install itself. "But it can at least show if a disk is fast or not" -- not in this case. You are comparing a hypervisor to a standalone operating system (like Windows or SUSE). Why would you expect more from a 32 MB operating system?
To help you out, though: I consistently get around 7 MB/s with scp on my workstation, so you can't really determine much from it.
We attach our HS21s to a SAN; performance under ESX is certainly not slow by any stretch.
Thanks for your answer.
I see your point, but the following information should make my question clearer.
1. We use FC SAN systems as well. This is something I haven't tested yet under ESXi, but there are some servers that cannot be attached to the SAN for certain reasons, and these servers should run ESXi nevertheless.
IBM offers a storage expansion blade especially for this purpose. You can use it in a RAID 5 configuration, and the system shows the same behaviour as with the internal disks.
2. OK, let's assume scp simply performs badly under ESXi.
I ran the same tests with a mounted NFS share: I copied a vmdk file from the NFS mount to the ESXi datastore on the internal disks.
It is much too slow when accessing one of the internal disks.
And NFS should perform well, since it is one of VMware's supported ways to access datastores.
Do you really think they left out driver tuning for the internal hard disks because most customers use blades with SAN devices?
Do the rest of you share William's opinion that it doesn't make sense to have ESXi store its data on the internal drives?
Thanks a lot,
That was rude of me...I apologize.
Here's the thing: the hypervisor does one thing, manage resources. It does not have the built-in wherewithal to be a full-fledged operating system, so it's not going to act like one.
To the second part: yes, the BladeCenter disks are fairly quick SAS kit. Install Windows or Linux directly on the blade and it will give you the numbers you quoted. But here's the key point: put a VM on the ESXi host, point it at the attached disk, and it will ALSO give you those numbers. Surprisingly, the VM will be faster than the hypervisor it runs under. The hypervisor itself is not built to be a file server; it merely controls virtual machines.
Yes, it was!
Thanks again for the enlightenment. I will give it a try and report back. If it really works that way, I'll be reassured about my VMs' speed.
One problem will remain:
I planned to transfer my current VMware Server VMs to the new ESXi machines via NFS. If it is that slow, it will take forever.
What is the best and fastest way to transfer VMs to an ESXi server?
Thanks and greetings,
You could always use the Virtual Center client: under the host's Summary tab, double-click the LUN (or in this case the disk). In the top band you'll see what look like two hard-drive icons, one with an arrow pointing up and one pointing down; these are the "copy from" and "copy to" buttons. I find this much faster than an external process (NFS/scp).
In any case, there is a risk in using the internal drives, even in a RAID 1: if you lose a drive, you have to power down, remove the blade, insert a new drive, re-insert the blade, and so on. It gets a bit tedious. If there's any way to move this to external storage, that would be best.
Not with the Expansion Blade, though: its drives are accessible and hot-swappable.
I don't need HA for some servers, so powering down is perfectly acceptable there.
For the others I use the SAN, which is, as you rightly say, the best solution.
I will report back some performance specs next week.
I hope you are right about the VM speed.
Have a nice weekend.
You could use VMware Converter to do a V2V migration. It works well, and you can even reconfigure the VM in the process if you need to (e.g. change the number of vCPUs or the disk sizes).
Technical Director, Virtualization
VMware Communities User Moderator
Unfortunately, there was absolutely no difference between the native speed under ESXi and the speed within the VMs.
I ran tests under Debian and under Windows (using Iometer) with the same results: 6 MB/s.
The same test under VMware Server 1.x delivers about 130 MB/s.
How can this be?
Is nobody else using ESXi on the internal drives of an HS21?
Given this situation, we will probably continue using VMware Server.
Or are there any other suggestions?
We have an IBM HS21 BladeCenter. We've installed ESX Infrastructure without any problems, but we have not been able to install ESX 3i on another blade. We always get:
The installation operation has encountered a fatal error:
Unable to find a supported device to write the VMware ESX Server 3i 3.5.0 image to.
We are seeing a similar issue with our servers. I am getting very slow file upload speeds to VMware, e.g. when uploading an ISO image or a VM from another server. We have a VM of ~160 GB living on another server, and copying it over at 2 MB/s is quite painful. I get ~2-3 MB/s upload using scp, and about the same using the datastore browser or VMware Converter.
Details of our setup:
IBM x3550 servers (8 cores, 32 GB RAM) connected with two Gigabit Ethernet ports, with NIC teaming and VLANs on the VMware side and EtherChannel plus VLAN trunking on the Cisco switch side. You'd think this would provide excellent performance. I've confirmed this is not a Cisco/switch issue, as I've tested with a direct crossover cable with the same results. Running VMware ESXi 3.5 U2.
On the storage side, we have an IBM DS3300 iSCSI SAN, connected with two GigE ports to the same switch.
When I upload files to either the SAN or local server storage (15k RPM SAS drives), using either scp or the datastore browser, I get ~2-3 MB/s. If I ssh into the server (unsupported console mode) and copy a file on the SAN, I get ~3 MB/s (i.e. cd /vmfs/volumes/sanvolume; date; cp filename filename.new; date). When I perform the same test on the local hard drive datastore, I get ~18 MB/s, which is fine.
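The date/cp/date method above can be wrapped into a small script that prints an approximate MB/s figure, so repeated runs are comparable. This is only a sketch: it copies a self-created sample file under /tmp, whereas on ESXi you would point SRC and DST at real files under /vmfs/volumes/... instead.

```shell
#!/bin/sh
# Rough MB/s for a single-file copy (sketch of the date; cp; date test).
# Paths are placeholders -- on ESXi, use files under /vmfs/volumes/<datastore>.
SRC=/tmp/copytest.src
DST=/tmp/copytest.dst
dd if=/dev/zero of="$SRC" bs=1M count=8 2>/dev/null   # 8 MB sample file

START=$(date +%s)
cp "$SRC" "$DST"
END=$(date +%s)

SIZE_MB=$(( $(wc -c < "$SRC") / 1048576 ))
ELAPSED=$(( END - START )); [ "$ELAPSED" -gt 0 ] || ELAPSED=1
echo "copied $SIZE_MB MB in $ELAPSED s (~$(( SIZE_MB / ELAPSED )) MB/s)"
rm -f "$SRC" "$DST"
```

One-second timer resolution makes small copies unreliable; for real measurements, use a file of at least several hundred MB so the elapsed time dominates the rounding error.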
I haven't had a chance yet to run benchmarks on the VMs themselves; maybe this issue is confined to the VMware hypervisor itself and not the VMs running on it. In any case, it seems absurd that I can only get 3 MB/s uploading files. I noticed Veeam has a FastSCP product that is supposed to help, but a version with support for ESXi has not been released yet. I loaded CentOS on one of the servers and was able to get great network/disk performance, quickly tested using iperf and scp'ing some files up.
I'm having the same performance issue with a Dell PE2800 running ESXi 3.5 Update 2.
I only get a 2.5-3 MB/s transfer rate.
I'm trying to move a VM from one ESXi host to another over Gigabit Ethernet.
I've tried scp and VMware Converter; both give terrible performance.
I also tried to upload files within the "Datastore Browser" with the same result.
This must be a performance bug of some kind.
Does someone have a solution to this problem or maybe a workaround?
Anyone? I have an open call with VMware tech support, and I did get a performance boost within a VM guest with the following change at the command line:
esxcfg-module -s iscsi_max_lun_queue=64 iscsi_mod (the default is 32)
This gave me an approximately 25% performance increase within a Windows Server 2003 VM (from ~21 MB/s sequential writes to ~28 MB/s).
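For anyone trying the same tuning, a console fragment sketching how to apply and verify it; this assumes the -s (set options) and -g (get options) behaviour of esxcfg-module on ESX 3.5, and the option string is exactly the one support supplied above.

```shell
# Raise the iSCSI per-LUN queue depth from the default of 32 to 64
# (option string as supplied by VMware support for ESX/ESXi 3.5):
esxcfg-module -s iscsi_max_lun_queue=64 iscsi_mod

# Verify which option string is currently set for the module:
esxcfg-module -g iscsi_mod

# The new setting applies when the module is next loaded,
# which in practice usually means after a reboot.
```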
However, I am still seeing slow performance when copying files on the iSCSI volume, either on the command line or from another host using scp (or even Veeam). I am not seeing these issues with an identical server loaded with RHEL5 and Xen, at either the OS level or the guest VM level (~50 MB/s writes under a Windows 2003 guest on Xen, same hardware all around). Still working with support, but I would love other suggestions from the community.