I have been reading threads all over the Internet for two days, and every discussion of this gets into the weeds and goes nowhere... and usually ends with "use Veeam", which didn't really work any better when I tried it.
Please, before you respond, read the whole post and let it sit for a moment or two. As I said... I've found dozens of posts about the same thing, and they all take the same path... which looks like a bunch of overly smart guys not thinking through the question, just responding to pad their post counts. I'd like this to be a meaningful thread that finds answers for me... and for all those who've asked before and ended up with nothing.
Background
I am migrating a small environment from ESXi 4.0 & 4.1 to new hardware running ESXi 5.1. I bought the VMware Essentials package so I could utilize the full 48 GB in the new hardware... assuming it would also fix my upload speed issue... but it did not (I could see a free product being throttled, but keeping that throttle once licensed is unethical).
Issue
Uploads of large VM disk files, downloaded from the old 4.1 environment, to the new 5.1 environment are stupidly slow. I have done the following to narrow this down to "it must be VMware 5.1 throttling these uploads, even though it is now licensed". But that seems crazy, immoral, and against the law (or it should be).
So I'm asking the community to avoid the off-the-cuff answers and dig in on this one, because from what I see, this impacts a bunch of people and no one seems to have a valid answer. For me, I have a couple hundred GB to migrate and don't want to take three months of weekends to get it done when I should be able to do it in a couple of weekends.
Findings
1: File copy from a VM on ESXi 4.1 to a VM on ESXi 5.1 screams (maxes out the 1 Gb NIC)
2: File copy from a VM on ESXi 5.1 to a VM on ESXi 4.1 screams (maxes out the 1 Gb NIC)
3: Download of a file from the datastore on ESXi 4.1 to a local workstation on the same network screams (maxes out the 1 Gb NIC)
4: Upload of a file from a local workstation to the datastore on ESXi 4.1 on the same network screams (maxes out the 1 Gb NIC)
5: Download of a file from the datastore on ESXi 5.1 to a local workstation on the same network screams (maxes out the 1 Gb NIC)
6: Upload of a file from a local workstation to the datastore on ESXi 5.1 on the same network BLOWS CHUNKS the size of Texas (uses only 6-8% of the 1 Gb NIC)
Since items 1-5 work exceptionally well... my disks are fine, my drivers are fine, and my config must be fine too... on both platforms. It has to be something in 5.1 that is throttling uploads... even when licensed. Veeam doesn't help.
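To put numbers on finding 6, here is a back-of-the-envelope sketch (it assumes a raw gigabit ceiling of roughly 125 MB/s and ignores protocol overhead):

```python
# Rough arithmetic behind the findings: what 6-8% of a gigabit link means.
GIGABIT_MBS = 1000 / 8  # 1 Gb/s ~= 125 MB/s theoretical ceiling

def utilization_to_mbs(percent):
    """Convert NIC utilization (%) to an approximate transfer rate in MB/s."""
    return GIGABIT_MBS * percent / 100

# Items 1-5 run near wire speed; item 6 crawls along at roughly:
print(f"{utilization_to_mbs(6):.1f}-{utilization_to_mbs(8):.1f} MB/s")  # 7.5-10.0 MB/s
```

At 7-10 MB/s, a couple hundred GB really does stretch into many weekends, which matches the complaint above.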
What says the community???
Thanks Everyone!!
Mike
PS... For those who simply must know before the neurons can fire:
The new 5.1 servers are Dell R420 systems configured as follows:
- 2 x 6-core Xeon E5-2420 1.9 GHz 15 MB cache processors
- 48 GB 1333 MHz RDIMMs
- 4 x 7.2K 1 TB Near-Line SAS 6 Gbps drives in a RAID 10
- PERC H310 RAID controller
- Onboard Broadcom 5720 NICs
The systems won't support dozens of VMs each, but they should be able to bury the NIC with a simple file copy... which they can do in every configuration except one (see items 1-6 above).
Thanks
Sorry this is not an answer to your question, but I'd like to know why this isn't working as well.
I'm in a similar situation to you. I'm trying to simply copy some VMs between two ESXi hosts and can't seem to get any faster than 6 MB/s over a Gigabit network, even when the two hosts are on the same switch and all the VMs can fully utilise Gigabit speeds from within their operating systems.
What on earth is going on VMware? We've paid for our licenses and we are still getting slow speeds!
How do you copy the files?
With the vSphere GUI - browsing the datastore and choosing upload?
Are you using the Windows vSphere Client or the vSphere Web Client?
With SCP?
Directly to the storage over NFS/iSCSI?
// Linjo
I am using the Browse Datastore method in the vSphere Client running on Windows. I select upload file and it takes forever. I can download the exact same file in the same manner at full network speed (download is a full 10 times faster than upload).
I have also tried using FileCopy through Veeam with the same outcome.
The servers are remote (by 800 miles) and I'm not going to drive up there to enable SCP or FTP or anything. I did not have this issue with the 4.1 servers, so I was not expecting any issue with 5.1.
I've also read numerous threads where FTP and SCP were tried and may have helped a tiny bit, but did not fully fix the issue. It was a small bandaid on a large wound.
So I don't think it is the method of upload that is the root problem.
The upload from the browse menu is not designed for high-performance transfer; for example, everything gets encrypted with SSL during transfer.
To get better performance, try uploading directly to the storage with NFS/iSCSI instead.
// Linjo
I'm using local DAS, so I can't use NFS or iSCSI to attach directly to storage and bypass VMware's weak implementation of file copy.
Common sense tells me that while bypassing the problem might look like a fix... it is not... it is only a workaround... and one that is not available to everyone.
Additionally, I'm not buying the "wasn't built for high performance transfer" thing. I'm not asking for high performance at this point... simply copy one file at a reasonable speed.
It can do it in download... so why not in upload??
Version 4.1 can do it in both upload and download. So why can't 5.1 do it in upload?
Chasing obscure workarounds is what I've seen in every other thread I've found on this topic, and none of them reached resolution. I'd like to see this thread either reach resolution... or at least achieve widespread consensus that this is a bug VMware needs to fix... and not be apologists for them.
Sorry that my suggestion was not what you were looking for, just tried to assist...
You should create a support ticket with VMware support to get the official answer, or file a feature request, since it's not certain that you will get the answer you are looking for here.
// Linjo
I've got a PERC H310 also, in my Dell T320, and I'm seeing the same problems. I'm hoping to get hold of the H710, which has onboard cache, and see if it makes the difference. It's still all very confusing, considering none of my virtual machines on the ESXi host with the Dell PERC H310 have any problems transferring data at Gigabit speeds.
Precisely!! Why can the virtual machines go at gigabit speed in both directions, while the Datastore Browser only goes at gigabit speed on download and its upload speed is abysmal? That defies being hardware related, as the host server can saturate the link from a VM yet can barely register traffic when using the Datastore Browser.
And if you look at the test results in my original message... this "feature" was introduced in v5, as my v4.1 servers move data up and down through the Datastore Browser without restriction.
Thanks for speaking up and verifying you see the same issue. I will be extremely interested in knowing how your H710 works.
I am currently deploying a Dell PowerEdge T430 with a PERC H310 controller and experiencing the same issue.
I've been looking into it for days now, but still haven't managed to solve the problem.
This is driving me crazy.....
I'm having similar slow-upload issues with ESXi 5.0 on one of my customer's IBM servers. They have two identical servers, both with IBM direct-attached storage; one of them uploads fast and the other one is slow.
Transferring files to a Windows VM located on the slow server is fast for the first 5 or 10 GB. From what I have read, Windows caches writes in memory first and then flushes them to disk. When I start the transfer, I can see the memory usage on the target Windows VM climb from 2 GB to 6 GB; as soon as the transfer completes, the memory usage drops again. The first few GB of the transfer go at around 50 MB/s, then it slows down to around 3 MB/s.
Copying files on that storage from one folder to another via the VI client browser is also slow; downloading the files from the storage back to Windows is fast.
With the other server transferring files to and from the storage is fast regardless of what I use, vi client, Veeam, windows VM.
The slow server also has another direct attached unit connected to it via a different host adapter, uploading to that storage is fast.
At this stage I believe that either the write cache isn't enabled on the RAID controller that is connected to the IBM storage, or that it's set to write-through. I'll do some testing in a few days and let you know the outcome; it has an IBM MR10M controller with battery backup.
Have you found a resolution to this issue? I have noticed the same issue when working with my test server which is also running 5.1.
Hello,
maybe I'm being a little presumptuous, but the problem is with one of your network cards. I just cannot say whether it's the NIC in your workstation or the Broadcom from Dell.
Why do I believe this? See an old post of mine about VMware Converter and a Broadcom NIC (http://communities.vmware.com/thread/330060).
A couple of minutes ago I also tried to reproduce your problem with my slower ESXi server (32 GB RAM, E3-1220 CPU), my current file server, and my old file server from 2010, which tops out at 33 MByte/s (it should be disposed of, but still works...).
Here my Results, see attached Screenshot:
Current file server --> ESXi 5.1: 60 MByte/s, peak 88 MByte/s
Old file server --> ESXi 5.1: 27 MByte/s, peak 33 MByte/s
Current file server --> old file server / Windows 2012 --> Windows 2003: 33 MByte/s according to Task Manager.
Meaning: my old file server is the slowest part of this environment.
NICs:
ESXi 5.1: Intel 82574L
File server 2013: Intel 82574L
File server 2010: Broadcom NetXtreme
My suggestion: if you have a certified Intel NIC around (see the VMware HCL), put it into your ESXi servers and test with it.
As I only have Intel or Broadcom NICs and no cheap RTL or Atheros NICs, I can only assume that the NIC is your issue.
Regards,
Josip.
RE kozzy30320111…
Your issue could be a disabled / missing / defective battery for the write cache.
I deal only with HP servers. They have the SmartArray P212/P410, or the newer P222/P420, as the default RAID controllers.
When the battery or flash write cache (BBWC or FBWC) is disabled, defective, or missing, I see an upload rate to the ESXi datastore of 3 MByte/s, while a SATA HDD attached directly to the on-board SATA controller does over 60 MByte/s.
After adding the battery to the RAID card, everything was fine: uploads of big files to a datastore residing on the RAID controller normally exceed 80 MByte/s.
Regards,
Josip.
We checked the volumes today in the RAID manager. All the volumes were set to write-back with BBU (battery backup), the RAID manager says the battery backup is present, and it doesn't report any issues. We changed the volumes to "always write back" and the storage is now fast.
I will get in touch with IBM to see if this is a bug. The firmware of the controller was upgraded recently to try to fix this slowness, but it didn't make any difference. I know it's not recommended to leave the volumes on "always write back" because of the data corruption that can occur on a power failure if the BBU is faulty or missing, but this is their DR server and it's useless if Veeam cannot write to it at a reasonable speed.
For others with the same issue: test your storage first and make sure it really isn't the problem. If you are testing by sending data to a Windows VM located on the storage, send it a 50 GB file and see whether the transfer speed is maintained. For me the speed would drop off after 10 GB had been transferred; towards the end of the transfer it would only be moving at 6-8 MB/s.
Try a product like Veeam to upload to the storage and see if that makes any difference.
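The storage test suggested above can be scripted. Here is a hypothetical sketch (the file name and sizes are made up): it writes fixed-size blocks with an fsync after each one and reports per-block throughput, so a write cache filling up shows as a sharp drop after the first few blocks.

```python
import os
import time

def chunked_write_speeds(path, chunk_mb=64, chunks=8):
    """Write `chunks` blocks of `chunk_mb` MB each, forcing every block to
    disk with fsync, and return the MB/s achieved for each block."""
    block = os.urandom(1024 * 1024) * chunk_mb  # chunk_mb MB of random data
    speeds = []
    with open(path, "wb") as f:
        for _ in range(chunks):
            start = time.monotonic()
            f.write(block)
            f.flush()
            os.fsync(f.fileno())  # don't let the OS cache hide the real rate
            speeds.append(chunk_mb / (time.monotonic() - start))
    os.remove(path)
    return speeds

# Example (run inside a VM on the suspect storage):
# for i, mbs in enumerate(chunked_write_speeds("testfile.bin", 256, 40)):
#     print(f"block {i}: {mbs:.1f} MB/s")
```

If the first few blocks are fast and the rest crawl, the write-back cache (or its battery) is the first thing to check.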
I have had the same issues, and apparently it's not a bug but rather something to protect the host. As a workaround I used a large SATA disk to copy the files locally, then popped the drive into the receiving server and copied them over there locally. That brought my transfer of the VMs down from 14 hours to 3.
I had the same issue with my test server - uploading to the datastore via the vSphere Client is VERY slow. I enabled SSH on the ESXi host, ran a temporary HTTP server on my Windows desktop (Mongoose), and downloaded my 150 GB image to the datastore with wget. I hope this helps someone.
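For anyone wanting to try that workaround without Mongoose, a minimal sketch using only Python's standard library (the directory, port, datastore path, and file names below are placeholders; the `directory` argument needs Python 3.7+):

```python
import functools
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_directory(directory, port=8000):
    """Serve `directory` over HTTP in a background thread and return the
    server object (call .shutdown() once the transfer is done)."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    server = HTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# server = serve_directory("C:/exported-vms")
# Then, from an SSH session on the ESXi host, pull straight into the datastore:
#   cd /vmfs/volumes/datastore1/myvm
#   wget http://<workstation-ip>:8000/myvm-flat.vmdk
# server.shutdown()
```

Running `python -m http.server` from a command prompt in the image folder achieves the same thing without any script.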
I have the same problem.
Server: R310
Intel Xeon X3440
RAID 1, 1 TB and 2 TB
Broadcom NetXtreme II BCM5716
I copy to the server with SCP and the transfer falls from 48 Mb/s to 10 Mb/s after a couple of minutes. I checked the driver and it is the latest one.
I changed the switch, doubting it, with the same result. I also checked the transfer against another R620 on ESXi 5.1 and it goes fast (48 Mb/s).
I appreciate any help.
Regards.
It has high-performance download... why not upload? What is the difference in direction?
How do you use wget to access the VMs in the datastore? I am in the same boat: I have 1.5 TB of VMs on a local ESXi server that I need to move to another in a datacenter 1,000 miles away.
BTW, I have tried using ovftool and scp. They are very slow - around 300-500 KB/s.
