VMware Cloud Community
sel57
Enthusiast

Copying RDM to (same) RDM, over fibre or ethernet?

If I'm logged into a Windows Server 2012 VM (residing on an ESXi 6 host with two teamed 1 Gb NICs) that has an RDM attached and labeled as the D: drive, and I'm copying files from D:\Data1 over to D:\Data2, would that data travel over fibre or copper? I was going to check whether the speeds were appropriate, then realized I didn't actually know how the data travels. (The speeds seem low to me regardless, but I'd still like to understand it better.)

If there is a better sub-community for this post, my apologies. Thanks!

9 Replies
rcporto
Leadership

How is that RDM disk connected to your virtual machine/ESXi host? Either way, the copy itself is performed inside the guest OS, but once the writes are committed to the virtual disk they go out to the backing storage, and the path that traffic takes depends on how the RDM is mapped to your virtual machine. Are you using iSCSI or Fibre Channel?
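If you're not sure how it's mapped, a quick way to check is to list the storage paths and adapters on the host (a minimal sketch, assuming SSH/ESXi Shell access; path identifiers begin with the transport type, e.g. fc. for Fibre Channel or iqn. for iSCSI):

# List all storage paths; the path UID prefix shows the transport
# ("fc." = Fibre Channel, "iqn." = iSCSI)
esxcli storage core path list

# List the HBAs themselves (driver, link state)
esxcli storage core adapter list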

---

Richardson Porto
Senior Infrastructure Specialist
LinkedIn: http://linkedin.com/in/richardsonporto
sel57
Enthusiast

Hi Richardson,

Thanks for the reply. Each ESXi host is connected via two 8 Gb Fibre Channel HBAs. So if this copy to/from the same RDM inside the OS is going to and from the (all 10k RPM) storage array over Fibre Channel, then I have a serious issue, because 200 GB should not take 2 hours.

rcporto
Leadership

Did you investigate the disk latency or disk queue using Performance Monitor? Is your C:\ disk a virtual disk? If so, do you get better results when you copy data from C:\Data1 to C:\Data2?
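If you prefer the command line over the Perfmon GUI, here's a minimal sketch using the built-in typeperf tool to sample latency and queue length while the copy runs (the _Total instance is just an example; substitute the instance for your D: disk):

typeperf "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 5 -sc 24

Here -si 5 samples every 5 seconds and -sc 24 stops after 24 samples. As a rough rule of thumb, sustained Avg. Disk sec/Read or /Write above 20-30 ms points to a storage bottleneck.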

---

Richardson Porto
Senior Infrastructure Specialist
LinkedIn: http://linkedin.com/in/richardsonporto
sel57
Enthusiast

I'm actually in the process of running those disk IO reports right now.

C: is part of a shared datastore while D: is an RDM, but both are part of the same array/pool.

kastlr
Expert

Hi,

what exactly did you expect? Your test requires reading 200 GB and writing 200 GB, all from and to the same LUN.

So you're only talking to one device, with a limited IO queue depth, which has to service parallel read and write requests.

If I remember correctly, W2K12 also uses different IO sizes for reads (small IOs) and writes (large IOs).

So based on the numbers you provided, the transfer rate isn't that bad.

(200 GB read + 200 GB write) / (120 × 60 sec) comes to approximately 55 MB/sec, which isn't bad for a single LUN.

Regards,

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
sel57
Enthusiast

Where are you getting those numbers, and how would I do the math for future scenarios? This is not something I'm familiar with, but I'd be interested in understanding it more.

kastlr
Expert
Expert

Hi,

as described earlier, to copy/move data (on non-VAAI-capable arrays) the host has to read all the data and write it to the new location.

In your scenario you mentioned that the job to copy/move 200 GB takes 2 hours.

Because reads and writes are separate operations, the host has to read 200 GB and write 200 GB, so the total amount of data to handle is 400 GB.

2 hours equals 120 minutes, times 60 gives 7,200 seconds.

Dividing the amount of data by the time needed (400 GB / 7,200 sec) gives you the average transfer rate per second, in your case roughly 55 MB/sec.
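As a back-of-the-envelope check you can reproduce that in any shell (decimal units, 1 GB = 1000 MB):

# 200 GB read + 200 GB written = 400,000 MB of IO in 2 h = 7,200 sec
echo $(( 400000 / 7200 ))   # ~55 MB/sec average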

You can easily verify the speed of your target LUN using esxtop from the ESXi SSH shell.

After starting esxtop, press u to switch to the disk device view, where you'll find all the relevant information about the current performance and load of each attached LUN.
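For example (a minimal sketch; the key binding and column names are from the standard esxtop disk device view, and batch mode is handy if you want to capture the copy for later analysis):

# Interactive: per-LUN statistics
esxtop            # then press 'u' for the disk device view
# Watch DAVG/cmd (device/array latency in ms), KAVG/cmd (kernel latency),
# GAVG/cmd (total guest-visible latency) and QUED (queued commands)

# Batch mode: 30 samples at 2-second intervals, saved as CSV
esxtop -b -d 2 -n 30 > /tmp/esxtop-capture.csv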

Keep in mind that throughput also depends on the IO size used, and that a LUN/array can only handle a limited number of IOs per second.

Different IO sizes also require different processing times, i.e. a small 4 KB IO is much faster to process than a large 1 MB IO.
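To make that concrete (illustrative numbers only, not measurements): throughput = IOPS × IO size, so the same LUN can look very different depending on the workload:

5,000 IOPS × 4 KB ≈ 20 MB/sec
500 IOPS × 1 MB = 500 MB/sec

A LUN that looks "slow" in MB/sec may simply be saturated with small IOs.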


Hope this helps a bit.
Greetings from Germany. (CEST)
sel57
Enthusiast

Thanks kastlr. That's very helpful. Do you have any resources that explain how IO queue depth factors into the equation? I'm somewhat familiar with disk IO — what my particular disks are rated at, and how much IO the array can take collectively (or be abused with, in some cases) — but I know there's much more to it that I don't understand. What I didn't quite grasp was the part about talking to one device with limited queue depth. Is the one device you're referring to the VM in question, or the storage array?

kastlr
Expert
Expert

Hi,

you should start with the following articles.

Troubleshooting Storage Performance in vSphere – Storage Queues

What is Storage Queue Depth (QD) and why is it so important?
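The one-line version of how queue depth ties in (Little's Law, with illustrative numbers): average outstanding IOs = IOPS × latency, so

5,000 IOPS × 0.006 sec (6 ms) = 30 IOs in flight

which is already close to a typical per-LUN queue depth of 32; once demand exceeds that, further IOs wait in the queue and latency climbs. (And to your question: the "one device" here is the LUN backing your D: drive on the array, not the VM.)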

Regards

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)