VMware Cloud Community
rightfoot
Enthusiast

Fibre Channel: Sharing vdrives with guests, how?

Not sure how to explain this but there are two parts to my question.

1: Most efficient way of running the servers

2: Can I give guests direct access to a fibre channel vdisk?

Ok, I've got two shiny new ESXi servers running.

On each one, I have an HTTP load balancer and a web server running. Each is installed on the local drive.

Each one connects via NFS to a shared web pool to serve up static pages etc.

This is how things are now.

New scenario

I would guess that the fastest speed I could get would be to leave the web servers on the local ESX drive rather than running them over the FC network. Though, some people have replied to me on these forums telling me they are running hundreds of guests over their FC networks, so maybe that's not such a big deal.

Either way, the real question is: rather than have the web servers using up Ethernet bandwidth to get at their pages, how could I share an FC vdisk between these servers? Basically, that means setting up a vdisk on ESXi that allows certain guests to gain access to the disk so that they can serve pages. That would leave my Ethernet bandwidth for the web connections themselves.

Thoughts, help, would be most appreciated. Thanks.

Mike

rightfoot
Enthusiast

Looks like I found my answer right after posting this but still would appreciate some input.

I see that I can add shared storage between guests, but which method is the right one? It looks like it can be done either of two ways.

It looks like I can simply add a new drive/datastore to each guest, or add a drive that uses an existing disk I've made available to ESX. I also notice that the vdisk I'd like to use isn't shown under Raw Device Mappings once I've already added it to the ESX hosts as a datastore.

So I'm a bit unsure which option might be best for my scenario. One of the main reasons is that when I try to create a new drive, the system will sometimes tell me that there is no OS/filesystem on that disk. How do I get past this? For now, I've assigned the storage to one server, so I'll boot it, see if I can add the drive, copy everything over, and then give the other web server access to it.

Mike

RParker
Immortal

"I would guess that the fastest speed I could get would be to leave the web servers on the local ESX drive rather than running them over the FC network."

This is a tough concept to master. FC is bandwidth; Ethernet is speed. You can get water to flow in a 1/2" garden hose at, say, 12 gallons per minute. Now if you double that hose's capacity, does that mean you get twice the speed? Wrong. It just means you can move 24 gallons per minute, but the flow speed from the spigot to the bucket is constant. You just move more at the same speed.

Take another example: a two-lane highway where cars travel at a maximum of 60 mph. Now you do some construction and double it to four lanes. How fast do the cars go now? Still 60 mph. Instead of two cars side by side you can now have four, so people can leave the city faster. That's FC.

Ethernet would be increasing the speed: the hose is still 1/2", but you push more water through it faster using the same bandwidth. Or your highway can now allow cars to travel at 120 mph. That's the difference.

Most of the time, in fact 99.99% of the time, speed isn't the problem; servers very rarely peg the meter long enough to cause problems on your network. Allowing traffic to move at 120 mph downtown (by raising the speed limit) isn't going to get people home quicker; downtown needs more streets, not faster cars, to let people get where they need to be. That's the principal problem. Where you run into issues is bandwidth: not enough room for things to move around. FC does a much better job here than Ethernet. Ethernet is a big cloud of data, all colliding in the middle and coming back out; FC is more like a private network, where everything still runs at a constant speed but more things can go through at once.

Where Ethernet gets into trouble is that there isn't much to control the speed; when things get clogged you get congestion, and things start to run erratically.

So the bottom line is that FC beats Ethernet for bandwidth and 'stress' networking. The only reason people use Ethernet is cost. It's cheap: you just need some NICs and a switch and you are pretty much done. FC requires planning, and there are limitations on how you can connect things to fibre. That being said, FC will give you WAY more than you can see with Ethernet. So that's why people put everything on FC: it's the ultimate in throughput.
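To put rough numbers on the bandwidth side, here is a back-of-the-envelope sketch (nominal link rates only; it ignores protocol overhead, encoding, and the disks behind the links):

```python
# Rough, nominal comparison of link bandwidth. These figures only
# illustrate the "wider pipe" point; real-world throughput will be lower.

links_gbps = {
    "1 Gb Ethernet": 1.0,
    "4 Gb Fibre Channel": 4.0,
}

payload_gbit = 10 * 8  # a hypothetical 10 GB of web content, in gigabits

for name, gbps in links_gbps.items():
    seconds = payload_gbit / gbps
    print(f"{name}: ~{gbps / 8:.3f} GB/s, ~{seconds:.0f} s to move 10 GB")
```

Same "speed" per bit either way; the wider link just moves the same payload in a quarter of the time.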

"Either way, the real question is, rather than have the web servers using up Ethernet bandwidth to get at their pages, how could I share an FC vdisk between these servers?"

You can get a network monitor and see what your web traffic is doing. If you are using more than 5% of your TOTAL bandwidth for the web, I would be shocked. Websites are just plain inefficient; it's just text going through a very simple protocol. You have images and other things, but images can be optimized to make better use of your network.

What you want is clustering. You can set up a web cluster; you don't need to share vdisks for that. This is all built into your web solution, and if it's not, you need a better web solution. IIS is perfect for this because it supports Microsoft Clustering, and the web servers can 'share' pages with each other through the network, so you don't need to share the data disks.

hstagner
VMware Employee

Hello rightfoot,

First, whether Fibre Channel is faster than local storage depends. 4Gb Fibre Channel drives are going to be faster than SAS or Ultra 320 SCSI drives (drive for drive).

About the drive sharing. A Raw Device Mapping is a raw disk mapped to a VM (not a vdisk). A datastore is a physical disk formatted as a VMFS volume or an NFS mount. The two are not the same. The reason I make this distinction is because of this line:

"I also notice that the

vdisk I'd like to use isn't shown in the Raw Device Mappings once I've

already added it to ESX's as a datastore."

You don't add vdisks as a datastore; you create vdisks on a datastore. If you want to do a Raw Device Mapping, then you need a raw, physical LUN to map to the VM. The ability to share the disk at the same time among multiple VMs depends entirely on the filesystem that you put on the disk. If it's Windows, NTFS is not a clustered file system, so the VMs will not have simultaneous read/write access to the same disk.
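For what it's worth, the distinction also shows up on the command line. Here is a minimal sketch assuming a host where vmkfstools is available (the two commands can just as well be run directly in the shell); the datastore path, size, and LUN device name are placeholders, and the exact options can differ between ESX versions:

```python
import subprocess

# Hypothetical names; substitute your own datastore and LUN device path.
DATASTORE = "/vmfs/volumes/fc_datastore1"
RAW_LUN = "/vmfs/devices/disks/naa.600XXXXXXXXXXXXX"

# Create a 20 GB virtual disk *on* a VMFS datastore. A vdisk lives on a
# datastore; it is not itself a datastore.
subprocess.run(
    ["vmkfstools", "-c", "20g", f"{DATASTORE}/webpool.vmdk"],
    check=True,
)

# Map a raw, unformatted LUN to a VM as a physical-mode RDM pointer file.
subprocess.run(
    ["vmkfstools", "-z", RAW_LUN, f"{DATASTORE}/webpool-rdm.vmdk"],
    check=True,
)
```

The second command only applies while the LUN is still raw; once a LUN has been formatted as a VMFS datastore it is no longer offered as an RDM candidate, which matches what you are seeing.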

I hope this helps.

Don't forget to use the buttons on the side to award points if you found this useful (you'll get points too).

Regards,

Harley Stagner

rightfoot
Enthusiast

I think you gave me a similar tutorial in another post I made :). Certainly interesting and enlightening, but it doesn't answer my question. Yes, I've built a GFS/FC-based cluster, so I understand the concept, but what I'm asking is: how do I share a datastore, or really an FC virtual disk, between guests so that no one guest owns it, much like, say, a GFS cluster? That's what I need.

Mike

rightfoot
Enthusiast

I think I've learned so much on my own that I don't know the industry terms, which is what gets me into trouble when asking questions. Perhaps 'vdisk' is the wrong term; I use it to mean a virtual disk created on one of my FC storage devices.

Basically, I have Fibre Channel network storage which I've segmented into various pools for various functions, as usual. When using physical servers, I simply installed an FC HBA in each server, hooked them up to the FC network, and created a GFS cluster of machines. All machines have access to the same storage pool(s), so with load balancing in front of them, reliability was very high.

Now I want to convert some of these servers to VM guests and would like to do away with the GFS cluster, instead simply having a shared storage area which the VM web servers can serve their pages from. Currently, they serve their content from NFS servers, since I've just started playing with this and haven't found a way of sharing an FC disk between guests. I don't need the GFS sharing capabilities in this case, just a shared pool, but as I say, I want to use FC disk space.

So, that's my question. Without having to get into a shared filesystem such as GFS, it looked to me as if ESX might have some features which could allow sharing what is in essence a datastore between guests. I thought this because I see options which seem to mention that guests can share storage. I suppose I could always have one guest export the storage to the web server guests, but then, that's why I was also talking about bandwidth and such, which it seems I also didn't express quite the way I meant :).

Anyhow, the question is really only about sharing an FC disk between guests, as above.

Mike

rightfoot
Enthusiast

By the way, your description of FC compared to Ethernet is not going to waste. While I've been using FC for a number of years now, I've never really thought about it in the way that you explain. I want to re-read this and get a better handle on the way you describe it. I'm using 1Gb, 2Gb, and 4Gb FC storage arrays and maximize their use in the ways that I understand them, which isn't as you've described, so it's useful for sure.

Mike

hstagner
VMware Employee

Hello Mike,

I am not aware of any way to share a raw or virtual disk to achieve what you want without the second piece that is missing: the clustered file system. Just because a disk is shared does not mean that multiple VMs can read and write to that disk at the same time. This is where the clustered file system comes in. Clustered file systems usually work by using file (instead of LUN) locking mechanisms. The reason that multiple ESX hosts can share the same datastore is because VMFS is a clustered file system that uses a file locking mechanism. The alternative (which you already alluded to) would be to export a file system to the VMs (like NFS).
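To illustrate the locking idea in miniature, here is a sketch using ordinary advisory file locking on a single machine. It is not VMFS or GFS, but the coordination principle (lock, write, release) is the same one a clustered file system applies across hosts:

```python
import fcntl
import time

# Two processes running this script take turns updating the shared file
# instead of stepping on each other's writes. Clustered file systems
# extend the same idea across hosts with on-disk or network lock managers.
SHARED_FILE = "/tmp/shared-pages.idx"  # hypothetical path

with open(SHARED_FILE, "a+") as handle:
    fcntl.flock(handle, fcntl.LOCK_EX)   # block until we hold the lock
    try:
        handle.write(f"updated at {time.time()}\n")
        handle.flush()
    finally:
        fcntl.flock(handle, fcntl.LOCK_UN)  # release for the next writer
```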

If you go the NFS route, you could export it to your VMs (on a second vNIC) over a private (host-only) network inside of your ESX server to avoid network bottlenecks (the vSwitch with the host-only network will transfer packets at near memory speed). This obviously will not work if your VMs are on separate hosts, but it is a thought.
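As a concrete sketch of that layout from the guest side (all names here are hypothetical: assume one VM exports /export/webpool over the host-only network at 192.168.100.10, and each web server VM mounts it read-only on its second vNIC):

```python
import subprocess

# Hypothetical addresses and paths; adjust to your own network and export.
NFS_SERVER = "192.168.100.10"     # VM exporting /export/webpool
EXPORT = "/export/webpool"
MOUNT_POINT = "/var/www/pages"

# Mount the export read-only inside a web server guest. Traffic between
# VMs on the same host-only vSwitch never leaves the host.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "ro", f"{NFS_SERVER}:{EXPORT}", MOUNT_POINT],
    check=True,
)
```

Because that traffic stays on the host-only vSwitch, it never competes with the front-end web traffic on the physical NICs, which was the original concern.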

Don't forget to use the buttons on the side to award points if you found this useful (you'll get points too).

Regards,

Harley Stagner

rightfoot
Enthusiast

My current setup, since moving some of the web servers onto ESX for testing, is that each web server simply has an NFS share to a very fast filer.

I understand the concept of locked files for individual writes and shared filesystems, since I mentioned that I've been using GFS for quite some time :).

My interest is that it seemed I could create a datastore from the FC network storage and share it amongst some guests. If they can't have write access, that might not be a problem as far as I can tell, since it's all database writes; they need only to read in the PHP pages, etc.

This is what I am interested in: using a shared pool of FC storage versus an Ethernet-based NFS share or any other such share on the network.

Mike

rightfoot
Enthusiast

So I guess the answer is no, other than using Ethernet-based shares or giving the guests access to the FC HBA on the host in order to create, say, a shared cluster. I'll start another post to reflect these findings and ask the next question.

Thanks.

Mike
