VMware Cloud Community
rickardnobel
Champion

Was not 8Gb FC supported before 4.1?

When reading the release notes for vSphere 4.1 I noticed a part saying that 8 Gb FC arrays are now supported. Has it not been supported before, only 4 Gb?

I seem to have seen before that certain 8Gb FC HBAs were supported, but apparently not from the SAN side?

My VMware blog: www.rickardnobel.se
17 Replies
weinstein5
Immortal

They might have been experimentally supported, or close enough to 4 Gb HBAs that the drivers worked for those -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

rickardnobel
Champion

But is it correct that 8 Gb FC has not been supported on ESX/ESXi before?

My VMware blog: www.rickardnobel.se
weinstein5
Immortal

That is correct, 8 Gb FC was not supported prior to 4.1 -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

rickardnobel
Champion

Are FC switches, HBAs and/or SAN arrays at 8 Gb uncommon, or do you know why there has been no support?

My VMware blog: www.rickardnobel.se
weinstein5
Immortal

They are - it is still a relatively new technology and would require a major change to an organization's FC fabric to move to 8 Gb -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

rickardnobel
Champion

Interesting. So 4 Gb is the default today. Are there also lower speeds than this?

My VMware blog: www.rickardnobel.se
weinstein5
Immortal

IMHO there is still a larger install base of 1 and 2 Gb FC than 4 Gb, because 4 Gb faced the same issue as 8 Gb - it is expensive to upgrade, particularly when in most situations the existing infrastructure already meets the SAN transport speeds they need. When they refresh the technology then yes, they will move to 4 Gb and, if the economics are there, to 8 Gb FC.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

rickardnobel
Champion

IMHO there is still a larger install base of 1 and 2 Gb FC than 4 Gb

Thanks a lot for your answers, this is very interesting to hear.

The install base you talk about above, is that customers in general or VMware-based customers? As in, would 2 Gb FC work even in a vSphere environment?

Some quick counting gives that 2 Gbit/s would give a theoretical maximum throughput of 268 MB/s, not including any protocol overhead. Since a good single 7200 RPM SATA drive these days can deliver over 100 MB/s, should not a SAN with many spindles and higher RPM greatly exceed 268 MB/s? That is, could not a throughput of "only" 2 Gb be a bottleneck?
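
To double-check my own numbers, here is a small Python sketch (my own back-of-envelope, assuming the nominal FC signalling rates and the 8b/10b encoding used by 1/2/4/8 Gb FC; frame headers and other protocol overhead are ignored):

# Back-of-envelope: raw and usable throughput of common FC link rates.
# Assumes 8b/10b encoding (8 payload bits per 10 line bits) and ignores
# frame headers and other protocol overhead.
line_rates_gbaud = {   # nominal FC signalling rates
    "1 Gb FC": 1.0625,
    "2 Gb FC": 2.125,
    "4 Gb FC": 4.25,
    "8 Gb FC": 8.5,
}

for name, gbaud in line_rates_gbaud.items():
    raw_mb_s = gbaud * 1e9 / 8 / 1e6      # raw line rate expressed in MB/s
    payload_mb_s = raw_mb_s * 8 / 10      # what is left after 8b/10b encoding
    print("%s: raw ~%.0f MB/s, usable ~%.0f MB/s per direction" % (name, raw_mb_s, payload_mb_s))

So for 2 Gb FC the usable figure would be roughly 200 MB/s per direction, which is what makes me wonder about a bottleneck.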

My VMware blog: www.rickardnobel.se
weinstein5
Immortal

I do not have empirical data, but the install base I am referring to is FC SANs in general - I have been to a wide range of organizations with FC SANs and they still seem to be running 1 or 2 Gb FC, not wanting to spend the money to upgrade their bandwidth. And I agree, yes, 2 Gb can be a bottleneck if the fabric is not designed correctly -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

RParker
Immortal

I seem to have seen before that certain 8Gb FC HBAs were supported, but apparently not from the SAN side?

Yes it was supported, how can they NOT support it when there was no change in the protocol? Maybe all the devices weren't on the list, but the SPEED of the technology isn't a limitation of the software, in fact it shouldn't even be a consideration. If nothing else changes on the hardware, and the speed doubles, how do you say you can't support it?

That's kind of a "duh" response... Of COURSE 8 Gb was supported before now. They must have meant that the devices which support this speed may now be fully supported.

RParker
Immortal

That is correct, 8 Gb FC was not supported prior to 4.1 -

Which is puzzling, because it required NO modification whatsoever from VMware to "support" it... it just got faster, but nothing was "CHANGED", so why would 8 Gb be restricted?

RParker
Immortal

They are - it is still a relatively new technology and would require a major change to an organization's FC fabric to move to 8 Gb -

A change to the infrastructure, yes, but to the VMware technology, uh... NO! Besides, changing to an 8 Gb fabric is a very SIMPLE change.

You swap the switches from a 4 Gb backbone to 8 Gb, it's really not any more complicated than that... despite the HBA only being 4 Gb, you are NOW on 8 Gb fiber...

RParker
Immortal

Interesting. So 4 Gb is the default today. Are there also lower speeds than this?

1 Gb, 2 Gb, 4 Gb, 8 Gb, even talk of moving to 16 Gb by the end of 2010, but more likely 2011.

All of which are compatible with lower speeds, but 1 Gb fiber will NOT work with an 8 Gb and beyond fabric; they are not backward compatible to 1 Gb any longer. We have an 8 Gb switch with 8 Gb GBICs; if we choose 4 Gb GBICs, THOSE will work with 1 Gb... but who has 1 Gb any more?
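
To put that compatibility rule in concrete terms (my understanding, stated as an assumption: an FC port typically negotiates its own rate plus the two rates below it, which is why 8 Gb no longer reaches down to 1 Gb):

# Sketch of the usual FC speed-negotiation rule: a port supports its
# own rate plus the two generations below it (assumption, not a spec quote).
def supported_rates(max_gb):
    rates = [1, 2, 4, 8, 16]
    i = rates.index(max_gb)
    return rates[max(0, i - 2): i + 1]

print(supported_rates(8))   # [2, 4, 8]  -> no 1 Gb, as noted above
print(supported_rates(4))   # [1, 2, 4]  -> a 4 Gb port still talks 1 Gb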

And just FYI... 2 Gb is still the standard, 4 Gb is relatively new still. Also MOST, I would even venture to say all but a very few, can't even use ALL the bandwidth on 2 Gb fiber. That is a TON of throughput. You would really have to do a LOT of simultaneous copies and databases to touch 2 Gb and saturate it, so 4 Gb is even way beyond what people are doing now, and 8 Gb is many years away (if ever). The NEW hardware has 8 Gb, but that doesn't mean people will actually USE all of it.

RParker
Immortal

Some quick counting gives that 2 Gbit/s would give a theoretical maximum throughput of 268 MB/s

Fiber does not work like streaming data on networks. The speed on 4 Gb fiber is more like 100 MB/s; there are just more lanes for that data to travel on.

Network is about speed point to point. Fiber is about WIDTH. 4 cars, all full of data, arriving at the same time will be a LOT more efficient than a line of cars with smaller loads getting there faster... because they have to be taken in order.

So traffic on a Fiber highway travels at 60 MPH. A high speed rail in the middle of that highway is a NIC highway, running at 120 MPH, but that's 1 lane of traffic, going real fast.

Fiber gets MORE data to the endpoints at the same speed, that's how you get MORE data to the destination; it's not about speed, it's bandwidth. That train may get there faster... but to carry the SAME amount of traffic it has to wait for the entire train to get to the destination. Fiber goes slower, but 2 or 4 or 8 cars will get there ALL at the same time.

Fiber is pure light, dedicated, there are virtually NO collisions happening in Fiber, that's why it's so much better than conventional Ethernet networks.

Plus even 20 or 30 disks will not yield THAT much throughput, SSD drives MAYBE, but the bottleneck now is the software - not the protocol, not the hardware, not the disks (though disks are still a LONG way behind the others) - the software that handles the flow cannot keep up.

Enter 64-bit... that will change, but it will take a WHILE for the companies to take advantage of 64-bit processing power.

weinstein5
Immortal

I agree it is as simple as switching out the switches, but if the company is not willing or ready to incur the cost they will stay at 4 Gb -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

rickardnobel
Champion

Hello and thank you for your replies.

Some quick counting gives that 2 Gbit/s would give a theoretical maximum throughput of 268 MB/s

Fiber does not work like streaming data on networks. The speed on 4 Gb fiber is more like 100 MB/s; there are just more lanes for that data to travel on.

Network is about speed point to point. Fiber is about WIDTH.

I am not totally sure I understand. How is the "width" implemented in an FC network? Is there not a single optical line inside each fiber cable? And is it not the number of times per second that both sides can signal a 1 or a 0 that gives the amount of Gb/s?

I do not know at all, I am just assuming that a Fibre Channel cable is a point-to-point device too, just as a network cable is?

Fiber is pure light, dedicated, there are virtually NO collisions happening in Fiber, that's why it's so much better than conventional Ethernet networks.

But it has been a long time since there were any collisions on Ethernet networks. It must be something else that makes FC better, if so.

Perhaps the HBA adapter, which runs its business without disturbing the host CPU?

Plus even 20 or 30 disks will not yield THAT much throughput, SSD drives MAYBE, but the bottleneck now is the software - not the protocol, not the hardware, not the disks (though disks are still a LONG way behind the others) - the software that handles the flow cannot keep up.

Again I do not know, but 20-30 disks together must be able to deliver much, much more than 268 MB/s? Even if it is of course not common that all systems burst at the same time, there must still be a risk of a throughput bottleneck in the fabric at 2 Gb/s?
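
As a rough illustration of the scale I am thinking of (all the numbers below are my own assumptions, just to make the comparison concrete):

# Rough illustration with assumed numbers: aggregate sequential
# throughput of a disk array versus a single 2 Gb FC link.
disks = 24                 # assumed number of spindles in the array
mb_per_disk = 80.0         # assumed sequential MB/s per spindle
link_mb_s = 200.0          # roughly what 2 Gb FC carries after 8b/10b encoding

aggregate = disks * mb_per_disk
print("Array aggregate (sequential): ~%.0f MB/s" % aggregate)
print("Single 2 Gb FC link:          ~%.0f MB/s" % link_mb_s)
print("The link covers about %.0f%% of the array's sequential peak" % (100.0 * link_mb_s / aggregate))

Of course real workloads are mostly random I/O where each spindle delivers far less, but on sequential peaks the link looks like it would be the limit.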

My VMware blog: www.rickardnobel.se
rickardnobel
Champion

I am wondering if anyone has any input on the above? Would 2 Gbit/s FC be a bottleneck for an array with 20-30 disks?

Depending on the load of course, and the average might or might not be high, but should it not be possible to overload it at peak times?

My VMware blog: www.rickardnobel.se