VMware Cloud Community
sim
Contributor

NPIV / VWWN advantages

I would like to know the advantages of using a virtual WWN for a virtual machine. I understand that the VM can't have exclusive access to a storage LUN using its WWN, since the ESX Servers on which the VM can potentially run will also have access to the LUN.

This makes me wonder if there is a real advantage in having a Virtual WWN to the VM.

Thanks,

-Sim

9 Replies
Andrew_Judge
Contributor

If you deal with FC SAN administration, FC is purpose-built for storage and simpler to work with than iSCSI. Below is a snippet from an Emulex whitepaper on NPIV.

Best regards,

Andrew Judge

MCSE: Security, RHCE, ACHDS, ACTC, CSSA, CCA, NCIE, A+, Security+, 3Com VoIP, VCP, VSP, DCIE

CEO, Grove Networks Inc.

Microsoft Gold Certified Partner

Certified Apple Consultant Network Member

3Com Voice Authorized Partner

Silver Citrix Partner

VMware VIP Enterprise

DataCore Authorized Business Partner

Phone: 305.448.6126

Fax: 305.437.7685

http://www.grovenetworks.com

- I/O throughput, storage traffic, and utilization can be tracked to the virtual machine level via the WWPN, allowing for application- or user-level chargeback (see the sketch at the end of this reply). As each NPIV entity is seen uniquely on the SAN, it is possible to track the individual SAN usage of a virtual server. Prior to NPIV, the SAN and ESX Server could only see the aggregate usage of the physical FC port by all of the virtual machines running on that system.

- Virtual machines can be associated with devices mapped under RDM to allow for LUN tracking and customization to the application's needs. SAN tools tracking WWPNs could report virtual-machine-specific performance or diagnostic data. As each NPIV entity is seen uniquely on the SAN, both switch-side and array-side reporting tools can report diagnostic and performance-related data on a per-virtual-machine basis.

- Bi-directional association of storage with virtual machines gives administrators the ability not only to trace from a virtual machine to an RDM (available today) but also to trace back from an RDM to a VM (significantly enhanced with NPIV support).

- Storage provisioning for ESX Server hosted virtual machines could use the same methods, tools, and expertise in place for physical servers. As the virtual machine is once again uniquely related to a WWPN, traditional methods of zoning and LUN masking could continue to be used.

- Fabric zones can restrict target visibility to selected applications. Configurations which required unique physical adapters based on an application can now be remapped onto unique NPIV instances on the ESX Server.

- Virtual machine migration supports the migration of storage visibility. Access to storage can be limited to the ESX Server actively running the virtual machine. If the virtual machine is migrated to a new ESX Server, no changes in SAN configuration are required to adjust for the use of different physical Fibre Channel ports. Additionally, the previous requirement to open-zone all ESX Servers that may host the virtual machine, which meant that all hosts had access to all storage, is eliminated.

- HBA upgrades, expansion and replacement are now seamless. As the physical HBA WWPNs are no longer the entities upon which the SAN zoning and LUN-masking is based, the physical adapters can be replaced or upgraded without any change to SAN configuration.

Simply stated:

- NPIV will enable storage and SAN fabric administrators to manage connections from virtualized machines in the same way, and with the same tools, as traditional physical hardware-based servers.
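
To make the chargeback point in the first bullet concrete, here is a minimal sketch (Python, with invented numbers and a hypothetical WWPN-to-VM mapping; real fabric and array tools each have their own export format) of how per-WWPN counters could be rolled up into per-VM usage:

# Minimal sketch: roll up per-WWPN I/O counters, as a switch- or array-side
# tool might export them, into per-VM totals for chargeback.
# The WWPN-to-VM mapping and the sample counters below are made up.
from collections import defaultdict

wwpn_to_vm = {
    "28:fe:00:0c:29:00:00:01": "sql-qa-01",
    "28:fe:00:0c:29:00:00:02": "archive-01",
    "28:fe:00:0c:29:00:00:03": "archive-02",
}

# (WWPN, MB read, MB written) for one reporting interval
counters = [
    ("28:fe:00:0c:29:00:00:01", 5120, 2048),
    ("28:fe:00:0c:29:00:00:02", 800, 12288),
    ("28:fe:00:0c:29:00:00:03", 640, 9216),
]

usage_mb = defaultdict(int)
for wwpn, read_mb, write_mb in counters:
    usage_mb[wwpn_to_vm.get(wwpn, "unknown")] += read_mb + write_mb

for vm, total in sorted(usage_mb.items()):
    print(f"{vm}: {total} MB transferred this interval")

Without NPIV, all of this traffic would be reported against the single physical HBA WWPN of the ESX host, so it could only be charged back to the host, not to individual virtual machines.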

mreferre
Champion

Sim,

as Andrew said, the main "advantage" of using NPIV is this:

>NPIV will enable storage and SAN fabric administrators to manage connections from virtualized machines in the same way, and with the same tools, as traditional physical hardware-based servers.

What I question is whether SAN fabric admins should have a "right" to do that... or whether they should just move on and realize that the world is changing (and for good reasons). Being able to do something the same way you have always done it doesn't mean it's absolutely the best way.

More here on my view:

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Gladi8or
Contributor

Storage administrators, in order to effectively manage virtual hosts the same as physical hosts, should have the same visibility into virtual machines as they do into physical ones. The advent of NPIV allows that. I completely disagree with the article mentioned. While the world of virtualization allows extreme flexibility in system resource utilization, it will never be extended into storage the same way it has been applied to server hardware. The simple fact remains: while you can virtualize storage management, you can't virtualize storage itself. Thin provisioning comes close, but a gig is a gig is a gig. You can de-dup it, but you still can't fake storage.

mreferre
Champion

> .... in order to effectivly manage virtual hosts the same as physical hosts...

This is exactly what I am challenging. The fact that storage admins have been doing that in the past doesn't necessarily mean it's the best thing to do in the future too.

After all, if I look back 10 years at the PC server deployment space, we were doing things that most people on this board would laugh at right now.

Obviously this is just my opinion and you might have different points of view.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Gladi8or
Contributor

So, by challenging it, do you have some sort of idea or concept of how it could be done differently?

mreferre
Champion

I am not sure I follow you. My view is that virtual machines are going to be treated more like "applications" on top of a "datacenter / stripped-down OS", so to speak. See here:

I have come across many situations where customers that used to have more than one app on a single Windows server are now virtualizing their infrastructure and creating a "VM per service". Like this:

Since we don't bind a WWN to applications today... why would we want to bind a WWN to a VM if this is the case?

However, let me put it this way: for a customer that has "strategically" (not tactically) chosen to virtualize their entire x86 server infrastructure and bought into the overall concept... what is the difference between dealing with a physical ESX server with multiple VMs and dealing with a Windows server with multiple services / applications? I think I am really missing what you lose (from a SAN perspective) by doing this. I would be led to think that ESX NPIV support is there more to comply with a "legacy mind-set" than to overcome technical limitations.

But as I said this is my view and I understand you might have a different opinion / requirement.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Gladi8or
Contributor

What if your "application" VM had specific storage requirements for either bulk or performance needs? Wouldn't it be better to assign the storage to the application which needs it instead of to every potential ESX host on which it could live? Because you have given it a specific identity, it will always be able to connect to its storage regardless of the ESX host, and it won't suffer from the "shared" disk concept the ESX file system imposes. Also, by giving it the ability to communicate directly with the storage arrays, you can do performance monitoring all the way into the application. If you just assign it to the ESX hosts, how would you know whether the application is having an issue or whether it's the ESX host that is causing problems?

The fact of the matter is, true storage professionals will always prefer granular control of allocation and usage. The idea of having a single large bucket of space in which all services play is a recipe for performance degradation and oversubscription. Some admins prefer that because it's easy to manage, but troubleshooting is impossible. Using your idea for storage is just like saying, "Just give the ESX host an IP and allow the application VMs to open sockets." Now, we both know what a headache that would be to manage.

mreferre
Champion

Isn't this the same problem you face today with, say (as an example), a consolidated SQL Server cluster node with 3 instances, each of which supports 8 different databases? You have 2 x HBAs going into a server that is running 24 databases, and one has an I/O issue... what do you do? To me this sounds very similar.

To your points:

> Wouldn't it be better to assign the storage to the application which needs it instead of to every potential ESX host on which it could live? Because you have given it a specific identity, it will always be able to connect to its storage regardless of the ESX host, and it won't suffer from the "shared" disk concept the ESX file system imposes.

Well... no. The idea at the basis of VI3 is that you decouple storage, network, and computational resources. So no, I don't want to "assign a piece of storage to the application that needs it"; I'd rather "assign storage to systems where applications can run and move around". I might agree that a cluster file system has its challenges, but changing a virtual WWN assignment on the fly from one ESX host to another... are we sure that's less challenging than dealing with an (already very efficient) cluster file system that will improve over time anyway?

> Also, by giving it the ability to communicate directly with the storage arrays, you can do performance monitoring all the way into the application. If you just assign it to the ESX hosts, how would you know whether the application is having an issue or whether it's the ESX host that is causing problems?

See the SQL example above. Also... assuming you set a specific WWN all the way into a VM, how do you make sure that the problem is caused by the VM and not by the overhead the hypervisor is imposing, or vice versa? Today we do have physical access to the CPU, but I can assure you that it's always a guess when you try to troubleshoot CPU performance issues in a virtual environment. Creating a virtual WWN and assigning it to a VM... I don't think it's going to solve all the problems.

> The fact of the matter is, true storage professionals will always prefer granular control of allocation and usage.

Well, I unfortunately agree... it's like true ISV professionals who will always suggest using physical hardware rather than a "virtualization technology". That doesn't mean that, for the good of the company, running physical servers is better than running virtual machines.

All in all, the point here is that there is a deep change in how we operate datacenters. I saw very similar discussions years ago, when people working in the systems management area were complaining that monitoring the CPU of a Windows machine using the standard Windows tools they had been using for 10 years was broken: "How can I tell if the 80% CPU usage in this guest is due to a real bottleneck on a physical processor... or is due to a "cap" the ESX admin has put on the VM (which is in fact using just 5% of a real CPU)?"

Big problem... as long as you do things the way you have for the last 10 years. As soon as you start looking at alternative ways to determine bottlenecks and the like (i.e. via hypervisor instrumentation, etc.) and correlate the data... it's fine.
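
As a toy illustration of that 80% vs. 5% example (the numbers are invented: a VM capped at a hypothetical 125 MHz on a 2 GHz core), correlating the in-guest counter with the hypervisor-side limit looks roughly like this:

# Toy numbers only: a guest reports 80% CPU busy, but the ESX admin has
# capped the VM at a hypothetical 125 MHz on a 2000 MHz physical core.
guest_reported_busy = 0.80    # what perfmon shows inside the guest
vm_cpu_limit_mhz = 125        # cap set on the VM (hypothetical)
physical_core_mhz = 2000      # clock speed of the underlying core

consumed_mhz = guest_reported_busy * vm_cpu_limit_mhz      # 100 MHz
physical_utilization = consumed_mhz / physical_core_mhz    # 0.05

print(f"Guest view: {guest_reported_busy:.0%} busy")
print(f"Host view : {physical_utilization:.0%} of one physical core")

The in-guest counter alone suggests a CPU bottleneck; correlating it with the hypervisor's data shows the VM is actually consuming about 5% of a real CPU.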

It's always a compromise. Decoupling storage, network and computational resources has so many advantages ... that a legacy like WWN mapping can't stop it. I am not saying all the pieces are already aligned .... obviously there is a lot to do .... that's why I consider ESX NPIV support a tactical technology .... and not a strategic thing.

Again ... at least this is my opinion.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
TheRealJason
Enthusiast

I can see both sides of this discussion. Currently, I am both the storage admin and the VMware admin. I am considering using NPIV for a QA SQL cluster that is currently managed by ESX. I find that it causes some "frustrations" when using RDMs through ESX. I have not used NPIV at all before this point, but I am beginning to look into it.

We also use RDMs for some of our large storage systems. One of them is an enterprise archiving solution. Between 2 servers, there are probably 8 RDMs. It is a little more confusing having to manage these from the ESX side, and I am considering also switching these to NPIV. It definitely gives me better insight into which LUNs are assigned where from a quick look into the Array Manager, and I think it will also benefit whoever comes in behind me.

On the other hand, for our file servers that have a single large LUN mapped, I will probably continue to use ESX managed LUNs.

I have no real-world experience with NPIV yet, though, so maybe my thoughts will change!

Jason
