email_to_shashi
Contributor

NPIV vSphere problem

I have 8 Gb QLogic HBAs and an 8 Gb Brocade fabric.

1) Created a LUN on the HP EVA storage and masked it to the ESX WWNs.

2) Created zones in the Brocade fabric consisting of the ESX WWNs and the EVA controller WWNs.

3) In vCenter, created a VM, chose RDM, and selected the LUN created above.

4) Edited the VM settings, chose the NPIV option, and generated the VM WWNs.

5) Added those VM WWNs on the HP EVA and masked the LUN to both the VM WWNs and the ESX WWNs.

6) Created zones with the VM WWNs and the EVA controller WWNs in the Brocade fabric.

7) Powered on the VM.
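
For reference, the zoning in steps 2 and 6 on a Brocade switch looks roughly like the sketch below. This is Fabric OS CLI, so it only runs on the switch itself, and the zone name, config name, and both WWNs are made-up placeholders, not values from my fabric:

```shell
# Fabric OS CLI sketch; "vm1_zone", "san_cfg", and both WWNs are hypothetical.
# First WWN: the VM's generated NPIV WWPN; second: an EVA controller port.
zonecreate "vm1_zone", "2c:9a:00:0c:29:00:00:01; 50:00:1f:e1:50:0a:bc:d8"
cfgadd "san_cfg", "vm1_zone"
cfgenable "san_cfg"
```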

The VM is not using its WWNs; the fabric does not see the VM WWNs.

I don't understand where the problem is.

The vmkernel log file contains plenty of entries like the ones below.

Can anyone tell me how I can solve this issue? Thanks. I have attached the vmkernel log file as well.

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.709 cpu6:4771)ScsiNpiv: 991: NPIV vport rescan complete, (0x4100022332c0) status=0xbad0001

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiScan: 843: Path 'vmhba96:C0:T1:L2': Vendor: 'HP ' Model: 'HSV450 ' Rev: '0952'

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiScan: 846: Path 'vmhba96:C0:T1:L2': Type: 0x1f, ANSI rev: 5, TPGS: 0 (none)

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiScan: 105: Path 'vmhba96:C0:T1:L2': Peripheral qualifier 0x3 not supported

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiNpiv: 991: NPIV vport rescan complete, (0x410002231600) status=0xbad0001

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)WARNING: ScsiNpiv: 1578: Failed to Create vport for world 4772, vmhba0, rescan failed, status=bad0001

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.723 cpu6:4771)WARNING: Removing Host Adapter vmhba96

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.723 cpu6:4771)ScsiNpiv: 1638: Vport Create status for world:4772 num_wwpn=2, num_vports=0, paths=16, errors=16

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.723 cpu2:4237)ScsiAdapter: 1907: Unregistering adapter vmhba96

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.723 cpu6:4771)VSCSI: 3472: handle 8194(vscsi0:0):Creating Virtual Device for world 4772 (FSS handle 147474)

Jan 26 21:10:41 srmesx2 vmkernel: 2:01:30:53.869 cpu13:4772)Init: 1215: Received INIT from world 4772

Jan 26 21:10:41 srmesx2 vmkernel: 2:01:30:53.881 cpu13:4772)LSI: 2486: Worldlet for virtualAdapterID = 0 created

Jan 26 21:10:41 srmesx2 vmkernel: 2:01:30:53.881 cpu13:4772)LSI: 2640: Async Polling ->#Initialized rings for VirtualLSIAdapter-0 async=1, record=0 replay=0

Jan 26 21:10:41 srmesx2 vmkernel: 2:01:30:54.076 cpu10:4773)Init: 1215: Received INIT from world 4773
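
As a side note, the failing rescans can be counted quickly with grep. Here is a minimal sketch using two lines copied from the excerpt above into a sample file; point grep at your actual saved log instead:

```shell
# Minimal sketch: count failed NPIV vport rescans in a vmkernel log.
# /tmp/vmkernel.sample is a stand-in for a real log copy.
cat > /tmp/vmkernel.sample <<'EOF'
Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.709 cpu6:4771)ScsiNpiv: 991: NPIV vport rescan complete, (0x4100022332c0) status=0xbad0001
Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiNpiv: 991: NPIV vport rescan complete, (0x410002231600) status=0xbad0001
EOF
grep -c 'NPIV vport rescan complete.*status=0xbad0001' /tmp/vmkernel.sample   # prints 2
```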

14 Replies
adamy
Enthusiast

Make sure the Brocade switch has a firmware revision that supports/enables NPIV.

If you set up NPIV at the host and not the guest, things are much simpler.

The configuration is no different from a non-NPIV system (at least with Brocade switches and EMC storage).

Once you configure the ESX host with the LUNs, you put the guest on the disk as you normally would.

If you need direct access to a disk, wouldn't you use Raw Device Mappings instead?

Is there a reason you are setting up NPIV at the guest level?

What kind of blades? (Assuming blades, as they are the most likely use case for NPIV.)

Adam

binoche
VMware Employee

Thanks for your detailed steps.

My guess: in step 1 the LUN was masked with LUN number 2, but in step 5 it was not also masked with LUN number 2 for the VM WWNs. Please recheck, thanks.

The messages below mean the vport did not find LUN 2:

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiScan: 843: Path 'vmhba96:C0:T1:L2': Vendor: 'HP ' Model: 'HSV450 ' Rev: '0952'

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiScan: 846: Path 'vmhba96:C0:T1:L2': Type: 0x1f, ANSI rev: 5, TPGS: 0 (none)

Jan 26 21:10:40 srmesx2 vmkernel: 2:01:30:53.710 cpu6:4771)ScsiScan: 105: Path 'vmhba96:C0:T1:L2': Peripheral qualifier 0x3 not supported

binoche, VMware VCP, Cisco CCNA

email_to_shashi
Contributor

Hi Adam,

Brocade firmware is at 6.3. I am using HP blades.

The reason for using NPIV is better storage utilization. If I use VMFS and/or RDM, I can't manage storage at the per-VM level, and using VMFS always wastes storage for me. We allocate forecasted storage to VMFS for the next 2-3 years down the line, and I can see the utilization never crosses 70%.

So if I use NPIV, my users can allocate storage to the VMs based on their current needs, not their forecasted needs.

Could you please explain what you mean by "If you set up the NPIV at the host and not the guest this makes things much simpler"? I did not understand.

Are you referring to configuring the Brocade switch in AG mode, or something else? Please explain.

Thanks.

email_to_shashi
Contributor

Hi binoche,

After reading your answer, I deleted my entire storage and SAN configuration, and redid the zoning and LUN masking.

Now it looks like the VM is using its WWNs.

Thank you.

binoche
VMware Employee

vSphere VMFS has a thin provisioning feature; maybe it will not waste your storage.
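
If it helps, a thin-provisioned virtual disk can be created from the ESX service console with vmkfstools. This is a sketch only, and the datastore path and disk name below are made-up examples:

```shell
# Run on the ESX host; "datastore1" and "myvm" are hypothetical names.
# Creates a 40 GB virtual disk that only consumes space as it is written.
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/myvm/myvm_thin.vmdk
```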

So far vSphere only supports NPIV at the host, not at the guest level, and NPIV cannot bring you much benefit, only complicated fabric configurations.

binoche, VMware VCP, Cisco CCNA

adamy
Enthusiast

The main benefits are fewer cables and fewer fabric ports used.

He is using HP c-Class blades, and if you are using the Virtual Connect SAN modules it works very well.

I have four 8 Gb FC ports running 32 hosts in my new farms.

The performance is fantastic. We are using PowerPath/VE to make sure we are using all 32 Gb of FC bandwidth.

I am very pleased with the configuration.

binoche
VMware Employee

Thanks, Adamy, for sharing this info.

On vSphere, even if NPIV stays disabled, you still use fewer FC cables and fewer FC ports.

email_to_shashi
Contributor

"NPIV can not bring you much benefit, only complicated fabric configurations"

That's where all my confusion is.

I have not found any helpful documents from vendors like HP and others that are positive about NPIV.

I can only see HBA vendors and SAN switch vendors recommending it.

binoche
VMware Employee

Correct, but only on vSphere; vSphere still has the limitation that the ESX host itself must also have access to the NPIV LUNs.

binoche, VMware VCP, Cisco CCNA

adamy
Enthusiast

On the c-Class chassis using the SAN modules, the setup is pretty straightforward.

You mask the LUNs the same way you did before. The difference is that you do not need a fiber cable per HBA.

I have a doc somewhere on the setup for the c-Class chassis. Give me some time and hopefully I can find it.

zeml
Contributor

Hello!

I need to set up NPIV because I want to place the Veeam backup server in a virtual machine and run backup jobs only through the SAN, not over Ethernet.

I'll present to this VM all the datastores where the target VMs are placed, and I'll also present a large LUN for proxy functions, to store the backup files. From this LUN, HP Data Protector will take the files and put them on tape.

I did everything following this document http://www.brocade.com/downloads/documents/white_papers/white_papers_partners/NPIV_ESX4_0_GA-TB-145-...

But there are problems with NPIV - I don't see the vport in the Service Console after running the command cat /proc/scsi/qla2xxx/1

After creating the RDM on the VM and generating the virtual WWNs, should I see the WWPNs automatically discovered in the SAN fabric, or do I have to add them manually?

RParker
Immortal

I need to set up NPIV because I want to place the Veeam backup server in a virtual machine and run backup jobs only through the SAN, not over Ethernet.

Veeam and Vizioncore do NOT support NPIV in a VM for backup. You might want to call them, but I am pretty sure this is NOT a supported configuration.

But there are problems with NPIV - I don't see the vport in the Service Console after running the command cat /proc/scsi/qla2xxx/1

You won't see the port in the console. NPIV does not actually create a "virtual" port; it allows VMs to pass through the host switch configuration so that the VMs can access storage directly via an NPIV port.

You won't see any new hardware added to a VM, nor will you see any changes on the ESX host. It's ALL done at the switch level. So rather than setting up a datastore on the host, you have direct access inside the VM. That's basically ALL it does.

zeml
Contributor

Why do you think Veeam doesn't support NPIV? Why can't I mount a LUN to my Veeam VM through NPIV, format it as NTFS, and back up my VMs there?

Gostev
Enthusiast

Yep, this should work with Veeam just fine.

0 Kudos