VMware Cloud Community
grp
Contributor

Deploy ESXi 5 with HP P2000 G3

Hello all.

I am trying to deploy ESXi on our new equipment, which is:

HP DL360 G7

HP P2000 G3 iSCSI 10GB

HP NC522SFP iSCSI adapter

I haven't been able to connect ESXi to the array so far. It seems possible only when I add a 'software iSCSI adapter'. Is this necessary or not? Here are my steps so far:

1. Added new vSwitch with VMkernel connection type

2. Included the two data paths of my iSCSI card (listed as vmnic4 and vmnic5)

3. Went to 'Storage Adapters'. Strangely, I see four devices (vmhba33-vmhba36); I would expect two, if I'm not mistaken.

4. Went to the properties of each one.

5. Went to 'Network Configuration'.

6. Added port binding.

At this point I do not see my storage interfaces.

If I repeat this procedure after adding software iSCSI, and try to add bindings to it, I see my 10GB interfaces.
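
For reference, I did the vSwitch/VMkernel part through the vSphere Client, but I believe the ESXi shell equivalent would be roughly the following (the vSwitch/port group names and the IP address are just placeholders from my setup):

# vSwitch with the two 10Gb uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5
# one VMkernel port for iSCSI (a second one, vmk2/iSCSI-2, would be created the same way)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.21 --netmask=255.255.255.0 --type=static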

Sorry if I'm not being clear enough. I will be happy to provide more details. In general, if there is a VMware document explaining how to deploy this in my case, I would appreciate a link.

Also, I saw that, even though ESXi seems to recognize the adapter, HP provides another driver. I tried installing this driver and saw no difference; under storage adapters it is still identified as a 'Broadcom iSCSI adapter'.

Thanks

Accepted Solutions
Josh26
Virtuoso

grp wrote:

I see. Strange though, since this card was recommended/supplied by HP itself. It seems like a low-end solution (although the price was not very low-end....), the way you describe it. So, does having this card mean we will face a performance reduction (when compared with an HBA)? In plain words, does this mean our ESXi host will run its VMs more slowly, since it will also have to handle iSCSI traffic?

Not at all.

Software iSCSI, in ESXi, has been shown in numerous places to be as fast as, if not faster than, using iSCSI offload in hardware cards.

Install the software adapter - it is the correct and generally recommended way to provision this server.

23 Replies
zXi_Gamer
Virtuoso

Are you trying with the ISO provided by VMware or the ISO provided by HP? Somewhere earlier in the forums I came across a thread where the HP ISO recognises the adapter and provides an iSCSI driver for the card by default, while the VMware ISO provides only a network card driver.

grp
Contributor

I have tried both images; the behaviour seems the same. Generally speaking, I am not sure what the best practice is for connecting my storage in my case. As I said, I have only one controller interface (with two iSCSI ports) and two iSCSI cables. My iSCSI card on the server also has two ports, so I was wondering if any kind of fault tolerance can be achieved.

I was able to connect to my storage, and even create VMs. The only thing is that I had to add 'Software iSCSI'. I found a document mentioning that this has to be done. I just want the community's opinion as to whether I am moving in the right direction...
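
For what it's worth, here is roughly how I have been checking whether the LUN is visible over more than one path from the ESXi shell; the naa identifier below is just a placeholder, and I have not changed the path policy yet (the Round Robin line is only what I understand could be done, please correct me if the P2000 needs something else):

# show every path to every device, and the current path selection policy
esxcli storage core path list
esxcli storage nmp device list
# optionally switch a device to Round Robin across both paths
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR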

grp
Contributor

Anyone?

grp
Contributor

Still haven't found a workaround. I would appreciate some response.

peetz
Leadership

I'm confused. The NC522SFP is not an iSCSI adapter, but a standard 10GbE adapter.

The four iSCSI adapters that you are seeing are probably the iSCSI personalities of the built-in 1GbE Broadcom NICs.

That means if you want to use iSCSI over 10Gb then the software iSCSI adapter is your only choice.
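
You can check this from the ESXi shell; something like the following should list the vmhba33-36 entries with the Broadcom (bnx2i) driver, and later the software iSCSI adapter once you enable it:

# lists all iSCSI-capable adapters with their drivers and descriptions
esxcli iscsi adapter list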

- Andreas

Twitter: @VFrontDe, @ESXiPatches | https://esxi-patches.v-front.de | https://vibsdepot.v-front.de
grp
Contributor

Thank you for your reply. Indeed, since the initial post I have been able to work out that the 4 ports are the Ethernet ports. However, I did not make the distinction between iSCSI and 10GbE until you mentioned it. What I did know was that our storage array (P2000 G3) has two iSCSI ports on the rear, and from those two iSCSI ports there are two SFP cables going to the NC522SFP. So I guessed the adapter could be referred to as an 'iSCSI' adapter. What is the difference if we call it a 10GbE adapter? Please clarify.

In any case, and based on your post, what is the recommended way to connect this array with this card? I think that's the main issue. Also based on your post: Is there a scenario where I could use 10Gb speed without using iSCSI?

I am confused too. Now more than before :)

sparrowangelste
Virtuoso

grp wrote:

Thank you for your reply. Indeed, since the initial post I have been able to work out that the 4 ports are the Ethernet ports. However, I did not make the distinction between iSCSI and 10GbE until you mentioned it. What I did know was that our storage array (P2000 G3) has two iSCSI ports on the rear, and from those two iSCSI ports there are two SFP cables going to the NC522SFP. So I guessed the adapter could be referred to as an 'iSCSI' adapter. What is the difference if we call it a 10GbE adapter? Please clarify.

In any case, and based on your post, what is the recommended way to connect this array with this card? I think that's the main issue. Also based on your post: Is there a scenario where I could use 10Gb speed without using iSCSI?

I am confused too. Now more than before :)

So your card isn't an HBA, where the iSCSI traffic is offloaded to the card, but an Ethernet card, so the processing has to be done by the system - in this case the ESXi host.

That is the difference between an iSCSI adapter (HBA) and a software iSCSI adapter, aka your 10GbE NIC.

Just connect via software iSCSI and call it a day.

It works fine.

--------------------- Sparrowangelstechnology : Vmware lover http://sparrowangelstechnology.blogspot.com
Mogicrz
Contributor

I will assume that you have an iSCSI P2000, which will be AW596A (LFF) or AW597A (SFF). It doesn't matter which one, since the controllers are the same. The only way you can connect to it is using iSCSI. You cannot connect using SAS or FC (unless you have the combo controller versions AW567A or AW568A).

If you have an HP DL360 G7 you do have (not may have) two HP NC382i controllers with 2 RJ45 ports each, 4 ports in total. On top of that, you said you have an HP NC522SFP 10Gb card with two SFP ports. Using DAC you may connect it over copper, or use SR, LR or LRM for fiber. In your case you are using fiber because the P2000 controller is fiber.

Your Host > Configuration > Network Adapters will show 6 NIC adapters, from vmnic0 up to vmnic5. Figure out which ones are 10Gb and use those to create your iSCSI storage adapters.
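
If it helps, you can also check this from the ESXi shell; the Speed column of the NIC list shows which vmnics are the 10Gb ports (output will vary per host):

# the NC522SFP ports should show up at 10000 Mbps in the Speed column
esxcli network nic list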

grp
Contributor

sparrowangelstechnology wrote:

So your card isn't an HBA, where the iSCSI traffic is offloaded to the card, but an Ethernet card, so the processing has to be done by the system - in this case the ESXi host.

That is the difference between an iSCSI adapter (HBA) and a software iSCSI adapter, aka your 10GbE NIC.

Just connect via software iSCSI and call it a day.

It works fine.

I see. Strange though, since this card was recommended/supplied by HP itself. It seems like a low-end solution (although the price was not very low-end....), the way you describe it. So, does having this card mean we will face a performance reduction (when compared with an HBA)? In plain words, does this mean our ESXi host will run its VMs more slowly, since it will also have to handle iSCSI traffic?

sparrowangelste
Virtuoso

grp wrote:

I see. Strange though, since this card was recommended/supplied by HP itself. It seems like a low-end solution (although the price was not very low-end....), the way you describe it. So, does having this card mean we will face a performance reduction (when compared with an HBA)? In plain words, does this mean our ESXi host will run its VMs more slowly, since it will also have to handle iSCSI traffic?

While it might have been a concern when iSCSI first came out, I think processing power has evolved far enough that the actual difference is negligible.

--------------------- Sparrowangelstechnology : Vmware lover http://sparrowangelstechnology.blogspot.com
Mogicrz
Contributor

With the correct hardware, you can always use DirectPath I/O to bypass the hypervisor and get the maximum throughput the hardware can achieve.

Josh26
Virtuoso

grp wrote:

I see. Strange though, since this card was recommended/supplied by HP itself. It seems like a low-end solution (although the price was not very low-end....), the way you describe it. So, does having this card mean we will face a performance reduction (when compared with an HBA)? In plain words, does this mean our ESXi host will run its VMs more slowly, since it will also have to handle iSCSI traffic?

Not at all.

Software iSCSI, in ESXi, has been shown in numerous places to be as fast as, if not faster than, using iSCSI offload in hardware cards.

Install the software adapter - it is the correct and generally recommended way to provision this server.
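
A minimal sketch of doing that from the ESXi shell, in case you prefer it over the vSphere Client's Storage Adapters > Add dialog:

# enable the software iSCSI initiator and confirm it is active
esxcli iscsi software set --enabled=true
esxcli iscsi software get

The new vmhba for it will then show up in the Storage Adapters list alongside the Broadcom entries.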

grp
Contributor

Thank you all. I have one more question.

Is it a better approach to provision my storage as one large disk (so that ESXi sees only one large datastore), or is it better practice to divide my storage into smaller LUNs, one for each guest?

The second approach, I think, is not practical because we can't know beforehand how many guests we will create. However, creating only one large datastore makes me wonder about the following:

1. We will be running mainly CentOS and Windows guests. If I create one VM with a 2TB size (our total will be 3TB), is this space pre-allocated?

2. Can I resize the disks via ESXi without having to do anything inside the guest OS?

As you can understand, if space is not pre-allocated and I can resize the disk, that would be preferable, because I wouldn't have to create such a large disk from the beginning and could resize it in the future as needed.

Any comments/suggestions would be nice.

Thanks

Mogicrz
Contributor

I do not know exactly what your environment is, but if you follow the lines below, you'll get the most from your hardware:

1 - On the storage side, create the largest possible volume.

2 - On the VMs' side, no matter whether CentOS or Windows, create the initial volume sized to accommodate all your data, with enough headroom to absorb any unexpected growth.

3 - Use THIN provisioning when creating disks and grow them as needed. If you enlarge a VM's disk size, you will also need to enlarge the disk inside the VM's OS. I don't know CentOS, but with Windows it is extremely easy to do that.
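
If you ever need to do it from the ESXi shell instead of the vSphere Client, a rough sketch with vmkfstools would be something like this (paths and sizes are only examples):

# create a 100GB thin-provisioned virtual disk
vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/myvm/myvm_1.vmdk
# later, grow the same disk to 150GB; the partition inside the guest OS still has to be extended afterwards
vmkfstools -X 150G /vmfs/volumes/datastore1/myvm/myvm_1.vmdk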

grp
Contributor

I am not sure what you mean by 'thin provisioning' in this case. Could you explain further or recommend some resource on this?

In general, I have done the following:

1. Created 1 vSwitch

2. Created 2 VMkernel ports, one for each port of the storage controller.

3. Performed dynamic discovery. I can see my storage.

However I did not create VMKernel Port Bindings. Are these necessary or not?

Also, in order to enable Jumbo frames, I increased MTU to 9000 under vSwitch. Do I have to do this on VMKernels as well?
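
In case it matters, this is what I understand the port binding and dynamic discovery would look like from the ESXi shell; the vmhba/vmk numbers and the target address are placeholders from my setup:

# bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk2
# dynamic discovery against one of the P2000 iSCSI ports, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba37 --address=10.0.0.10:3260
esxcli storage core adapter rescan --adapter=vmhba37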

Mogicrz
Contributor

After all is done, when you are creating your VMs there is an option under "Create a Disk" (where you decide the size of the disk), and below it you will see 3 options: Thick Provision Lazy Zeroed, Thick Provision Eager Zeroed and Thin Provision. Choose Thin.

With both Thick options, when you choose, for example, 100GB, it will create and USE 100GB on your storage. After you set up your OS with, let's say, 6GB, the used space will still be 100GB. On the other hand, with Thin, when you create 100GB it will only create about 1GB and RESERVE 100GB. After you set up your OS with the same 6GB, the used space will be somewhere between 7 and 8GB on your storage. What does that mean? At the start, with 1TB of storage you may set up, for example, 4 VMs with 500GB each (watch the real used space later!).

I always create a VMkernel port for the storage with an exclusive IP address, and add VLANs, vMotion, etc. as needed. If you increase the MTU to 9000 on one connection, it makes sense to increase it on the other one as well.
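
A quick sketch of setting both MTUs from the ESXi shell (the vSwitch and vmk names are examples):

# jumbo frames on the vSwitch and on each iSCSI VMkernel interface
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

Remember the physical switch ports and the P2000 iSCSI ports must also allow jumbo frames end to end.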

grp
Contributor

Thank you all for your replies. One more thing. Since Mogicrz recommended it, I have noticed that my system supports DirectPath, so I was thinking about enabling it. I would like to ask: is a simple step like enabling it under Advanced Settings for my two 10GbE interfaces enough to get it working properly, or do I have to do other things as well? For example, are all the vmx files automatically re-configured by ESXi to support passthrough?

Mogicrz
Contributor

One thing you need to do for sure is enable NIC teaming on your switch for load balancing, and under storage you may also enable Storage I/O Control.

VMware DirectPath I/O is a technology, available from vSphere 4.0 and higher that leverages hardware support (Intel VT-d and AMD-Vi) to allow guests to directly access hardware devices. In the case of networking, a VM with DirectPath I/O can directly access the physical NIC instead of using an emulated (vlance, e1000) or a para-virtualized (vmxnet, vmxnet3) device. While both para-virtualized devices and DirectPath I/O can sustain high throughput (beyond 10Gbps), DirectPath I/O can additionally save CPU cycles in workloads with very high packet count per second (say > 50k/sec). However, DirectPath I/O does not support many features such as physical NIC sharing, memory overcommit, vMotion and Network I/O Control. Hence, VMware recommends using DirectPath I/O only for workloads with very high packet rates, where CPU savings from DirectPath I/O may be needed to achieve desired performance.
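
As a side note, if you do experiment with DirectPath, the passthrough toggle itself is in the vSphere Client under Configuration > Advanced Settings (Hardware), but you can identify the candidate devices from the shell first:

# list PCI devices to find the 10GbE NICs that could be passed through
esxcli hardware pci list

Each VM that uses the device then needs the PCI device added to its hardware and, as far as I remember, a full memory reservation, so it is not just a host-side switch.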

grp
Contributor

As for your first point, I am now really confused. When I created the structure in the first place, the documentation recommended that I create two VMkernel ports and disable NIC teaming, so that is what I did. In any case, I find it easier to attach some screenshots here to provide a clear picture of what I've done. Please have a look and let me know if I did it all right. About Storage I/O Control, I am not sure what this is. Does this have to do with data paths?

Also, about DirectPath: first of all, in our case it is unlikely that we will upgrade to a full vSphere setup. We will have just standalone ESXi hypervisors, due to lack of licensing, but also because we don't really need it. Our infrastructure at its peak will have 2 or 3 ESXi hosts, each running 20 or so VMs. Our current hardware has 16GB of memory and 3TB of storage space (see my initial post), and currently runs 8 VMs. In that respect, I really don't think we have excessive demands as far as performance is concerned. Our VMs are active enough, but not so active that they are having performance issues. Our busiest VM is a mail server with about 1500 users, each with a 1GB mailbox.

On the other hand, I wouldn't want to find out that we are not taking full advantage of our hardware's capabilities, even if we don't strictly need to. To put it simply: I am not SURE whether we need DirectPath. That's what I need to know in the first place. Sorry for being so uncertain.
