VMware Cloud Community
Paul_B1
Hot Shot
Jump to solution

256 Max Lun Question

I did some quick searching, but didn't find a good answer. I was just curious what the technical reason is behind the limit of 256 for the "maximum LUN ID". Anyone care to shine some light on it?

0 Kudos
1 Solution

Accepted Solutions
kharbin
Commander
Jump to solution

The SAN's LUN ID and the vmkernel's internal LUN ID are two different things and should not be confused. At the kernel level each LUN ID is a pointer to a physical LUN, which on the SAN side can have a different ID. But the end result is you have a maximum of 256 internal pointers to the LUNs.

This is true for just about every OS on the planet. It's why QLogic cards, when you enter the BIOS and scan the LUNs, only show a maximum of 256. This is just something that is at the core of computing, nothing VMware specific.

View solution in original post

0 Kudos
15 Replies
sbeaver
Leadership
Jump to solution

Because that is the maximum number of LUNs that ESX can use.

Steve Beaver
VMware Communities User Moderator
VMware vExpert 2009 - 2020
VMware NSX vExpert - 2019 - 2020
====
Co-Author of "VMware ESX Essentials in the Virtual Data Center"
(ISBN:1420070274) from Auerbach
Come check out my blog: http://www.virtualizationpractice.com/blog/
Come follow me on twitter http://www.twitter.com/sbeaver

**The Cloud is a journey, not a project.**
0 Kudos
kharbin
Commander
Jump to solution

Probably because from the early days of SCSI right up to today's HBAs, the LUN ID has always been a two-hex-digit (one-byte) value, 00-FF, for a total of 256.
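
If it helps, here's a tiny Python sketch of that arithmetic (illustrative only, nothing VMware- or HBA-specific): the ID is one byte, so only 256 values exist.

# Illustration only: a LUN ID expressed as a single byte (two hex digits).
MAX_LUN_ID = 0xFF          # 255 decimal

for requested in (0, 233, 255, 256, 342):
    fits = requested <= MAX_LUN_ID
    print(f"LUN ID {requested:>3} -> {'representable' if fits else 'does not fit in one byte'}")

# 0..255 are representable, i.e. 256 possible IDs; 256 and 342 are not.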

Ken H

www.esXpress.com

Paul_B1
Hot Shot
Jump to solution

Heh.. I know that's the most it can use, the question was why :)

The hex max seems to make sense, so is this the same limit across all OSes?

0 Kudos
JonT
Enthusiast
Jump to solution

Yes, it should be true for all Operating Systems.

0 Kudos
Paul_B1
Hot Shot
Jump to solution

So with this logic, no storage system can assign a LUN ID of greater than 256? This all comes about from the fact that I constantly tell my storage guys to make sure the LUN ID is under 256, and they always ask why and what would happen if it wasn't. But if the Symmetrix will NEVER assign an ID to a LUN over 256, then this shouldn't be a problem, and I'll stop reminding them every time :)

0 Kudos
kharbin
Commander
Jump to solution

The SAN's LUN ID and the vmkernel's internal LUN ID are two different things and should not be confused. At the kernel level each LUN ID is a pointer to a physical LUN, which on the SAN side can have a different ID. But the end result is you have a maximum of 256 internal pointers to the LUNs.

This is true for just about every OS on the planet. It's why QLogic cards, when you enter the BIOS and scan the LUNs, only show a maximum of 256. This is just something that is at the core of computing, nothing VMware specific.
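
To picture the idea, here's a rough Python sketch of a fixed 256-slot pointer table; the names and structure are made up for illustration, not the actual vmkernel internals.

# Illustrative only: a fixed table of 256 slots, each pointing at whatever
# physical LUN the array presented, regardless of the array-side LUN number.
NUM_SLOTS = 256

lun_table = [None] * NUM_SLOTS          # internal IDs 0..255

def attach(internal_id, array_side_id, wwn):
    # Record that internal slot 'internal_id' points at an array LUN.
    if not 0 <= internal_id < NUM_SLOTS:
        raise ValueError("internal LUN ID must be 0..255")
    lun_table[internal_id] = {"array_lun": array_side_id, "wwn": wwn}

attach(3, 342, "made-up-wwn")           # array-side ID can be anything; the slot cannot
print(lun_table[3])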

0 Kudos
Paul_B1
Hot Shot
Jump to solution

Ahhhh.. ok it's starting to make more sense to me.

So if the SAN guys assign a new LUN to a host and on their end its ID is something like 342, I should still be able to see it as long as I don't have more than 256 LUNs already assigned to the host?

When I look at my LUNs in VC for a host that has 4 total LUNs assigned and I see:

vmhba1:0:233:1

I don't need to worry about the "233" being over 256? It can be any number as long as I don't have 256 LUNs assigned?

Sorry for the game of 20 questions, I'm just not a storage guy and have never really understood this. Trying to understand how the SAN works with VMware when I don't get to actually touch and play with it is difficult for me :)

0 Kudos
BUGCHK
Commander
Jump to solution

Probably because from the early days of SCSI right up to today's HBAs, the LUN ID has always been a two-hex-digit (one-byte) value, 00-FF, for a total of 256.

In early SCSI standards using a 6-byte CDB, the LUN address was limited to 3 bits, giving addresses 0..7. This is not to be confused with the max. target address in narrow SCSI. Cheap devices did not deal with LUN addresses > 0 correctly; a common symptom was to see 'ghost devices'.

Then it was limited to 0..31.

Fibre Channel FCP allows different 'addressing modes'.

This is encoded in a 16-bit field where 2 bits define the 'addressing mode'.

Most devices use "peripheral device addressing", which allows for a 6-bit bus address and an 8-bit target/LUN address. As far as I can tell, the bus field is not used by the drivers, which leaves you with 256 LUN addresses (0..255).

The HP-UX operating system uses "volume set addressing" (VSA), which provides a 14-bit LUN address space (16384 LUNs). If you know HP-UX, you see that this is then mapped into its silly SCSI-2 layer which breaks everything up into controller/target/LUN addresses. And that LUN address is limited to 0..7 again.

But I digress...
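
For the curious, here's a rough Python sketch of decoding that first 16-bit level of an FCP LUN field, based on my reading of the addressing modes above (illustrative only, not a reference implementation).

# Illustrative decode of the first two bytes of an FCP LUN field.
# The top 2 bits select the addressing mode; the rest is read per mode.
def decode_first_level(word16):
    mode = (word16 >> 14) & 0b11
    if mode == 0b00:                       # peripheral device addressing
        bus = (word16 >> 8) & 0x3F
        lun = word16 & 0xFF                # only 256 LUN addresses
        return f"peripheral: bus={bus} lun={lun}"
    if mode == 0b01:                       # flat space / volume set addressing
        lun = word16 & 0x3FFF              # 14-bit LUN space, 16384 LUNs
        return f"volume set: lun={lun}"
    return f"mode {mode:02b}: not decoded here"

print(decode_first_level(0x00E9))          # peripheral, LUN 233
print(decode_first_level(0x4156))          # volume set, LUN 342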

gogogo5
Hot Shot
Jump to solution

I don't need to worry about the "233" being over 256? It can be any number as long as I don't have 256 LUNs assigned?

For you to see vmhba1:0:233:1 would indicate that LUNs 1-232 had already been added to the ESX host. We use an EMC SAN; we added 6 new LUNs whose LUN IDs on the EMC side were 116, 117, 118, etc., but when we rescanned and added the new LUNs on the ESX side they were reported as LUNs 3-8 (since we already had LUNs 1 and 2).

0 Kudos
BUGCHK
Commander
Jump to solution

For you to see vmhba1:0:233:1 would indicate that LUNs 1-232 had already been added to the ESX host.

That depends on the storage array. On an HP EVA I can map a single virtual disk to any possible LUN address (1..255) and leave a gap. I can also map multiple vdisks and leave even more gaps, e.g. 2, 7, 139.

No, I don't claim that it makes sense ;)

0 Kudos
gogogo5
Hot Shot
Jump to solution

Interesting, so one would need to take into account the storage array used too.

To clarify, when you use an HP EVA, can you assign any LUN ID but only IDs between 1-256, or can you assign any LUN ID so long as the number of "instances" of LUN IDs is <= 256?

0 Kudos
whynotq
Commander
Jump to solution

You can perform the same "host to LUN" mapping within EMC storage too.

It has a strong use in Unix environments where admins want devices in a specific order. Say you have 10 hosts, each with 6 SAN LUNs: if you presented them sequentially, the last host could in theory end up with a device ID of, say, "c1t0d60". But if you want all the hosts to look the same, you can say within the storage group on a CLARiiON, for example, that LUN 60 is going to be presented to the host as LUN 5 (the default way CLARiiONs work is that the first LUN = 0, regardless of ID).

So in our example we could have 10 hosts connected to the SAN, each seeing 6 LUNs, and each system having the same device ID string:

c1t0d0, c1t0d1, c1t0d2, c1t0d3, c1t0d4, c1t0d5

This gives admins control for scripting and tracking purposes. But you don't have to do it like this.
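
As a toy illustration of that storage-group remapping, something like this in Python (made-up names, not Navisphere or any real array API):

# Illustrative only: array-side LUN numbers remapped per host so every
# host enumerates the same small, contiguous host-side LUN numbers.
storage_group = {
    "host01": {60: 5, 12: 0, 13: 1},   # array LUN -> host LUN
    "host02": {61: 5, 14: 0, 15: 1},
}

def host_view(host):
    # What the host's OS would enumerate, e.g. c1t0d0 .. c1t0dN
    return sorted(storage_group[host].values())

print(host_view("host01"))   # [0, 1, 5]
print(host_view("host02"))   # [0, 1, 5] as well, despite different array LUNs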

0 Kudos
BUGCHK
Commander
Jump to solution

gogogo5:

The EVA uses "virtual disks" (the internal storage container which is unfortunately named "LUN" on some arrays). They have a unique 128-bit internal UUID (Universally Unique Identifier) and a "name" which the user deals with.

The "name" is mapped to a LUN address between 1 and 255, but each host has its own individual LUN address space. This can lead to errors, because you can map a virtual disk on different LUN addresses for different hosts.

This flexibility is required because the EVA can deal with up to 1024 virtual disks, which obviously do not fit in a 255-LUN address space. The user interface does provide some mechanisms (e.g. folders) to make sure a virtual disk is mapped to the same LUN address on all hosts it is presented to.

LUN address 0 is occupied by a special controller LUN. It is used for in-band management via Fibre Channel and for SCSI-3 "REPORT LUNS" lookups. You sometimes see this when Linux complains about a 'type 12' SCSI device.

Mar 30 09:33:06 esxc kernel: scsi: unknown type 12

Mar 30 09:33:06 esxc kernel: resize_dma_pool: unknown device type 12

whynotq:

Thanks for the info.

0 Kudos
gogogo5
Hot Shot
Jump to solution

Thanks for the detail.

0 Kudos
titaniumlegs
Enthusiast
Jump to solution

342 won't work as a LUN ID. Max # LUNs is 256, and the max ID is 255 (since they start at 0). Please see http://www.vmware.com/pdf/vi3_301_201_config_max.pdf at the bottom of page 2.

On a NetApp, I provisioned a couple of LUNs at high IDs:

homer> lun show -m

LUN path                    Mapped to      LUN ID   Protocol
------------------------------------------------------------
/vol/clone1/vmfstest2       vm-qla            233   iSCSI
/vol/clone2/vmfstest2       vm-qla            257   iSCSI
/vol/vmfstest/vmfstest      vm-qla              0   iSCSI
/vol/vmfstest2/vmfstest2    esx-dr-iscsi        1   iSCSI
                            vm-qla              1   iSCSI

Then rescan...

Here's some CLI output, since I can't quite see how to stick a jpg in a post...

[root@esx-dr vmfstest2]# esxcfg-mpath -l

Disk vmhba0:0:0 /dev/cciss/c0d0 (69459MB) has 1 paths and policy of Fixed

Local 0:1.0 vmhba0:0:0 On active preferred

Disk vmhba1:5:0 /dev/sda (81926MB) has 2 paths and policy of Fixed

iScsi 7:3.1 iqn.2000-04.com.qlogic:qla4052c.gs10629a15160.1<->iqn.1992-08.com.netapp:sn.101168561 vmhba1:5:0 On active preferred

iScsi 7:3.3 iqn.2000-04.com.qlogic:qla4052c.gs10629a15160.2<->iqn.1992-08.com.netapp:sn.101168561 vmhba2:2:0 On

Disk vmhba1:5:1 /dev/sdb (18432MB) has 3 paths and policy of Fixed

iScsi 7:3.1 iqn.2000-04.com.qlogic:qla4052c.gs10629a15160.1<->iqn.1992-08.com.netapp:sn.101168561 vmhba1:5:1 On active preferred

iScsi 7:3.3 iqn.2000-04.com.qlogic:qla4052c.gs10629a15160.2<->iqn.1992-08.com.netapp:sn.101168561 vmhba2:2:1 On

iScsi sw iqn.1998-01.com.vmware:esx-dr-746e08c3<->iqn.1992-08.com.netapp:sn.101168561 vmhba40:0:1 On

Disk vmhba1:5:233 /dev/sdc (18432MB) has 1 paths and policy of Fixed

iScsi 7:3.1 iqn.2000-04.com.qlogic:qla4052c.gs10629a15160.1<->iqn.1992-08.com.netapp:sn.101168561 vmhba1:5:233 On active preferred

Note the LUN with ID 233 shows up but 257 does not.
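
If you wanted to sanity-check a mapping list like the one above before rescanning, here's a quick Python sketch of the 0-255 cutoff (the paths and IDs are just the example data from the lun show output):

# LUN IDs over 255 cannot be addressed in the host's one-byte LUN range,
# so ESX never sees them on rescan.
mapped = {
    "/vol/clone1/vmfstest2": 233,
    "/vol/clone2/vmfstest2": 257,
    "/vol/vmfstest/vmfstest": 0,
}

for path, lun_id in mapped.items():
    status = "visible after rescan" if lun_id <= 255 else "outside 0-255, will not appear"
    print(f"{path}: LUN {lun_id} -> {status}")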

If you want to "play with SAN" and you don't have one, see if your friendly neighborhood NetApp SE will hook you up with the NetApp Simulator. You can't make very big LUNs, get much performance, or even do FCP SAN on a sim, but it does iSCSI very nicely, and you can play with this just like I did.

Share and enjoy!


Share and enjoy! Peter. If this helped you, please award points! Or beer. Or jump tickets.
0 Kudos