VMware Cloud Community
MaldonadoW
Contributor

ESXi 5.1 - ScsiDeviceIO: 6356 Could not detect setting for sitpua for device naa.xxxxx

So I Googled this and there seems to be very little information out there, but I am hoping someone might have the answer. Most people just say "check the HCL" or "update your drivers", but those are pre-packaged responses; I am hoping someone has actually seen this problem and resolved it.

I am basically running ESXi 5.1 on an HP ProLiant DL380 G6, and when I booted, ESXi just hung on "Deltadisk loaded successfully".

Then I hit Alt+F12 and saw an error stating:

  • ScsiDeviceIO: 6356 Could not detect setting for sitpua for device naa.XXXXX

There is a long number sequence at the end. This machine worked with 5.0; I only started seeing this after I upgraded to 5.1.
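The same message also lands in the vmkernel log, so (assuming SSH access to the host) you can search for it there instead of watching the Alt+F12 console. A minimal sketch: the log line below is a fabricated stand-in, and on a real ESXi 5.x host you would grep /var/log/vmkernel.log directly.

```shell
# Illustrative only: 'line' is a made-up sample entry. On a real host you would run:
#   grep -i sitpua /var/log/vmkernel.log
line='ScsiDeviceIO: 6356: Could not detect setting of sitpua for device naa.600508b1001030394330313233000000'
echo "$line" | grep -o 'naa\.[0-9a-f]*'   # pulls out the device identifier from the message
```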

1 Solution

Accepted Solutions
DonChino
Enthusiast

Buddy, you are in luck: I had the EXACT same problem on my HP ProLiant DL380 G6 a few weeks ago, and I also Googled around but no one actually had a solution, so I did the hard work for you.

  • I ran esxcli and tried updating the HP Smart Array controller driver in ESXi 5.1
    • Why? Some "experts" online claim you might not be using the HP ESXi image, or that your install is missing drivers. Guess what? This does not work: the drivers HP provides are for ESXi 5.0, and when you try it the installer reports that nothing changed, because it will not install 5.0 drivers onto 5.1.
    • Also, you cannot use vihostupdate.pl, since ESXi 5.1 does not support it, so you have to use SSH and esxcli...

  • Last but not least, people tell you to check the HCL, but this is an HP PROLIANT SERVER!
    • Come on, VMware partners with HP; they are not going to leave a G6 unsupported, right?
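For reference, the driver-update attempt above boils down to a couple of esxcli commands over SSH. This is a sketch only: the datastore path and bundle filename are placeholders, and as noted, a 5.0 bundle will simply refuse to apply on a 5.1 host.

```shell
# Run on the ESXi host over SSH. Path and bundle filename are placeholders.
esxcli software vib list | grep -i hpsa        # check which Smart Array (hpsa) driver VIB is installed
esxcli software vib install -d /vmfs/volumes/datastore1/hp-smart-array-bundle.zip
# On 5.1 the 5.0 VIBs are skipped, which is the "NOTHING CHANGED" result described above.
```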

Anyway, none of the above worked, but I did read an interesting footnote here:

http://cormachogan.com/2012/12/17/could-not-detect-setting-of-sitpua-for-device-naa-xxx-error-not-su...

Basically, this "expert" does not give you a solution, but the bit about "thin provisioning" rang a bell, since I always thin provision my VMs, so I traced back everything I had done before the error appeared.

  • I had run firmware updates
  • I installed a BBWC card with battery
  • I shrank a RAID 10 from 8 disks to 6 disks
  • I popped in 4 old SAS disks from 2009 while removing the newer 2012 models, leaving only 2 of the original 8

Logically you would suspect the firmware updates and maybe consider rolling them back, but my hunch was the disks. How so? Well, it turns out some of those old disks still carried OLD RAID information. They had probably been pulled from an existing RAID, and even though the ACU removes any old logical mapping when you create a new one, there is apparently a very remote chance that moving in a disk with stale RAID metadata will produce this error.

It seems crazy, but I proved it in my tests. The sequence went:

  • Create a RAID 10, reboot: no ESXi
  • Boot into the ACU, remove the RAID 10, reboot
  • Boot into the ACU, create a RAID 10, reboot: still no ESXi
  • Reboot, boot into the ACU, remove the RAID 10, shutdown

Now here is the trick: I moved the disks around (just try any random ordering) and rebooted.

I had removed the RAID 10, but guess what? Now I saw a RAID 5. WTF?!

Yep, by moving the disks around I had surfaced a logical RAID 5 setup, probably left over from some old configuration. The error made no sense, and I wasted 2 days on all these other updates only to find the culprit was a tricky disk. Who knows exactly why that errant disk was tripping up ESXi, but I wiped the RAID 5 in the ACU and made a note not to add any new disks, sticking to the same 6.

Reboot, boot into the ACU, create the RAID 10, shutdown, pop in the ESXi flash drive, reboot.

Voila! ESXi 5.1 is humming along. Pretty stupid, and your mileage may vary (you might have to move things around more than once); I got lucky and found that damn RAID 5 setup on my first disk swap. The better way is simply to make sure you wipe your disks before you put them into storage.

Remember: format/blank/reset your disks outside the ACU. I hope you enjoyed my story; good luck, and hopefully this will help someone else. Now let the Google spiders index this baby...
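A sketch of what "blank your disks outside the ACU" can look like from a live Linux environment: zero the raw device so stale controller metadata cannot resurface (Smart Array controllers keep RAID metadata in reserved sectors on the disk, so wiping the first and last few MB is the usual approach). The device name is a placeholder, and the demonstration below runs against a scratch file instead of real hardware.

```shell
# DANGER on real hardware: dd against the wrong device destroys data.
# Real use would target something like /dev/sdX; here we use a 4 MB scratch file.
DISK=/tmp/fake-disk.img
dd if=/dev/urandom of="$DISK" bs=1M count=4 2>/dev/null            # stand-in for a disk carrying old RAID metadata
dd if=/dev/zero of="$DISK" bs=1M count=4 conv=notrunc 2>/dev/null  # the wipe: overwrite the region with zeros
# verify the region is now all zeros
head -c 4194304 /dev/zero | cmp -s - "$DISK" && echo wiped
```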

2 Replies
MaldonadoW
Contributor

Thanks for the quick reply.

I did not do exactly the same thing, but I had changed RAID configurations while setting up two machines and had moved some disks around, so this was applicable.

Yep, format/blank/delete those disks outside the ACU...
