Newly created array not appearing in ESXi
HP DL380 G5
ESXi 5.5.0
Smart Array P400
I've created the array on 2x 300 GB drives using HP's command-line utility via SSH, with the following command:
hpssacli ctrl slot=1 create type=ld drives=1I:1:7,1I:1:8 size=300 raid=1
Great - the lights are on and all looks good. A quick status check (note the new array is array C):
array A
physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 72 GB, OK)
physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 72 GB, OK)
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 72 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 72 GB, OK)
array B
physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 36 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 36 GB, OK)
array C
physicaldrive 1I:1:7 (port 1I:box 1:bay 7, SAS, 300 GB, OK)
physicaldrive 1I:1:8 (port 1I:box 1:bay 8, SAS, 300 GB, OK)
The problem I have now is that it's not appearing in storage, even when I refresh or try to add additional storage. Is there something else I need to do to the newly created array? A quick Google search turns up nothing. I'm wondering if it's because I created the mirror with the full size of the drives, but if that were going to be a problem, I'm sure it would have failed when I created it.
The issue is likely that partitions already exist on the disks. Unless you need these NTFS partitions, take a look at http://kb.vmware.com/kb/1008886 (section "Clearing partitioning information in ESXi using the DD utility") to see whether this solves the issue.
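For reference, that KB step looks roughly like the sketch below. The device ID here is a placeholder - identify your actual disk first, because this destroys its partition table:

```shell
# List the disks so you can identify the new logical drive's device ID
ls /vmfs/devices/disks/

# Zero out the first sectors to clear any existing partition table.
# DEVICE is a hypothetical example - double-check yours before running.
DEVICE="/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
dd if=/dev/zero of="$DEVICE" bs=512 count=34 conv=notrunc
```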
André
Hi,
Can you verify that the Smart Array P400 card is showing up under Storage adapters?
Click the Host -> Configuration -> Storage adapters.
I can confirm that - see attached screenshot.
The host is already running 2 VMs.
Do you know if the RAID has finished initializing?
I would like to think so, as it was created yesterday (over 24 hours ago). I haven't manually initialized it, since it was all done via the HP Smart Array CLI commands over SSH. I would have thought it would initialize automatically after the array was created? On checking, I don't see any commands to do this in the HP Smart Array CLI. Array C is the new one I created.
~ # /opt/hp/hpssacli/bin/hpssacli ctrl all show config
Smart Array P400 in Slot 1 (sn: PAFGK0L9VWC0NT)
Internal Drive Cage at Port 1I, Box 1, OK
Internal Drive Cage at Port 2I, Box 1, OK
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (205.0 GB, RAID 5, OK)
physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 72 GB, OK)
physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 72 GB, OK)
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 72 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 72 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (33.9 GB, RAID 1, OK)
physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 36 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 36 GB, OK)
array C (SAS, Unused Space: 571543 MB)
logicaldrive 3 (298 MB, RAID 1, OK)
physicaldrive 1I:1:7 (port 1I:box 1:bay 7, SAS, 300 GB, OK)
physicaldrive 1I:1:8 (port 1I:box 1:bay 8, SAS, 300 GB, OK)
Can you list out the configuration/arrays and confirm it was created? Try SSHing into your ESXi host (Configuration -> Security Profile -> turn on SSH under Services), log in as root with the local root password, and type the following:
esxcli storage core device list -- You should be able to see both arrays if this is working properly.
The other alternative might be to destroy and recreate it. After that, try Add Storage -> Disk/LUN and see if it shows up.
This is what I get from that - note I do have two VMs running off this host already.
~ # esxcli storage core device list
mpx.vmhba1:C0:T1:L0
Display Name: Local VMware Disk (mpx.vmhba1:C0:T1:L0)
Has Settable Display Name: false
Size: 34699
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T1:L0
Vendor: VMware
Model: Block device
Revision: 1.0
SCSI Level: 2
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0000000000766d686261313a313a30
Is Local SAS Device: false
Is Boot USB Device: false
No of outstanding IOs with competing worlds: 32
mpx.vmhba1:C0:T0:L0
Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0)
Has Settable Display Name: false
Size: 209924
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0
Vendor: VMware
Model: Block device
Revision: 1.0
SCSI Level: 2
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0000000000766d686261313a303a30
Is Local SAS Device: false
Is Boot USB Device: false
No of outstanding IOs with competing worlds: 32
mpx.vmhba32:C0:T0:L0
Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)
Has Settable Display Name: false
Size: 7512
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
Vendor: Lexar
Model: JumpDrive
Revision: 1100
SCSI Level: 2
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: true
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0000000000766d68626133323a303a30
Is Local SAS Device: false
Is Boot USB Device: true
No of outstanding IOs with competing worlds: 32
mpx.vmhba0:C0:T0:L0
Display Name: Local TEAC CD-ROM (mpx.vmhba0:C0:T0:L0)
Has Settable Display Name: false
Size: 0
Device Type: CD-ROM
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/cdrom/mpx.vmhba0:C0:T0:L0
Vendor: TEAC
Model: DV-W28E-RW
Revision: G.B2
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: true
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0005000000766d686261303a303a30
Is Local SAS Device: false
Is Boot USB Device: false
No of outstanding IOs with competing worlds: 32
~ #
mattx27 wrote:
I would like to think so as it was yesterday ( over 24 hours ) since I created it. I've not manually initialized it as it was all done via the HP Smart Array CLI commands via SSH. I would have thought after creating the array it would have initialized automatically ? On checking I don't see any commands to do this via the HP Smart Array CLI - Array C is the new one I created.
~ # /opt/hp/hpssacli/bin/hpssacli ctrl all show config
Smart Array P400 in Slot 1 (sn: PAFGK0L9VWC0NT)
Internal Drive Cage at Port 1I, Box 1, OK
Internal Drive Cage at Port 2I, Box 1, OK
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (205.0 GB, RAID 5, OK)
physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 72 GB, OK)
physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 72 GB, OK)
physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 72 GB, OK)
physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 72 GB, OK)
array B (SAS, Unused Space: 0 MB)
logicaldrive 2 (33.9 GB, RAID 1, OK)
physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 36 GB, OK)
physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 36 GB, OK)
array C (SAS, Unused Space: 571543 MB)
logicaldrive 3 (298 MB, RAID 1, OK)
physicaldrive 1I:1:7 (port 1I:box 1:bay 7, SAS, 300 GB, OK)
physicaldrive 1I:1:8 (port 1I:box 1:bay 8, SAS, 300 GB, OK)
Hmm.. wait a sec - look at array C.
The unused space is 571543 MB. That wouldn't be right if you had configured the two drives as RAID 1.
Also, have a look at logical drive 3 - 298 MB?? That should be more like 279 GB. It doesn't match the other logical drives. Can you try destroying and recreating this logical drive?
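If I recall correctly, hpssacli interprets size= in MB, which would explain the 298 MB volume from the original size=300 create command. Something like this should rebuild it at full capacity (slot and logical drive numbers assumed - verify with show config first):

```shell
# Remove the undersized logical drive ("forced" skips the confirmation prompt)
hpssacli ctrl slot=1 ld 3 delete forced

# Recreate the mirror WITHOUT size= so it uses the drives' full capacity.
# (size= is taken in MB, so "size=300" earlier produced a ~300 MB volume.)
hpssacli ctrl slot=1 create type=ld drives=1I:1:7,1I:1:8 raid=1
```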
I've destroyed the logical drive and re-created it - now showing:
array C (SAS, Unused Space: 0 MB)
logicaldrive 3 (279.4 GB, RAID 1, OK)
physicaldrive 1I:1:7 (port 1I:box 1:bay 7, SAS, 300 GB, OK)
physicaldrive 1I:1:8 (port 1I:box 1:bay 8, SAS, 300 GB, OK)
But it's not showing up, even when I go to Add Storage --> Disk/LUN
This is really really odd.
Try performing a rescan all? (Storage -> Rescan All)
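The CLI equivalents from the ESXi shell, in case the client-side rescan behaves differently:

```shell
# Rescan all HBAs for new devices (same as the vSphere Client "Rescan All")
esxcli storage core adapter rescan --all

# Rescan for new VMFS volumes
vmkfstools -V
```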
Done that, still the same. 😞
Unfortunately, not the best answer, but I think the only other option is to try restarting the host and seeing if that has any effect. Probably not doable, which I understand.
Can you try running this again:
esxcli storage core device list
Let's see what we get.
Already did that, and it still only sees the first two logical drives and not the newly created one 😢
I agree - I think only a reboot now to see if this kicks in after that - I'll arrange some downtime over the weekend.
Many thanks for your help so far - esxcli storage core device list is handy to know for future reference, and my skills for removing and creating arrays via SSH with the HP command-line utility have improved.
Cool. Let us know how it goes.
Also, check this out for more commands on HP Command line - HP Smart Array CLI commands on ESXi | Kalle's playground
Mark an answer as helpful if you can.
That's the very page I was getting my commands from. The only thing missing was the 'forced' option when removing the logical drive.
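For anyone finding this later, the delete syntax with that option looks like this (slot and logical drive numbers are examples from this thread - substitute your own):

```shell
# "forced" suppresses the interactive confirmation prompt when
# deleting a logical drive via the HP Smart Array CLI
hpssacli ctrl slot=1 ld 3 delete forced
```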
Now I need to find out how to stop getting alerts from other threads I haven't subscribed to, even though I've turned off everything in my profile!
The old P400 array controller has a couple of bugs. These are not necessarily pertinent to the issue you're seeing, but I'll mention them.
Ensure your P400 firmware is at 7.22. If you use a higher firmware version (7.23 or 7.24), you will not be able to set the boot volume from within the array firmware (important if you ever rebuild). There is, however, a configuration tool available from HP - it's a bootable environment.
I've personally never had much success with the CLI Array configuration interface..
I know this is not what you want to hear, but reboot the host, go into the array BIOS (F8) and view your logical volume - maybe even delete array C and recreate it!
It will be quick and quite definitive. You'll know for sure your raid and logical volume are setup.
Just my 2 bob worth
On checking the firmware level (if I'm looking in the correct place), it seems the level is way behind what you suggest it should be - 5.20.
I'll see if I can get hold of the version you suggest - I just hope I can update it in one hit without having to go through the versions before it.
In case it's of interest, you may take a look at VMware Front Experience: How to run the HP Online ACU CLI for Linux in ESXi 4.x, where the author did some tests and mentioned: "However, ESXi would not pick up the changed disk size, so I was not able to grow the VMFS volume without rebooting the host."
André
Hey Matt
I'd suggest getting the server's firmware updated - P400 firmware version 5.20 is very old.
If I recall correctly, firmware DVD version 9.3 will update everything to a supported level that complies with ESXi 5u3 (HP release or VMware release). (Although I'd suggest updating the iLO firmware to 2.25 after running the firmware DVD.)
Burn the ISO to a DVD (or mount the ISO from iLO).
Put it into the DVD drive on the server and reboot it.
You can then either select what is updated or let it install all firmware updates - either way, you should install all suggested updates, as they are tested to be interoperable.
If you're booting from the ISO via iLO, don't update the iLO firmware until afterwards.
Thanks - I'm currently working out of an office in the US, so I won't be back in the UK until next week. I'll do a quick search for the firmware DVD and start downloading it so at least it's ready.