I have two hosts and one shared SAN storage, and I need the serial numbers for all of them. I already got the serial numbers of the two hosts.
Is there any way to get the serial number or hardware details of my SAN storage?
Hi,
log into the SAN's web interface. Most vendors show the serial number there.
Regards
Not sure how to get all the storage details, but try the following to get some useful info.
1. Run esxcli storage core path list to get detailed information about all paths.
2. Run esxcli storage core path list -d <naaID> to list detailed path information for a specific device.
3. Run esxcli storage nmp device list to list LUN multipathing information.
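A quick way to pull just the device IDs out of that path-list output is to filter the "Device:" lines. This is only a sketch run against a hypothetical excerpt of the output; on a real host you would pipe the live esxcli output instead:

```shell
#!/bin/sh
# Sketch: extract unique NAA device IDs from `esxcli storage core path list`
# output. The sample below is a hypothetical one-path excerpt; on a real
# host you would pipe the live command output instead of $sample.
sample='fc.20000000c9769a80:10000000c9769a80-fc.50060e80047e3600:50060e80047e3610-naa.60060e80047e360000007e3600000261
   UID: fc.20000000c9769a80:10000000c9769a80-fc.50060e80047e3600:50060e80047e3610-naa.60060e80047e360000007e3600000261
   Runtime Name: vmhba0:C0:T3:L1
   Device: naa.60060e80047e360000007e3600000261
   Device Display Name: HITACHI Fibre Channel Disk (naa.60060e80047e360000007e3600000261)'

# Keep only the "Device:" lines (not "Device Display Name:") and de-duplicate.
result=$(printf '%s\n' "$sample" | awk '/^ *Device: / {print $2}' | sort -u)
echo "$result"
```

On a live host the same filter would be `esxcli storage core path list | awk '/^ *Device: / {print $2}' | sort -u`.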
~dGeorgey
Adding to what dGeorgey said, the following will also help:
lspci -vvv
esxcfg-scsidevs -l
esxcli storage core device list
-SatyS
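If the goal is just to spot which devices are SAN-backed, the "Is Local" field in the device list output can be used as a filter. A minimal sketch, run here against a hypothetical two-device excerpt rather than live esxcli output:

```shell
#!/bin/sh
# Sketch: list only SAN-backed (non-local) devices by filtering the
# "Is Local" field of `esxcli storage core device list` output.
# The two-device excerpt below is hypothetical.
sample='naa.60060e80047e360000007e3600000261
   Display Name: HITACHI Fibre Channel Disk (naa.60060e80047e360000007e3600000261)
   Is Local: false
mpx.vmhba32:C0:T0:L0
   Display Name: Local USB Direct-Access
   Is Local: true'

# Remember the last unindented line (the device ID), and print it
# whenever the following block says "Is Local: false".
remote=$(printf '%s\n' "$sample" | awk '/^[^ ]/ {dev=$1} /Is Local: false/ {print dev}')
echo "$remote"
```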
I don't know which IP is assigned to the SAN. Can I get it from vCenter Server?
Select the host;
Configuration tab > Hardware section: Storage Adapters > select the storage adapter;
Right-click > Properties > Network Configuration tab.
See the attached image (iSCSI)..
~dGeorgey
Log into ESXi, ssh or console.
Run ‘ls -l /’ to get the UUIDs of the bootbank and altbootbank:
~ # ls -l /
lrwxrwxrwx 1 root root 49 Oct 27 17:55 altbootbank -> /vmfs/volumes/bebbef72-6cbc41fa-b169-68d3824c6d51
drwxr-xr-x 1 root root 512 Sep 17 01:11 bin
lrwxrwxrwx 1 root root 49 Oct 27 17:55 bootbank -> /vmfs/volumes/94671c74-55d3efd8-6f90-332c181fc3cf
Use ‘vmkfstools -P filesystem_path’ to get the disk ID:
~ # vmkfstools -P /vmfs/volumes/bebbef72-6cbc41fa-b169-68d3824c6d51
vfat-0.04 file system spanning 1 partitions.
File system label (if any):
Mode: private
Capacity 261853184 (63929 file blocks * 4096), 114647040 (27990 blocks) avail
UUID: bebbef72-6cbc41fa-b169-68d3824c6d51
Partitions spanned (on “disks”):naa.xxxxxxxxxxxxxxxxxxxxxxx:6
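The steps above can be chained: the "Partitions spanned" line names the backing device, and stripping the trailing partition number leaves the bare NAA ID to feed into the device queries below. A sketch with a hypothetical NAA value:

```shell
#!/bin/sh
# Sketch: given the "Partitions spanned" line from `vmkfstools -P`,
# strip the trailing ":<partition>" suffix to get the bare device ID.
# The NAA value below is hypothetical.
line='Partitions spanned (on "disks"):naa.60060e80047e360000007e3600000261:6'

# Drop everything up to "):" and the trailing partition number.
device=$(printf '%s\n' "$line" | sed 's/^.*):\(naa\.[0-9a-f]*\):[0-9]*$/\1/')
echo "$device"
```

The resulting ID can then be passed as the -d argument to the esxcli device commands below.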
To check the device properties:
ESXi 4.x
~ # esxcli nmp device list -d <naa.xxxxxxxxxxxxxxxxxxxxxxx>
naa.xxxxxxxxxxxxxxxxxxxxxxx
Device Display Name: HITACHI Fibre Channel Disk (naa.60060e80047e360000007e3600000261)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
Working Paths: vmhba0:C0:T3:L1, vmhba1:C0:T3:L1
~ # esxcli corestorage device list -d <naa.xxxxxxxxxxxxxxxxxxxxxxx>
naa.xxxxxxxxxxxxxxxxxxxxxxx
Display Name: HITACHI Fibre Channel Disk (naa.xxxxxxxxxxxxxxxxxxxxxxx)
Size: 6144
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxxxxxxxxx
Vendor: HITACHI
Model: OPEN-V
Revision: 5009
SCSI Level: 2
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
For ESXi 5.x, use the following commands:
~ # esxcli storage nmp device list |grep -A8 ^naa.xxxxxxxxxxxxxxxxxxxxxxx
naa.xxxxxxxxxxxxxxxxxxxxxxx
Device Display Name: NETAPP Fibre Channel Disk (naa.xxxxxxxxxxxxxxxxxxxxxxx)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T0:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T3:L0
~ # esxcli storage core device list |grep -A23 ^naa.xxxxxxxxxxxxxxxxxxxxxxx
naa.xxxxxxxxxxxxxxxxxxxxxxx
Display Name: NETAPP Fibre Channel Disk (naa.xxxxxxxxxxxxxxxxxxxxxxx)
Has Settable Display Name: true
Size: 30720
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxxxxxxxxx
Vendor: NETAPP
Model: LUN
Revision: 0.2
SCSI Level: 4
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unknown
Other UIDs: vml.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
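One caveat: the Vendor/Model/Revision fields above come from SCSI inquiry data, so they identify the array family (e.g. NETAPP LUN) but generally not the chassis serial number; for that you still need the array's own management interface, as noted earlier. To pull just those identity fields out of the device list output, a sketch like this (run against a hypothetical excerpt) works:

```shell
#!/bin/sh
# Sketch: extract the Vendor/Model/Revision identity fields from
# `esxcli storage core device list` output. The excerpt is hypothetical.
sample='   Vendor: NETAPP
   Model: LUN
   Revision: 0.2
   SCSI Level: 4'

# Match only the three identity fields and join them on one line.
identity=$(printf '%s\n' "$sample" \
  | awk -F': *' '/^ *(Vendor|Model|Revision):/ {printf "%s ", $2}' \
  | sed 's/ $//')
echo "$identity"
```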