I have two ESX hosts clustered in a data center. I've already physically connected them to the FC switches, zoned, bound LUNs, created storage groups, etc., so the SAN side of the configuration should be good.
Now I connect to the VI Client and... well, now I'm lost as to what I'm supposed to do. (I haven't had much experience with VMware.)
Additional info:
I created one LUN (1.8 TB) and put it in both storage groups (assuming that I'll be using the SAN with HA and DRS).
I currently only have two VMs running.
The ESX servers are both Dell 2950s with 32 GB of RAM, six NICs, and one 2-port HBA.
Hi, I'm guessing that you've allocated the LUN to the ESX host while it's running. Not assuming anything here, but have you done a storage adapter rescan?
You will need to get the Navisphere agent that corresponds to the FLARE code on the storage. The host will auto-register itself with Navisphere upon installation.
You will need to install this on each host; a storage rescan on the first host will populate both hosts.
PowerPath is what I use. I thought there was a way for VMware to manage the SAN, so I could do stuff like boot from SAN, etc.
I was hoping I could make the storage available to the ESX cluster directly, not to the VMs themselves.
Is this not possible?
I believe PowerPath is for Windows. By installing the Naviagent you make the storage available to VMware, and you can then carve out the size of the disks for each VM as you create them, and migrate your locally stored VMs. The VMs will reside on the LUNs you have created.
I have not looked at boot from SAN.
Hi, I'm guessing that you've allocated the LUN to the ESX host while it's running. Not assuming anything here, but have you done a storage adapter rescan?
I took a lot of screenshots along the way while setting up my system (right or wrong, it appears to still be up and running :)). My Linux is not very strong, so I just documented what tech support was feeding me and tried to learn along the way. The session below is a remote session, and my remote connection is the VMADMIN account.
The naviagent rpm has to have the specific name the install script is looking for in order to run correctly; otherwise you have to step through the manual firewall config.
HTH
Navisphere Installation
Copy the files to the host via WinSCP (they will go into the /home/vmadmin folder).
The script (ESX_install.sh) needs the rpm in the directory to be named naviagentcli.noarch.rpm; the file will usually be named with the version number in it, so just rename it as above.
Be sure to get the version of the file that corresponds to the level of FLARE code the SAN is at.
Put the file in an /opt/naviagent folder on the host.
Run chmod 777 "filename" (to make all the files executable).
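Taken together, the prep steps above amount to something like the following from the service console. This is a sketch based on these notes: the paths and file names match this particular setup, and the versioned rpm name shown is a placeholder for whatever the download is actually called.

```shell
# Files arrive in /home/vmadmin via WinSCP; stage them under /opt/naviagent
mkdir -p /opt/naviagent
cd /opt/naviagent
# The install script expects this exact file name, so rename the versioned rpm
# (naviagentcli-*.noarch.rpm is a placeholder for the real versioned name)
mv /home/vmadmin/naviagentcli-*.noarch.rpm naviagentcli.noarch.rpm
mv /home/vmadmin/ESX_install.sh /home/vmadmin/ESX_uninstall.sh .
# Make everything executable (777 per these notes; 755 would also suffice)
chmod 777 ESX_install.sh ESX_uninstall.sh naviagentcli.noarch.rpm
```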
login as: vmadmin
vmadmin@OOCESX1's password:
$ su - (su up to root privilege)
Password:
vmadmin
vmware
# mkdir naviagent (make temp install folders)
# ls /opt
naviagent  openmanage  phd  vmware
# ls
esxpress-3-1-9-esx-i386.rpm  esXpress v31 readme.docx  esxpressVBA-3.1-1.esx.i386.rpm
esXpress 3.19  NaviagentlinuxVMware_oocesx_072208  OM_5.4.0_ManNode_A01.tar.gz
# mv OM_5.4.0_ManNode_A01.tar.gz /opt/openmanage
# cd NaviagentlinuxVMware_oocesx_072208/
# ls
ESX_install.sh  ESX_uninstall.sh  naviagentcli.noarch.rpm
# ls -l
total 13776
-rw-r--r-- 1 root root     6046 Jul 23 15:01 ESX_install.sh
-rw-r--r-- 1 root root     3982 Jul 23 15:01 ESX_uninstall.sh
-rw-r--r-- 1 root root 14070700 Jul 23 15:01 naviagentcli.noarch.rpm
# chmod 777 ESX_install.sh
# chmod 777 ESX_uninstall.sh
# chmod 777 naviagentcli.noarch.rpm
# ls -l
total 13776
-rwxrwxrwx 1 root root     6046 Jul 23 15:01 ESX_install.sh
-rwxrwxrwx 1 root root     3982 Jul 23 15:01 ESX_uninstall.sh
-rwxrwxrwx 1 root root 14070700 Jul 23 15:01 naviagentcli.noarch.rpm
# ./ESX_install.sh naviagentcli (command to run the install)
##############################################
ESX_install.sh ver 1.1
The following ports need to be enabled for the software to operate properly
port -> 6389,tcp,in,NaviCLI
port -> 6389,tcp,out,NaviCLI
port -> 6390,tcp,in,NaviCLI
port -> 6391,tcp,in,NaviCLI
port -> 6392,tcp,in,NaviCLI
port -> 443,tcp,out,NaviCLI
port -> 2163,tcp,out,NaviCLI
Do you want to enable <yes/no>? yes
Enabling ports now!
Install operation complete!
##############################################
Run the script file (if it does not run the complete install, it will at least open the ports).
To run the install manually, run "rpm -ivh filename".
Run "service naviagent start" to start the service after the install.
I had to bounce each host at OOCESX in order for them to register in Navisphere on the CX3-20.
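If the script opens the ports but doesn't finish, the manual path described above boils down to the following sequence (assuming the rpm name from the rename step; the chkconfig line is my addition to make the agent survive reboots):

```shell
cd /opt/naviagent
# Install the Navisphere host agent rpm directly
rpm -ivh naviagentcli.noarch.rpm
# Start the agent now, and set it to start at boot
service naviagent start
chkconfig naviagent on
# Confirm it is registered as a startup service
chkconfig --list | grep -i navi
```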
Firewall
esxcfg-firewall -o 6389,tcp,in,naviagent (Manual firewall config)
query: esxcfg-firewall -q (shows the current settings)
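If you end up doing the firewall by hand, the port list the install script prints maps to one esxcfg-firewall call per rule. This sketch uses the naviagent service label from the example above (the script itself labels them NaviCLI; the label is just a name for the rule):

```shell
# Inbound agent/CLI ports
esxcfg-firewall -o 6389,tcp,in,naviagent
esxcfg-firewall -o 6390,tcp,in,naviagent
esxcfg-firewall -o 6391,tcp,in,naviagent
esxcfg-firewall -o 6392,tcp,in,naviagent
# Outbound to the storage processors
esxcfg-firewall -o 6389,tcp,out,naviagent
esxcfg-firewall -o 443,tcp,out,naviagent
esxcfg-firewall -o 2163,tcp,out,naviagent
# Verify the resulting rule set
esxcfg-firewall -q
```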
mpath
esxcfg-mpath -l (Shows active paths to storage)
Disk vmhba2:0:0 /dev/sda (278784MB) has 1 paths and policy of Fixed
Local 12:14.0 vmhba2:0:0 On active preferred
Disk vmhba0:0:0 (0MB) has 4 paths and policy of Most Recently Used
FC 1:0.0 2100001b320a4126<->500601691020ec2c vmhba0:0:0 On active preferred
FC 1:0.0 2100001b320a4126<->500601601020ec2c vmhba0:1:0 On
FC 8:0.0 2100001b320a9326<->500601611020ec2c vmhba1:1:0 On
FC 8:0.0 2100001b320a9326<->500601681020ec2c vmhba1:0:0 On
navi service info
service naviagent start|stop|restart|status
checking startup
chkconfig --list|grep -i navi
Add the host to Navisphere
Install the naviagent software on the host before connecting the cables.
The host should automatically register with Navisphere after the install/connection; just refresh the display in Navisphere.
The host has to be in a storage group before it will see the storage in VMware.
Currently we are using one host per storage group and presenting each LUN to each storage group. Make sure to add (present) each LUN in the same order for each host.
I had to reboot the host before it would register.
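Once the host is in a storage group, a quick service-console check might look like the following (ESX 3.x esxcfg tools; the same rescan can be done in the VI Client under Configuration > Storage Adapters):

```shell
# Rescan each FC HBA so the VMkernel picks up the newly presented LUN
esxcfg-rescan vmhba0
esxcfg-rescan vmhba1
# List active paths; the CLARiiON LUN should show multiple FC paths
# (four in the output above)
esxcfg-mpath -l
```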
I believe PowerPath is for Windows. By installing the Naviagent you make the storage available to VMware, and you can then carve out the size of the disks for each VM as you create them, and migrate your locally stored VMs. The VMs will reside on the LUNs you have created.
I have not looked at boot from SAN.
-
Yeah, I mostly admin Windows boxes; I'm a total Linux/Unix newbie. (PowerPath is for Windows.)
So you're saying I need a Navi agent that works for Linux; I should be able to handle that.
What you describe is what I want to do (not too concerned about booting from SAN, yet...).
-
Hi, I'm guessing that you've allocated the LUN to the ESX host while it's running. Not assuming anything here, but have you done a storage adapter rescan?
-
Yes I did, and no I haven't. I'm new to VMware; I normally just administer the SAN and a few Windows boxes. I'm not even sure where to go to scan for storage adapters; I've gone through a bunch of menus, trying to add hard drives to test VMs, etc.
The storage adapter rescan was the fix. I can't believe I missed that!
Thanks NTurnbull