VMware Cloud Community
dpkenn
Contributor

vSphere scripted build using NFS - mount to NFS server failed (no route to host)

Has anyone successfully built an ESX 4.0 host using a ks.cfg from an NFS share and the ISO from VMware?

Scenario:

- Using the ISO (ESX-4.0.0-update01-208167.iso) mounted through the HP iLO (460 G6 blade), I boot to the ESX bootstrap installation screen with the install options.

- Press F2 for boot options and select the 'ESX Scripted Install using USB ks.cfg' option.

- I modify the boot line to: 'append initrd=initrd.img mem=512m ksdevice=vmnic0 ip=x.x.x.x netmask=255.255.255.x gateway=x.x.x.1 nameserver=x.x.x.x ks=nfs://x.x.x.x:/vol/build/ks.cfg quiet IPAPPEND 1'

- The ISO loads all the necessary drivers but fails to mount the NFS share and read the ks.cfg file. I can ping the filer, and I can see that the correct COS settings are created during the build.

However, after looking at esx-install.log (press F3 to access the console) and the vmkernel settings, I discovered that no vmkernel port was created to mount the NFS share during the build.
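For anyone else checking this, a quick sanity check from that console looks something like this (a hedged sketch; exact tool availability in the install environment is an assumption):

<snip>
ifconfig -a                 # confirm vswif0 exists with the expected IP
ping -c 3 x.x.x.x           # the NFS filer
</snip>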

Suggestions anyone?

Thanks!

emmar
Hot Shot

Hi dpkenn,

Did you get anywhere with this? I'm coming across a similar issue.

Thanks

Emma

admin
Immortal

This may be because of a rare race condition.

The current workaround is to put the ks file into a deeper directory on the NFS server.

Try creating a deep directory /vol/build/a/b/c/d such that your kickstart location is ks=nfs://x.x.x.x:/vol/build/a/b/c/d/ks.cfg
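From any client with the export mounted read-write, that is just (a sketch, assuming the export is mounted at /vol/build and the ks.cfg currently sits at its root):

<snip>
mkdir -p /vol/build/a/b/c/d
cp /vol/build/ks.cfg /vol/build/a/b/c/d/ks.cfg
</snip>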

Please let me know if that worked.

dachrissi
Contributor

Hi dpkenn,

The problem in your environment is that the ksdevice differs from the device the ESX installer selects as "vmnic0".

The solution is:

append initrd=initrd.img mem=512M quiet ksdevice=<MAC of network device> ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x nameserver=x.x.x.x ks=nfs:nfshostname:/srv4/ks/ks.cfg

The next problem could be VLAN tagging; in that case you must add the vlanid:

append initrd=initrd.img mem=512M quiet ksdevice=<MAC of network device> ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x nameserver=x.x.x.x vlanid=x ks=nfs:nfshostname:/srv4/ks/ks.cfg
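For example, with the placeholders filled in (the MAC and addresses below are illustrative only):

<snip>
append initrd=initrd.img mem=512M quiet ksdevice=00:17:a4:3b:7e:10 ip=192.168.10.21 netmask=255.255.255.0 gateway=192.168.10.1 nameserver=192.168.10.5 ks=nfs:nfshostname:/srv4/ks/ks.cfg
</snip>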

This should solve your problem.

Sorry for my English.

Christian Gartz

-----------------------------------------
environment:
22x HP BL460c G1 (esx hosts)
40x HP BL680c G5 (esx hosts)
emmar
Hot Shot

Hi All,

I have totally confused myself now!

We have the KS files on an NFS share. I'm using iLO to mount the ESX4 ISO to the server... when I kick off the install, I press F2 and use the following boot line:

append initrd=initrd.img mem=512m ksdevice=vmnic1 ip=10.99.8.10 netmask=255.255.255.0 gateway=10.99.8.254 ks=nfs://10.100.5.196:/var/kickstart/esx/LUCvSphere/naluctmesx005.cfg

Note: the IP details I use here are a temporary build IP (there is no DHCP).

I understand now that ksdevice must be a MAC address as opposed to eth1 or vmnic1. What happens if I don't specify a ksdevice? It will use the first NIC it finds... but then what happens when the SC address in the ks.cfg gets applied to that NIC? This was why we had to use eth1 in our VI3 build process. Is this not the same?

Thanks

Emma

0 Kudos
brinnan
Contributor
Contributor

I am having a similar problem. When I use Alt+F3, I can see that vswif0 was configured correctly, but I cannot ping anything other than myself, not even the gateway. I have no problems kickstarting RHEL images.

I'm using DHCP, and I have tried the IPAPPEND option as well as statically assigning the address. Neither works.

dachrissi
Contributor

Hi emmar and brinnan,

The problem is that the ESX 3.x installer (the Red Hat installer) detects network adapters in a totally different order from the vSphere 4 installer (VMware's proprietary one). That is fact one.

The other fact is that on HP blade systems, the first network port detected by the BIOS differs from the first network port detected by the vSphere installer.

Example:

Server with two NICs:

nic1: MAC xxxxxxxxxxxx, management network

nic2: MAC yyyyyyyyyyyy, virtual machine network

Boot server -> BIOS POST -> boot from CD (the NIC order at this point is nic1, nic2) -> VMware installer starts -> configures a network adapter to connect to NFS (the NIC order at this point is nic2, nic1).

Your settings are:

append initrd=initrd.img mem=512m ksdevice=nic1 ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x ks=nfs://y.y.y.y:/var/kickstart/esx/LUCvSphere/naluctmesx005.cfg

Now the VMware installer configures the physical nic2 with the network settings intended for nic1; the result is no network connection.

The solution is to use the MAC address to select the ksdevice.
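If the MACs aren't documented anywhere, they can be taken from the blade's iLO page or read off before retrying the install (a hedged sketch; assumes a shell is reachable in the install environment, e.g. via the F3 console):

<snip>
ifconfig -a | grep HWaddr    # prints each detected NIC with its MAC address
</snip>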

-----------------------------------------
environment:
22x HP BL460c G1 (esx hosts)
40x HP BL680c G5 (esx hosts)
dpkenn
Contributor

Thanks everyone for the input.

I've tried many of these steps already but haven't tried the MAC yet. However, one thing to note is that the syntax for NFS-based automated builds has apparently changed in ESX 4.0.

Here is an article from Yellow Bricks that talks about this issue as well.

http://www.yellow-bricks.com/2010/03/26/nfs-based-automated-installs-of-esx-4/?utm_source=feedburner...YellowBricks%28YellowBricks%29&utm_content=GoogleFeedfetcher

Old syntax (with the colon):

ks=nfs://x.x.x.x:/nfs/install/ks.cfg quiet

ESX 4.0 (no longer needs the colon):

ks=nfs://192.168.1.10/nfs/install/ks.cfg

I've looked at esx_install.log and verified that I see the same issue, but I still have a problem connecting to the ks.cfg on the NFS share, even though the server is on the network and can ping the NFS filer. At first I thought it was because no vmkernel port was created during the install process for the IP storage connection, only vswif0 for the console. Come to find out, this is broken in vSphere.

I opened a ticket with VMware, and they have indicated that this is a known issue with ESX 4.0 Update 1a. There was one incident mentioned in the report where the issue was not reproducible using the GA (base build) version of ESX Server 4.0.

I will try the MAC scenario for the ksdevice on Monday and see if that can be a workaround until VMware puts out a fix.

dachrissi
Contributor

> I've tried many of these steps already but haven't tried the MAC yet. However, one thing to note is that the syntax for NFS-based automated builds has apparently changed in ESX 4.0.
>
> Here is an article from Yellow Bricks that talks about this issue as well.
>
> http://www.yellow-bricks.com/2010/03/26/nfs-based-automated-installs-of-esx-4/?utm_source=feedburner...YellowBricks%28YellowBricks%29&utm_content=GoogleFeedfetcher
>
> Old syntax (with the colon): ks=nfs://x.x.x.x:/nfs/install/ks.cfg quiet
>
> ESX 4.0 (no longer needs the colon): ks=nfs://192.168.1.10/nfs/install/ks.cfg

I've checked this, but I definitely use the version with the colon. I have always installed our ESX hosts with a fully unattended installation using a Linux DHCP, PXE (TFTP), and NFS server since ESX 3.x, and ESX 4.0 Update 1 can also be installed via this configuration.

Is it right that you use an IP address to connect to the NFS server, not the DNS name?

If you use the DNS name (FQDN) and the installation environment is unable to resolve it to an IP address, the mount fails.

> I've looked at esx_install.log and verified that I see the same issue, but I still have a problem connecting to the ks.cfg on the NFS share, even though the server is on the network and can ping the NFS filer. At first I thought it was because no vmkernel port was created during the install process for the IP storage connection, only vswif0 for the console. Come to find out, this is broken in vSphere.

Really strange. Have you tried to mount the NFS share manually, since your IP connection is fine?
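Something along these lines from the installer console, for example (a hedged sketch; the exact mount options available in the install environment are an assumption):

<snip>
mkdir /tmp/nfstest
mount -t nfs -o nolock x.x.x.x:/vol/build /tmp/nfstest
ls -l /tmp/nfstest/ks.cfg
</snip>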

> I opened a ticket with VMware, and they have indicated that this is a known issue with ESX 4.0 Update 1a. There was one incident mentioned in the report where the issue was not reproducible using the GA (base build) version of ESX Server 4.0.
>
> I will try the MAC scenario for the ksdevice on Monday and see if that can be a workaround until VMware puts out a fix.

Hm, the problem is that if your IP connection is working fine, the MAC scenario probably won't help.

-----------------------------------------
environment:
22x HP BL460c G1 (esx hosts)
40x HP BL680c G5 (esx hosts)
brinnan
Contributor

Putting the MAC in manually is not a workable solution for me, as we have over 100 servers with different MACs. Besides, it does not work for me anyway.

The NFS syntax is not it; I've tried both ways. It looks to me like it gets an IP from DHCP and assigns it to vswif0, but I still cannot ping anything from the Alt+F4 console. I have tried static as well as the IPAPPEND option. I only have one physical NIC in this server. I have this problem on both ESX 4.0 and ESX 4.0u1.

Adding ksdevice does not make a difference for me since I am using DHCP. Here is my .cfg file:

<snip>
label ESX4.0
  kernel ESX4.0/vmlinuz
  append initrd=ESX4.0/initrd.img mem=512M ks=nfs://10.61.102.30/pxeinstall/ks/ESX4.0.cfg
  IPAPPEND 1
</snip>
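For what it's worth, pxelinux's IPAPPEND 1 just appends the DHCP-derived addressing to the kernel command line as ip=<client-ip>:<boot-server-ip>:<gateway>:<netmask>, so the effective boot line ends up roughly like this (addresses are illustrative):

<snip>
append initrd=ESX4.0/initrd.img mem=512M ks=nfs://10.61.102.30/pxeinstall/ks/ESX4.0.cfg ip=10.61.102.45:10.61.102.30:10.61.102.1:255.255.255.0
</snip>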

I do get an IP on vswif0, but ping doesn't work anywhere. I am successful with ESX 3.5u6 and all flavors of RHEL with different .cfg files.

dpkenn
Contributor

I've abandoned the scripted build using NFS and gone with the FTP option. Almost the same commands, but now I have to maintain an FTP server.

Using the 4.0 Update 1a ISO, I select the USB option upon boot, press F2, and modify the command to the following:

'append initrd=initrd.img mem=512M ksdevice=vmnic0 ip=x.x.x.x netmask=255.255.255.x gateway=x.x.x.x vlanid=XX ks=ftp://x.x.x.x/build/ks.cfg quiet'

Note: We are using VLAN trunking/tagging on all ports. Take out the vlanid part if your switch port is in 'access mode'.

This works perfectly.
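For reference, the FTP side can be as simple as an anonymous vsftpd export (a minimal sketch; the paths and the choice of vsftpd are assumptions, not necessarily what we used):

<snip>
# on the build server (RHEL-style layout assumed)
yum install vsftpd
mkdir -p /var/ftp/build
cp ks.cfg /var/ftp/build/ks.cfg      # served as ftp://x.x.x.x/build/ks.cfg
# ensure anonymous_enable=YES in /etc/vsftpd/vsftpd.conf, then:
service vsftpd start
</snip>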

VMware will "inform" us of any updates to the resolution. It's in the hands of the developers now.

Cheers,

DK

FishNiX
Contributor

We are also using NFS to vend our ks.cfg files. FWIW, we have Dell R900s and a NetApp.

Here is the command I use:

ks=nfs:111.111.111.2:/vol/VMtemplates/kickstart/esx40.cfg/vsprdesx/xxx-vsprdesx-01.its.xxx.internal.cfg ksdevice=vmnic5 ip=111.111.111.3 netmask=255.255.255.0 gateway=111.111.111.1

Here is the relevant part of our ks.cfg:
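For anyone following along, the network section of an ESX 4 ks.cfg generally looks something like this (a generic sketch reusing the values from the boot line above; the exact directives and flags are from memory, so treat them as illustrative, not our actual file):

<snip>
accepteula
rootpw changeme
network --device=vmnic5 --bootproto=static --ip=111.111.111.3 --netmask=255.255.255.0 --gateway=111.111.111.1 --hostname=xxx-vsprdesx-01.its.xxx.internal
</snip>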
