pskcentrify
Contributor

OpenSUSE can't get IP address from vCloud Director network pool...

I have an OpenSUSE VM using DHCP that runs fine on Fusion. I then used ovftool to convert it to a vApp and uploaded it to vCloud Director successfully. I specified that the vApp use a "Static - IP Pool" address from a network group, and a valid IP address is shown as assigned to it. But after the VM is running, ifconfig shows NO IP address assigned to the NIC, and ifstatus eth1 shows dhclient still waiting for data! What did I do wrong here, and how can I fix it? Thanks for your help in advance.

PSK

0 Kudos
9 Replies
admin
Immortal

Using static IP pools does not expose the IP as a DHCP address to the guest. You'll need to enable guest customization to have vCD automatically configure the IP address or manually configure it once the guest starts.

You may also want to confirm that YaST shows the correct MAC address for the interface in its configuration. I've seen cases with vSphere, etc. where the MAC address of the NIC changes and because SLES and others use the MAC to identify the device, it dynamically creates a new Ethernet device (e.g. eth1) and uses DHCP instead of the existing static configuration for the original device (e.g. eth0).
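To illustrate the MAC-pinning behavior described above: on SLES/openSUSE, udev records each MAC it has seen in /etc/udev/rules.d/70-persistent-net.rules, so a new MAC from vCD gets a fresh device name (eth1, eth4, ...) with no matching ifcfg file. A sketch of how to spot and repair a stale binding — demonstrated on a sample file so nothing on the real system is touched; the MACs and path are assumptions for illustration:

```shell
RULES=/tmp/70-persistent-net.rules   # real file: /etc/udev/rules.d/70-persistent-net.rules
# Example of the rule format SLES writes (one line per MAC it has seen);
# the first MAC is a hypothetical old Fusion address, the second is vCD's:
cat > "$RULES" <<'EOF'
SUBSYSTEM=="net", ATTR{address}=="00:0c:29:aa:bb:cc", NAME="eth0"
SUBSYSTEM=="net", ATTR{address}=="00:50:56:01:00:09", NAME="eth4"
EOF
# List the MAC-to-name bindings to see which name each MAC is pinned to:
grep 'ATTR{address}' "$RULES"
# Drop the stale Fusion binding and point the new MAC back at eth0, so the
# existing ifcfg-eth0 configuration applies again on next boot:
sed -i -e '/00:0c:29:aa:bb:cc/d' -e 's/NAME="eth4"/NAME="eth0"/' "$RULES"
grep -c 'NAME=' "$RULES"   # → 1
```

After editing the real rules file the same way, a reboot (or reloading udev rules) should bring the NIC back up under its original name and configuration.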

0 Kudos
pskcentrify
Contributor

Hi Kyle,

Thanks for the quick response. Here's what I have and know so far:

1) Guest OS customization is enabled;

2) From the Virtual Machines page, it shows:

Networks - NIC 0: QA Testing

IP Address: 172.27.15.204

External IP: -

Connectivity Type: Direct

3) The VM is on; from its Properties page, it shows:

NIC# : 0

Connected : Check marked

Network : QA Testing

Primary NIC : Radio button selected

IP Mode : Static - IP Pool

IP Address : 172.27.15.204

MAC Address : 00:50:56:01:00:09

4) From the VM Terminal windows:

Use ifconfig -a command shows:

eth4 Link encap:Ethernet HWaddr 00:50:56:01:00:09

The MAC address matches what Cloud Director generates, ;-(.

5) Use ifstatus eth4 command shows:

eth4 device: Advanced Micro Device 79c970 (rev 10)

No configuration found for eth4

How can I fix this? Thanks again for your help.

PSK

0 Kudos
admin
Immortal

If you just want a working VM, run yast to configure the network device with a static IP. For the larger question of having OpenSUSE guests correctly customized, I can't say.

If the VM was originally configured for DHCP it may be that guest customization will not change DHCP to static, instead only updating the IP for an existing static configuration.
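For the manual route, the "No configuration found for eth4" message means the per-interface config file is missing; on SLES/openSUSE these live under /etc/sysconfig/network/. A minimal sketch of what yast would write, using the pool values from earlier in the thread (eth4, 172.27.15.204, netmask 255.255.240.0, gateway 172.27.0.1) — adjust to your own assignment:

```
# /etc/sysconfig/network/ifcfg-eth4
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='172.27.15.204'
NETMASK='255.255.240.0'

# /etc/sysconfig/network/routes
default 172.27.0.1 - -
```

After writing both files, `ifup eth4` (or `rcnetwork restart`) should bring the interface up with the static address.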

0 Kudos
pskcentrify
Contributor

Hi Kyle,

We have been doing more network testing on this for the last few days, making some more progress but also hitting more issues.

Now, by using "Static - IP Pool" and manually setting each VM's IP address after it's running to match the pool assignment, our VMs can see and talk to each other in the same vDS network group. BUT none of them can ping the default gateway at 172.27.0.1, and because of that, none of them can reach external networks, ;-(.

In our vDS setup:

1) The Cloud Director host is an ESXi host at 172.27.19.225, and there is ONLY one NIC defined for our vDS.

2) The vApp and VMs are running on another ESXi host at 172.27.15.201.

3) All the VMs use this network setup: the 172.27.15.202 to 172.27.15.220 range (our vDS network group's defined range), netmask 255.255.240.0, default gateway 172.27.0.1.

4) None of the VMs can ping 172.27.0.1.

5) None of the VMs can ping the ESXi host they are running on, 172.27.15.201, either.
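One thing worth double-checking from the numbers above: netmask 255.255.240.0 is a /20, so the VMs' subnet runs 172.27.0.0 through 172.27.15.255. A quick sketch (pure shell arithmetic, bash assumed) to verify which of the addresses involved actually share that subnet:

```shell
# Convert a dotted quad to a 32-bit integer for masking.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a<<24) + (b<<16) + (c<<8) + d ))
}

MASK=$(ip2int 255.255.240.0)                  # /20
NET=$(( $(ip2int 172.27.15.204) & MASK ))     # network of a VM from the pool

# Gateway, the VMs' ESXi host, and the Cloud Director host:
for host in 172.27.0.1 172.27.15.201 172.27.19.225; do
  if [ $(( $(ip2int "$host") & MASK )) -eq "$NET" ]; then
    echo "$host: same /20 subnet"
  else
    echo "$host: OUTSIDE the /20 subnet"
  fi
done
```

Both the gateway and the ESXi host fall inside the /20 (the Cloud Director host at 172.27.19.225 does not, which is fine if it sits on a separate management network). So if pings to in-subnet addresses fail, the frames are likely being dropped at layer 2 (for example, a VLAN ID mismatch on the vDS port group) rather than misrouted; watching ARP inside a VM with `tcpdump -ni eth4 arp` would show whether replies ever come back.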

How can we fix this to move on? Is this the right forum to post this question? Thanks again for your great help.

0 Kudos
pskcentrify
Contributor

Sorry for the double posting from a newbie, ;-(.

0 Kudos
pskcentrify
Contributor

Hi Kyle,

Any more ideas and suggestions for us to try here? Thanks for the help.

PSK

0 Kudos
admin
Immortal

Sorry, I don't have any other ideas. If you've got support for vCD I recommend filing an SR.

0 Kudos
pskcentrify
Contributor

Hi All,

I solved this, and vDS works! When specifying the external network for the organization vDC, we had accidentally selected the vSS instead of the newly created vDS, ;-(. Once we fixed this, everything worked! Thanks for all the help here. Cheers.

PSK

0 Kudos