VMware Cloud Community

CalicoJack
Enthusiast

Bootstrap fails with "unable to resolve the IP address" error.

Unable to resolve the IP address 10.xxx.x.xx to the FQDN on node testpd-ComputeMaster-0. To deploy Hadoop 2.x the DNS server must provide forward and reverse FQDN/IP resolution.

If the nodes are all using IP addresses, why does it fail?  All of the created guests respond to ping.

I don't really know how to get the names registered in DNS during the build.

Any ideas?  Thank you.

11 Replies
jessehuvmw
Enthusiast

Hi Jack,

To create an Apache Bigtop, Cloudera CDH4 and CDH5, Hortonworks HDP 2.x, or Pivotal PHD 1.1/2.x cluster, you must configure a valid DNS and FQDN for the cluster's HDFS and MapReduce VLAN. If the DNS server cannot provide valid forward and reverse FQDN/IP resolution, the cluster creation process might fail, or the cluster might be created but not function.


Hadoop 2.x requires forward and reverse FQDN/IP resolution by the DNS server.


Forward and reverse FQDN/IP resolution must be configured in the DNS server, and in the DHCP server if DHCP is used. Once every IP used in the VLAN your VMs belong to can be resolved to an FQDN by the DNS server, the cluster can be created successfully.
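
As a quick sanity check (the hostname and IP below are hypothetical examples, not values from your environment), you can verify both directions from any machine on the VLAN with dig:

dig +short hadoop001.example.com A     # forward lookup: should print the VM's IP, e.g. 10.0.0.1
dig +short -x 10.0.0.1                 # reverse lookup: should print hadoop001.example.com.

If either command returns nothing, the corresponding A or PTR record is missing and the FQDN validation during cluster creation will fail.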


-Jesse @BDE

Cheers, Jesse Hu
CalicoJack
Enthusiast

Since the pieces of the cluster (the management server, the worker nodes, etc.) are created and assigned an IP from a pool of addresses, how do we add any automatic update to DNS?

We don't really have that in place here yet.  The addresses have to be added by hand.

Can't there be a bypass somewhere that allows using just the IPs, at least for testing?

Thanks -

Nancy.

jessehuvmw
Enthusiast

Hi Nancy,

If it is not possible or not convenient to configure FQDN/IP resolution on the DNS server, here is a solution for testing purposes.

1. Power on the hadoop-template VM under the BDE vApp.

2. Add entries like "10.0.0.1 hadoop001.domain.com ... 10.0.0.XYZ hadoopXYZ.domain.com" to /etc/hosts. As a result, Hadoop will look up FQDNs from /etc/hosts instead of the DNS server.

Note: when modifying the hadoop-template, you must follow "Maintain a Customized Hadoop Template Virtual Machine" (pubs.vmware.com...74B8C10F1C.html). After powering off the hadoop-template, remember to restart the Tomcat server on the BDE server.
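
If the IP pool is large, the entries can be generated instead of typed by hand. A minimal sketch, run inside the powered-on hadoop-template VM, assuming a hypothetical 10.0.0.1-10.0.0.50 pool and the naming scheme above (adjust the range and domain to your environment):

for i in $(seq 1 50); do
  # append one forward entry per pool address, e.g. "10.0.0.7 hadoop007.domain.com"
  printf '10.0.0.%d hadoop%03d.domain.com\n' "$i" "$i" >> /etc/hosts
done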

-Jesse

Cheers, Jesse Hu
CalicoJack
Enthusiast

Yes, adding the potential guests to the hosts table is a good workaround until we can resolve the guests through DNS.

Thanks -

stevehoward2020
Enthusiast

I don't see how adding a reverse pointer record helps in this case.  The hosts are deployed with only an IP address rather than a hostname.

When you say...

Add entries like "10.0.0.1 hadoop001.domain.com ... 10.0.0.XYZ hadoopXYZ.domain.com" to /etc/hosts. As a result, Hadoop will look up FQDNs from /etc/hosts instead of the DNS server.

...from where are you getting the hostnames?

stevehoward2020
Enthusiast

We figured this out.  The issue is that the documentation is not clear, or at least we don't think it is.  On page 124 of the following PDF...

http://pubs.vmware.com/bde-2/topic/com.vmware.ICbase/PDF/vsphere-big-data-extensions-21-admin-user-g...

...is the following...

(vSphere 5.5) If the vCenter Server Appliance is deployed into an OVF environment that has a static IP network configuration and a blank host name, the reverse lookup from the IP address cannot be performed. Without the reverse lookup, the host name is incorrectly set for the vCenter Server Appliance.

I don't even see where we can set a non-blank host name, so unless we can, I would leave that out.  In my mind, it should state more clearly that the hostname is actually set based on the name fetched from the PTR record.  Initially, we thought it was normal that the hostname was set to the IP address.  We ended up creating foobar1.localdomain through foobarx.localdomain in /etc/hosts, resuming the failed deployment, and noticed that $(hostname) was then set from the /etc/hosts entry for the IP.

To others this may be obvious, but it seems like a trivial thing to add something like the following to the documentation just to make it clearer.

"A hostname is not set on the deployed VM's by default.  Instead, the name is fetched from DNS or the local hosts file for the IP assigned during deployment.  If the name cannot be set, the hostname will be the IP address of the deployed guest and issues may result.", or something similar.

Thanks,

Steve

CalicoJack
Enthusiast

I got the deployment to work by adding generic names to the /etc/hosts table of the clone and the master.

The host names were made up and generic.

From /etc/hosts:

10.111.0.35 rthadoop1.company.com     #master
10.111.0.37 hadoop1.company.com
10.111.0.38 hadoop2.company.com
10.111.0.39 hadoop3.company.com
10.111.0.40 hadoop4.company.com

It was also suggested that generic names for the IP pool be put in DNS.

Nancy

jessehuvmw
Enthusiast

Hi Steve,

This is exactly how BDE works: "A hostname is not set on the deployed VMs by default.  Instead, the name is fetched from DNS or the local hosts file for the IP assigned during deployment.  If the name cannot be set, the hostname will be the IP address of the deployed guest and issues may result."  This is how the lookup works: check /etc/hosts first; if the name is not found there, query the DNS server; if both fail, set the IP as the hostname.

"...from where are you getting the hostnames?"  :  as Nancy replied, the hostname is defined by you (the users), any valid hostname is OK.

We strongly suggest configuring your DNS server to provide forward and reverse FQDN/IP resolution (i.e. adding A and PTR records for all the IPs allocated to the VMs), because /etc/hosts only works within the VMs: from outside the VMs, the hostnames are not pingable unless you add the same entries to /etc/hosts on your local machine as well.
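
For example, in a BIND-style zone file the pair of records for one of Nancy's VMs would look like this (zone names are hypothetical):

; forward zone, e.g. company.com
hadoop1    IN  A    10.111.0.37

; reverse zone, e.g. 0.111.10.in-addr.arpa
37         IN  PTR  hadoop1.company.com.

One such A/PTR pair is needed for every IP allocated to the VMs.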

Hi Nancy,

Thanks for sharing your experience.

Thanks

Jesse

Cheers, Jesse Hu
stevehoward2020
Enthusiast

Hi Jesse,

Thanks to you and Nancy for confirming.

As I mentioned, it may just be us, but we didn't immediately see the connection between the IP as the hostname and the PTR record in DNS.  I would just throw out my $.02 that you should add something similar to what I had in my previous response to the documentation, as we spun our wheels quite a bit until we connected the dots.

Thanks again,

Steve

jessehuvmw
Enthusiast

Hi Steve,

Thanks for the feedback. We will definitely take it into consideration.

BTW, in your company network, are you able to configure the DNS server to provide forward and reverse FQDN/IP resolution, or will you modify the /etc/hosts of the template VM instead?  We also plan to support Dynamic DNS in the next release, so users won't need to configure DNS manually.

Here is another workaround if you are deploying a Hadoop 2.3.0+ cluster (with it, there is no need to configure DNS for FQDN/IP resolution):

1. Log in to the BDE server as user serengeti.

2. Open the file /opt/serengeti/chef/cookbooks/hadoop_cluster/templates/default/hdfs-site.xml.erb.

3. Add the following content before the line "<!-- properties specified by users -->" near the end, and save the file:

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
  <description>
  If true (the default), then the namenode requires that a connecting
  datanode's address must be resolved to a hostname. If necessary, a reverse
  DNS lookup is performed. All attempts to register a datanode from an
  unresolvable address are rejected.
  It is recommended that this setting be left on to prevent accidental
  registration of datanodes listed by hostname in the excludes file during a
  DNS outage. Only set this to false in environments where there is no
  infrastructure to support reverse DNS lookup.
  </description>
</property>

4. Run the following to comment out the FQDN validation step in the Chef recipe:

sed -i 's|validate_fqdn_resolution|#validate_fqdn_resolution|' /opt/serengeti/chef/cookbooks/hadoop_cluster/recipes/default.rb

5. Run:  knife cookbook upload hadoop_cluster -V

6. Create a new cluster.

You can find more details about this Hadoop 2.3.0+ option at https://issues.apache.org/jira/browse/HDFS-5338.
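
Once the new cluster is up, you can confirm the override took effect by checking the rendered configuration on a Hadoop node (standard Hadoop 2.x tooling):

hdfs getconf -confKey dfs.namenode.datanode.registration.ip-hostname-check     # should print: false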

Cheers, Jesse Hu
jessehuvmw
Enthusiast

Hi Steve,

The doc section you pointed to, Cannot Perform Serengeti Operations after Deploying Big Data Extensions, is talking about the FQDN for the vCenter Server, not for the BDE server or the VMs created by BDE. So it's not the correct place to add the description of how an FQDN is assigned to the VMs created by BDE.

We will add the related FQDN explanation to the doc in the next BDE release.

Cheers, Jesse Hu