I can successfully create a cluster on the management network, but not on the 10Gig data network (SSO connect fails).
I also cannot get the cluster to use the 10G data network instead of the 1G management network.
Is there any documentation on how to modify a cluster to utilize a second network?
I interpret your question two ways:
1) You successfully deployed a cluster using the default network (Management Network) that was selected during vapp deployment. Now you want to create a second cluster, but deploy it on a network that is not the default network.
Answer: You must make the network known to Serengeti before you can use it to create a cluster.
From the CLI: network add --name "networkname" --dhcp --portGroup "portgroupname"
Once you do this, you can create the cluster from the CLI or the vCenter plugin and reference the "networkname" defined above.
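Put together, a Serengeti CLI session might look like the sketch below. The network, port group, and cluster names are placeholders; the cluster create command uses the --networkName option to tie the new cluster to the network added in the first step.

```
serengeti> network add --name dataNet10G --portGroup pg-10g-data --dhcp
serengeti> cluster create --name myCluster --networkName dataNet10G
```

If your 10G port group uses static addressing instead of DHCP, network add also accepts an IP range with --ip, --gateway, and --mask in place of --dhcp.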
2) You would like to separate network traffic so that mapreduce traffic and HDFS traffic are on separate networks.
Answer: In BDE 1.0 this is a manual task. You must deploy the cluster on a single network, manually add a second network adapter to each of the cluster nodes, and then configure the interfaces within each VM.
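Configuring the new interface inside each VM might look like the following sketch, assuming a RHEL/CentOS-style guest. The device name, IP address, and netmask are placeholders for your data-network values; on a real node the file would go in /etc/sysconfig/network-scripts/, but the sketch writes to a temp directory so it is harmless to run as-is.

```shell
# Placeholder location; on a real node use /etc/sysconfig/network-scripts/
CFG_DIR=$(mktemp -d)

# Static config for the second adapter (eth1); IPADDR/NETMASK are examples.
cat > "$CFG_DIR/ifcfg-eth1" <<'EOF'
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.21
NETMASK=255.255.255.0
EOF

cat "$CFG_DIR/ifcfg-eth1"
```

After writing the real file, bring the interface up on the node with ifup eth1 (or restart the network service).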
The good news is that we will support automating this configuration in the next release, due around the end of the year (2013).
I apologize for the lack of clarity, but your second answer is closer. Unfortunately, configuring Hadoop to do that is quite involved, and I seem to be running into conflicts with Chef if I have to restart any VMs.
I seem to be having some success by building the cluster on the 10Gig network and then, when the build fails, adding the 1G network and resuming the build.
Thank you for your response, and I look forward to testing out your future efforts.
If you manually add a second network, you may have an issue when restarting the clusters. Basically, the adapters may not map to the network interfaces defined within the VM in the same order as before you powered down. I believe that if you make sure the adapters map to the correct interfaces, this will work again. That is a workaround until we deliver multi-network support in the next release.
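One way to check the mapping after a restart is to list each guest interface with its MAC address and compare against the adapters shown in the vSphere client (and, on RHEL-style guests, against /etc/udev/rules.d/70-persistent-net.rules, which pins MAC-to-name assignments). A minimal sketch, assuming a Linux guest:

```shell
# List every network interface known to the kernel with its MAC address,
# so the output can be compared to the adapter MACs vSphere reports.
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    mac=$(cat "$dev/address")
    printf '%s -> %s\n' "$name" "$mac"
done
```

If eth0 and eth1 come up with swapped MACs relative to the vSphere adapters, swap the adapters' network assignments in vSphere (or the HWADDR lines in the ifcfg files) so they line up again.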
I am still having difficulties. Unfortunately, after creating the cluster, the commands from the Serengeti CLI fail. If I reboot the cluster, all nodes end up in the bootstrap_failed state.
The network connectivity seems fine, but the Hadoop and Serengeti commands seem to have problems.
Anyway, if you need a beta tester for multi-network support, give me a holler.
Could you attach /opt/serengeti/logs/ironfan.log so I can figure out the reason for your failure? The new OVA including multi-network support is not released yet; please wait a while.
In ironfan.log, I found that the last cluster, BDHadoop1, was created successfully, and each node has 2 IPs: 10.201.* and 192.168.*. What is the name of the cluster that failed? Usually, when configuring multiple networks for the VMs, you need to change the two files before creating the cluster.
Assuming your Serengeti server has two networks, one public and one private (the private one connected to the nodes in the created cluster), you need to change the public IP to the private IP in the 2 files above, because the nodes use those addresses to connect back to the Serengeti server.
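The swap itself is a simple substitution. A sketch, in which /tmp/example.conf stands in for the actual Serengeti config files (their paths are not repeated here) and both IP addresses are placeholders for your server's public and private addresses:

```shell
# Placeholder addresses; substitute your server's real public/private IPs.
PUBLIC_IP=10.201.0.10
PRIVATE_IP=192.168.0.10

# Stand-in for a real config file that references the server's public IP.
printf 'serengeti.server=%s\n' "$PUBLIC_IP" > /tmp/example.conf

# Replace every occurrence of the public IP with the private IP in place.
sed -i "s/${PUBLIC_IP}/${PRIVATE_IP}/g" /tmp/example.conf
cat /tmp/example.conf
```

Make the same substitution in each of the files mentioned above, then create the cluster.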