VMware Cloud Community
dvnagesh
Contributor

Unrecognized field \"networkName\" when calling the Create Cluster workflow from vCO.

Hi,

I'm using BDE 1.1 on vSphere 5.5 and have deployed the BDE vCO plugin, version 0.5.0.70. When I run the Create Basic Hadoop Cluster workflow from vCO, it fails with the error below. Can someone help?

Content as string: {"code":"BDD.BAD_REST_CALL","message":"Failed REST API call: Could not read JSON: Unrecognized field \"networkName\" (Class com.vmware.bdd.apitypes.ClusterCreate), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@7e542721; line: 1, column: 578] (through reference chain: com.vmware.bdd.apitypes.ClusterCreate[\"networkName\"]); nested exception is org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field \"networkName\" (Class com.vmware.bdd.apitypes.ClusterCreate), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@7e542721; line: 1, column: 578] (through reference chain: com.vmware.bdd.apitypes.ClusterCreate[\"networkName\"])"}

Thanks and Regards,

Nagesh

24 Replies
dvnagesh
Contributor

Hi Deepak,

In the vCO workflow you need to adjust the JSON values to match the distribution you want to deploy. The default specs are for Apache Hadoop.

Regards,

Venkat

jessehuvmw
Enthusiast

As Venkat mentioned, the default cluster spec (used if you don't specify one) is for Apache Hadoop 1.2 (i.e. MRv1), while PHD only supports MRv2, so you need to put the default MRv2 cluster spec in the JSON. You can find a sample spec in /opt/serengeti/samples/default_hadoop_yarn_cluster.json.

BTW, only "distro":"PivotalHD" is needed, not "distroVendor".
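
For reference, a minimal MRv2-style sketch is below. This is only an illustration, not the contents of the sample file: the role names (hadoop_resourcemanager, hadoop_nodemanager) match the distro list and the working PHD spec later in this thread, the network name is the one from the CDH example below, and the sizing fields are omitted for brevity.

{
   "name":"yarn-sample",
   "distro":"PivotalHD",
   "networkConfig": {"MGT_NETWORK":["defaultNetwork"]},
   "nodeGroups":[
      {
         "name":"master",
         "roles":["hadoop_namenode", "hadoop_resourcemanager"],
         "instanceNum":1
      },
      {
         "name":"worker",
         "roles":["hadoop_datanode", "hadoop_nodemanager"],
         "instanceNum":3
      }
   ]
}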

-Jesse

admin
Immortal

Hi mapdeep,

Can you paste the content of "/opt/serengeti/logs/serengeti.log" for us to analyze?

I tested the REST APIs with the CDH distro and they work; the following are my steps, FYI:

#####################

1)

serengeti>distro list

  NAME    VENDOR  VERSION  HVE    ROLES
  --------------------------------------------------------------------------------
  apache  Apache  1.2.1    true   [hadoop_client, hadoop_datanode, hadoop_jobtracker, hadoop_namenode, hadoop_tasktracker, hbase_client, hbase_master, hbase_regionserver, hive, hive_server, pig, zookeeper]
  cdh     CDH     4.6.0    false  [hadoop_client, hadoop_datanode, hadoop_jobtracker, hadoop_journalnode, hadoop_namenode, hadoop_nodemanager, hadoop_resourcemanager, hadoop_tasktracker, hbase_client, hbase_master, hbase_regionserver, hive, hive_server, pig, zookeeper]
  hw      HDP     1.3      true   [hadoop_client, hadoop_datanode, hadoop_jobtracker, hadoop_namenode, hadoop_tasktracker, hbase_client, hbase_master, hbase_regionserver, hive, hive_server, pig, zookeeper]

2)  login and get cookies:

curl -i -3 -c cookies.txt -X POST https://127.0.0.1:8443/serengeti/j_spring_security_check -d 'j_username=user&j_password=password' --insecure --digest

Replace 127.0.0.1 with your BDE server's IP address, and user & password with your vCenter username and password.

$ cat cookies.txt

# Netscape HTTP Cookie File

# http://curl.haxx.se/rfc/cookie_spec.html

# This file was generated by libcurl! Edit at your own risk.

127.0.0.1 FALSE /serengeti TRUE 0 JSESSIONID 717FBCBA96D1161F23991A2AC30EBF24

3) create a cdh cluster:

$ curl -i -H "Content-type:application/json" -3 -b cookies.txt -X POST -d "@rest01.json" https://127.0.0.1:8443/serengeti/api/clusters --insecure --digest

HTTP/1.1 100 Continue

HTTP/1.1 202 Accepted

Server: Apache-Coyote/1.1

Location: https://127.0.0.1:8443/serengeti/api/task/48

Content-Length: 0

Date: Fri, 28 Mar 2014 02:25:40 GMT
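
BTW, if you want to check the cluster creation progress, you can GET the task URL returned in the Location header with the same session cookie, something like:

curl -i -3 -b cookies.txt -X GET https://127.0.0.1:8443/serengeti/api/task/48 --insecure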

$ cat rest01.json

{
   "name":"cdh",
   "externalHDFS":null,
   "distro":"cdh",
   "distroVendor":"CDH",
   "networkConfig": {"MGT_NETWORK":["defaultNetwork"]},
   "topologyPolicy":"NONE",
   "nodeGroups":[
      {
         "name":"master",
         "roles":["hadoop_namenode", "hadoop_jobtracker"],
         "cpuNum":2,
         "memCapacityMB":3500,
         "swapRatio":1.0,
         "storage":{
            "type":"LOCAL",
            "shares":null,
            "sizeGB":10,
            "dsNames":null,
            "splitPolicy":null,
            "controllerType":null,
            "allocType":null
         },
         "instanceNum":1
      },
      {
         "name":"worker",
         "roles":["hadoop_datanode", "hadoop_tasktracker"],
         "cpuNum":1,
         "memCapacityMB":3748,
         "swapRatio":1.0,
         "storage":{
            "type":"LOCAL",
            "shares":null,
            "sizeGB":10,
            "dsNames":null,
            "splitPolicy":null,
            "controllerType":null,
            "allocType":null
         },
         "instanceNum":3
      },
      {
         "name":"client",
         "roles":["hadoop_client", "pig", "hive", "hive_server"],
         "cpuNum":1,
         "memCapacityMB":3748,
         "swapRatio":1.0,
         "storage":{
            "type":"LOCAL",
            "shares":null,
            "sizeGB":10,
            "dsNames":null,
            "splitPolicy":null,
            "controllerType":null,
            "allocType":null
         },
         "instanceNum":1
      }
   ]
}

mapdeep
Enthusiast

Hi BXD/Venkat/Jesse,

Thanks for the help.

Now I am able to deploy the Pivotal HD cluster after changing the workflow (Execute Create Cluster Operation) script as below.

-----

js={"name":name,

"nodeGroups":[

{"name":"master",

"roles":["hadoop_namenode","hadoop_resourcemanager"],

"cpuNum":MasterCPUNumber,

"memCapacityMB":MasterMemoryCapacityMB,

"swapRatio":1.0,

"storage":{"dsNames":masterDsNames,

"sizeGB":MasterStorageSizeGB},

"instanceNum":1},

{"name":"worker",

"roles":["hadoop_datanode","hadoop_nodemanager"],

"cpuNum":WorkerCPUNumber,

"memCapacityMB":WorkerMemoryCapacityMB,

"swapRatio":1.0,

"storage":{"dsNames":workerDsNames,

"sizeGB":WorkerStorageSizeGB},

"haFlag":"off",

"instanceNum":WorkerInstanceNumber},

{"name":"client",

"roles":["hadoop_client","pig","hive","hive_server"],

"cpuNum":ClientCPUNumber,

"memCapacityMB":ClientMemoryCapacityMB,

"swapRatio":1.0,

"storage":{"dsNames":clientDsNames,

"sizeGB":ClientStorageSizeGB},

"instanceNum":ClientInstanceNumber}],

"networkConfig": { "MGT_NETWORK": [networkName]},

"rpNames":[resourcePoolName],

"distro":"PivotalHD",

"distroVendor":"PHD",

};

-----
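
For anyone else hitting the original error: the key change from the default plugin payload is that BDE 1.1's ClusterCreate API does not accept a top-level "networkName" field; the management network has to go under "networkConfig" instead, i.e. something like:

"networkConfig": { "MGT_NETWORK": ["<your-network-name>"] }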

Regards,

Deepak

admin
Immortal

Great to hear that!
