What is the best practice for placing vmkernel ports on the Nexus 1000v? Is it a good idea to put all VMs and vmkernel ports on the Nexus 1000v, or should some vmkernel ports, like management, stay on a standard or distributed vSwitch? If something happens to the 1000v, all management and VMs will be inaccessible.
any tips?
You'll want to make sure the vmkernel interfaces are backed by system VLANs on the 1000v (as opposed to regular VLANs). System VLANs are fully functional in headless mode, whereas regular VLANs have some limitations. In a nutshell, with the VSMs offline, existing connections will be maintained for ports backed by regular VLANs, but new connections (including reconnects due to VM or host restarts) will fail. See this for more info:
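For reference, a port profile for a vmkernel interface backed by a system VLAN looks roughly like this on the VSM. This is a minimal sketch; the profile name and VLAN ID (10) are placeholders, so substitute your own management VLAN:

```
port-profile type vethernet vmk-mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 10    ! placeholder management VLAN
  system vlan 10               ! marks VLAN 10 as a system VLAN on this profile
  no shutdown
  state enabled
```

The key line is `system vlan 10` — without it, the vmkernel port behaves as a regular VLAN port and is subject to the headless-mode limitations above.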
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html#wp9000293
If you have sufficient uplinks available, another option as you mentioned is to use a VSS/VDS. Either way, don't forget to protect your VSMs' vNICs as well as the vmkernel interfaces.
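To protect the path itself, the uplink (ethernet-type) port profile carrying the control, packet, and management VLANs also needs those VLANs flagged as system VLANs. A rough sketch, assuming placeholder VLAN IDs 10, 20, and 30 and a hypothetical profile name:

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  system vlan 10,20,30    ! control/packet/management VLANs (placeholders)
  no shutdown
  state enabled
```

Note that a VLAN only gets system-VLAN treatment end to end when it is flagged on both the vethernet profile and the uplink profile it traverses.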
I haven't really tried shutting down the VSMs to check whether I still have connectivity on the vmkernel NICs. The vmkernel NICs are on the Nexus 1000v and are defined as system VLANs. Does that mean that if the VSM goes offline (both primary and secondary), I will still be able to communicate with the ESXi host if it's on a 1000v port group defined as a system VLAN?
Yep, that's correct. System port profiles do not require any communication between the VEM and VSM.