As you have probably discovered already, your best bet is Enterprise Plus licensing so that you can take advantage of Network I/O Control to split up the 10Gb NICs and share them between iSCSI, NFS, vMotion, virtual machine and management traffic.
Given your other options, I would create two switches. The first is for management and vMotion with the 2x 1Gb uplinks. Configure it active/standby with management active on one NIC and standby on the other, and the opposite for vMotion, so that each 1Gb NIC is dedicated to a single task but can still fail over should it be required.
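If you prefer the command line over the vSphere client, the first switch can be set up with esxcli along these lines. This is only a sketch: the switch name, portgroup names and vmnic0/vmnic1 are example values I've assumed, not taken from your environment.

```shell
# Example only - assumes vmnic0/vmnic1 are the two 1Gb NICs.
# Switch for management and vMotion traffic.
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Management Network"
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="vMotion"

# Opposite active/standby order on each portgroup, so each 1Gb NIC
# normally carries one traffic type but can take over the other on failure.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="vMotion" --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```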
With the second switch, add both 10Gb uplinks and create two iSCSI VMkernel ports. Set each uplink active on one portgroup and unused on the other, then enable iSCSI port binding. See the pic below for an example.
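The iSCSI side can also be scripted. Again a sketch, not gospel: vSwitch1, vmnic2/vmnic3, the vmk numbers, the 192.168.10.x addresses and the vmhba33 software iSCSI adapter name are all assumed example values, so substitute your own.

```shell
# Example only - assumes vmnic2/vmnic3 are the 10Gb NICs and vmhba33
# is the software iSCSI adapter on this host.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One portgroup per iSCSI path, each pinned to a single 10Gb uplink;
# uplinks not listed as active or standby are left unused, which is
# what port binding requires.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3

# A VMkernel port on each portgroup.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static

# Bind both VMkernel ports to the software iSCSI adapter.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

With both vmk ports bound you should see two paths per LUN, and you can then pick a path selection policy (e.g. round robin) to suit your array.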
Now create additional portgroups for NFS and virtual machine traffic and assign them to this same switch. Set the NFS uplinks active/standby and the virtual machine portgroup to route based on originating virtual port ID. Make sure to give each traffic type a different VLAN ID.
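For completeness, the NFS and virtual machine portgroups would look something like the following. The portgroup names, VLAN IDs and uplink names (vmnic2/vmnic3 on a switch I've called vSwitch1) are assumptions for illustration; the iSCSI portgroups would each get their own VLAN ID in the same way.

```shell
# Example only - names, VLAN IDs and vmnic numbers are placeholders.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"

# A separate VLAN per traffic type.
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=30
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=40

# NFS active/standby across the two 10Gb uplinks.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=NFS --active-uplinks=vmnic2 --standby-uplinks=vmnic3
# VM traffic across both uplinks, route based on originating virtual port ID.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" --active-uplinks=vmnic2,vmnic3 --load-balancing=portid
```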
If you are familiar with QoS on the Cisco side you can prioritise iSCSI, NFS and virtual machine traffic accordingly, so as to make sure that iSCSI traffic always has bandwidth available. You can take this a step further by setting transmit rate limits (traffic shaping) on the virtual machine portgroup so that it doesn't consume all 10Gb of the uplink.
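On the ESXi side, the transmit rate limit on a standard vSwitch portgroup can be set with the shaping policy. A sketch only: the portgroup name is an assumed placeholder and the numbers are examples, so check the esxcli help for the exact units (the vSphere client expresses the bandwidth values in Kbit/s and the burst in KB) and size them for your workload.

```shell
# Example only - caps egress (transmit) traffic on the VM portgroup so
# it cannot saturate the shared 10Gb uplinks. Values are placeholders.
esxcli network vswitch standard portgroup policy shaping set \
    --portgroup-name="VM Network" \
    --enabled=true \
    --avg-bandwidth=4000000 \
    --peak-bandwidth=6000000 \
    --burst-size=102400
```

Note that shaping on a standard vSwitch only limits outbound traffic from the host; inbound prioritisation still has to come from the Cisco QoS config.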
I suggest you take a read of this document if you are not familiar with iSCSI configuration.

Simon Greaves
Thanks for the suggestion. I figured I would end up with something like this. I still have some testing to do but this will help a lot.