I would dedicate the 2 10Gb NICs to iSCSI and use your 1Gb NICs for everything else.
10Gb is supported by ESXi 5.x, and it absolutely has been done before. I would still recommend a few more pNICs. What kind of storage are you using, and what traffic is intended for the 10Gb NICs?
We use a combination of Compellent and EVA storage. Basically, our network guys are upgrading all our connections and my boss is looking to switch our current setup. I think we're just going to have to tag a lot of networks on both connections instead of spreading things out like we currently do. Not sure how much of an improvement this will be network-wise, as I am new to the 10Gb game.
All connected via iSCSI? I would push for at least two cards with two 10Gb ports each per host. That way a single card failure doesn't bring down your host. In my last deployment with iSCSI we used dual 4-port cards, each with 2x 1Gb and 2x 10Gb ports. The price isn't much more and it gives you added flexibility in your design. We used the 10Gb ports for the iSCSI traffic and VM traffic; the 1Gb ports were for vMotion and Management.
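As a rough sketch of that split on ESXi 5.x, assuming hypothetical vmnic numbering (vmnic0/1 = 1Gb ports, vmnic2/3 = 10Gb ports on separate cards) and made-up vSwitch names and VLAN IDs, the standard-vSwitch assignments might look like:

```shell
# Sketch only -- vmnic numbers, vSwitch names, and VLAN IDs are assumptions.

# vSwitch0 (default): management + vMotion on the 1Gb ports
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1

# vSwitch1: iSCSI + VM traffic on the 10Gb ports (one port from each card)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Tag the different networks onto the shared 10Gb uplinks as VLAN port groups
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-A --vlan-id=20
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM-Net
esxcli network vswitch standard portgroup set --portgroup-name=VM-Net --vlan-id=30
```

This keeps storage and VM traffic on the 10Gb links while the 1Gb ports carry management and vMotion, matching the layout described above.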
Here's another thread that may help: 4 x 10GB NIC's - Best Practices
We use both iSCSI and Fibre Channel. We will be using two 10Gb NICs, but keeping two extra 1Gb NICs isn't a bad idea considering we have those in place anyway. Could I get away with just the two 10Gb NICs?
You can, but for redundancy it would be best if they were on separate cards.
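One quick way to check that from the host itself (a sketch, not something mentioned in this thread) is to list the physical NICs and compare PCI bus addresses:

```shell
# List physical NICs; the PCI Device column shows each vmnic's bus address.
# Two vmnics whose addresses differ in the bus number (e.g. 0000:04:00.0
# vs 0000:41:00.0) are on separate physical cards; two that differ only in
# the final function digit (.0 vs .1) are ports on the same card.
esxcli network nic list
```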
Just confirmed that they are indeed two separate cards. Golden, I think?
Will talk about our options. Thanks a lot!