<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: vSphere / ESX 4.1 and 10Gbit NIC in ESXi Discussions</title>
    <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348735#M16647</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, that's pretty much what I would recommend in your case.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Don't forget about Service Console/vmkernel management and put that primarily on NIC2 too.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, be aware that VMotion can really chew up a lot of bandwidth, so if your VMs are heavily utilizing the network, performance could be affected negatively during VMotions. But 10Gbit/s is still a lot, so I doubt it will be noticeable unless DRS constantly initiates a lot of VMotions on your cluster.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 28 Apr 2011 09:20:36 GMT</pubDate>
    <dc:creator>MKguy</dc:creator>
    <dc:date>2011-04-28T09:20:36Z</dc:date>
    <item>
      <title>vSphere / ESX 4.1 and 10Gbit NIC</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348730#M16642</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;one of our customers is planning a new environment with 10Gbit Ethernet. A vSphere cluster based on HP hardware will be set up and managed by us. Due to cost, the plan is to keep the number of 10Gbit Ethernet ports low.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My question is whether it's a good idea to run iSCSI, vMotion and the customer LAN on a trunk of two 10Gbit ports per host. Is this supported, or is it recommended to use more NICs?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any help appreciated.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Kind regards&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 27 Apr 2011 08:41:47 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348730#M16642</guid>
      <dc:creator>GreyhoundHH</dc:creator>
      <dc:date>2011-04-27T08:41:47Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere / ESX 4.1 and 10Gbit NIC</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348731#M16643</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;A config with 2 physical 10G NICs like that is fully supported, although not optimal in some cases.&lt;/P&gt;&lt;P&gt;As a general best practice, which you are probably aware of already, separate iSCSI and VMotion traffic on isolated, non-routed VLANs, and put the ESX(i) management interfaces and VMs on their own respective VLANs too.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you have something like HP blades with HP Virtual Connect Flex-10, you can split the physical NICs into 2x4 "sub-NICs" with their own custom speeds too. This would allow you to handle ESX-side networking as if you had 2 quad-port NICs.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you have Enterprise Plus licensing with dvSwitches, you can also use the new 4.1 feature Network IO Control to soft-control the bandwidth of specific traffic types like iSCSI, VMotion etc.:&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf"&gt;http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://geeksilver.wordpress.com/2010/07/27/vmware-vsphere-4-1-network-io-control-netioc-understanding/"&gt;http://geeksilver.wordpress.com/2010/07/27/vmware-vsphere-4-1-network-io-control-netioc-understanding/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you can do neither of those and use standard vSwitches, you could also consider setting a preferred active uplink on the iSCSI vmkernel interface, with the other uplink being standby only, and doing the same vice versa for the other port groups.&lt;/P&gt;&lt;P&gt;This way, unless one uplink fails, you will always have a guaranteed, dedicated 10G connection for iSCSI regardless of any ongoing VMotion and/or VM traffic. You can't do proper ESX-side iSCSI multipathing with this configuration though, so you are bound to 10G for iSCSI at all times (which should suffice in most cases).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 27 Apr 2011 10:42:22 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348731#M16643</guid>
      <dc:creator>MKguy</dc:creator>
      <dc:date>2011-04-27T10:42:22Z</dc:date>
    </item>
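    <!-- The VLAN separation described in the reply above could be sketched roughly as follows on the ESX 4.x service console; the vSwitch/port-group names, vmnic numbers and VLAN IDs are examples only, not from the thread. -->

```shell
# Hypothetical sketch of the two-uplink layout discussed above, using the
# ESX 4.x esxcfg-vswitch service console commands. All names, vmnic
# numbers and VLAN IDs are illustrative assumptions.

esxcfg-vswitch -a vSwitch1             # create the vSwitch
esxcfg-vswitch -L vmnic0 vSwitch1      # link the first 10G uplink
esxcfg-vswitch -L vmnic1 vSwitch1      # link the second 10G uplink

esxcfg-vswitch -A iSCSI vSwitch1       # one port group per traffic type
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -A CustomerLAN vSwitch1

# tag each traffic type onto its own VLAN (keep the iSCSI and VMotion
# VLANs isolated and non-routed, per the best practice above)
esxcfg-vswitch -v 20 -p iSCSI vSwitch1
esxcfg-vswitch -v 21 -p VMotion vSwitch1
esxcfg-vswitch -v 100 -p CustomerLAN vSwitch1
```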
    <item>
      <title>Re: vSphere / ESX 4.1 and 10Gbit NIC</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348732#M16644</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;If you have this kind of adapter, NIC teaming with two adapters is a good solution that provides redundancy. That said, a dedicated adapter for iSCSI is the general best practice.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 27 Apr 2011 11:02:49 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348732#M16644</guid>
      <dc:creator>MauroBonder</dc:creator>
      <dc:date>2011-04-27T11:02:49Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere / ESX 4.1 and 10Gbit NIC</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348733#M16645</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You fully retain redundancy with the above example too; it's just that you designate one uplink as a dedicated standby NIC, so sharing of one 10G link by all networks occurs only in the event of an uplink failure.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 27 Apr 2011 11:13:50 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348733#M16645</guid>
      <dc:creator>MKguy</dc:creator>
      <dc:date>2011-04-27T11:13:50Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere / ESX 4.1 and 10Gbit NIC</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348734#M16646</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;OK, thanks for your input, that helped a lot. My main concern was whether a setup like this is supported by VMware.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I think we will only use the Enterprise version, so we would have to go with standard vSwitches. Using the described setup with a different preferred active uplink per port group seems to be a good idea. Just to sum it up, you would recommend the following:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Portgroup: iSCSI -&amp;gt; preferred active uplink NIC1, standby NIC2&lt;/P&gt;&lt;P&gt;Portgroup: vMotion -&amp;gt; preferred active uplink NIC2, standby NIC1&lt;/P&gt;&lt;P&gt;Portgroup: customer-LAN -&amp;gt; preferred active uplink NIC2, standby NIC1&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Kind regards&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 28 Apr 2011 06:55:40 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348734#M16646</guid>
      <dc:creator>GreyhoundHH</dc:creator>
      <dc:date>2011-04-28T06:55:40Z</dc:date>
    </item>
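    <!-- The active/standby mapping summarized in the post above could be expressed from the command line roughly as below. Note this uses the esxcli syntax of later ESXi releases (5.x and newer); on ESX(i) 4.1 itself this failover policy is set per port group in the vSphere Client. Port group and vmnic names are examples only. -->

```shell
# Sketch of the per-port-group active/standby policy summarized above,
# in ESXi 5.x+ esxcli syntax (on 4.1 this is configured in the vSphere
# Client). Port group names and vmnic numbers are illustrative.

# iSCSI: NIC1 (vmnic0) active, NIC2 (vmnic1) standby
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI -a vmnic0 -s vmnic1

# vMotion and customer LAN: NIC2 active, NIC1 standby
esxcli network vswitch standard portgroup policy failover set \
    -p VMotion -a vmnic1 -s vmnic0
esxcli network vswitch standard portgroup policy failover set \
    -p CustomerLAN -a vmnic1 -s vmnic0
```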
    <item>
      <title>Re: vSphere / ESX 4.1 and 10Gbit NIC</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348735#M16647</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, that's pretty much what I would recommend in your case.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Don't forget about Service Console/vmkernel management and put that primarily on NIC2 too.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, be aware that VMotion can really chew up a lot of bandwidth, so if your VMs are heavily utilizing the network, performance could be affected negatively during VMotions. But 10Gbit/s is still a lot, so I doubt it will be noticeable unless DRS constantly initiates a lot of VMotions on your cluster.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 28 Apr 2011 09:20:36 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/vSphere-ESX-4-1-and-10Gbit-NIC/m-p/348735#M16647</guid>
      <dc:creator>MKguy</dc:creator>
      <dc:date>2011-04-28T09:20:36Z</dc:date>
    </item>
  </channel>
</rss>

