VMware Cloud Community
portlandjoe
Contributor

How to get 10GigE vmnic connection?

So without all the background and setup information, I guess this should be a simple question to start off.  If the answer is yes, I'll have a lot of other questions.  Basically, can a vmnic be created that uses 10GbE bandwidth when the server itself only has 1GbE adapters?

If this can be done, how?  I'm using vSphere and ESXi 4.0.  When I try to create an adapter using the GUI, the fastest option I get is 1000 Mb full duplex.

Thanks!

Joe


Accepted Solutions
wdroush1
Hot Shot

You'll get 10Gb between the VMs on the vSwitch.  As with regular network connections, you're limited by the slowest link between the two machines, so anything leaving the host is going to be limited to 1Gb.

8 Replies
portlandjoe
Contributor

Okay, so here's my scenario.  One server, no outside connections other than my management connection.  No iSCSI, no fiber, all local storage; just a single host using local disk.  The NICs on the physical host are all 1Gb, and in vSphere the network adapters show as 1000 Full.  The NICs on the VMs (in Windows), if I use VMXNET 3, will let me set 10GbE full duplex.

If VM 1 is an app server and VM 2 is a database server, and they're communicating with each other over the virtual network, should I be able to obtain speeds much greater than 1Gb?  That's what I'm looking for.

Thanks!

Joe

mcowger
Immortal

If those 2 VMs are on the same vSwitch, yes, they will communicate much faster than the speed of the uplink.

--Matt VCDX #52 blog.cowger.us
portlandjoe
Contributor

Alright, so do I have to create an isolated network, away from the management traffic network, to obtain this, or will they just natively communicate at the highest bandwidth possible?

Thanks again folks!

Joe

wdroush1
Hot Shot

10Gb is a lot of database traffic, though.  It won't be "faster" until you start saturating a gigabit link.

But yeah, that configuration can get you 10Gb Ethernet traffic.

mcowger
Immortal

No need for a dedicated vSwitch.  If they are on the same port group/vSwitch, the traffic will stay internal to the host.

But William's point is a good one: 10GBit is a lot of DB traffic....

--Matt VCDX #52 blog.cowger.us
portlandjoe
Contributor

Well, what I'm really trying to solve is an issue with what I can only describe as shared memory.  I have an application that is pretty dated in its design.  Without modifying the app, I need to figure out a way to boost its performance.  If I put it on the DB server, performance is great.  If I split them, I won't say performance is bad, but I want it better, and putting the app on the DB server doesn't scale.

The application uses a lot of cursors, so it has a lot of memory overhead.  From what I understand of the app/DB relationship, if you put the app on the same server as the DB, it uses shared memory instead of ODBC and TCP/IP to do the work.  If you split the machines, all those cursors transfer lots of little bits over ODBC and TCP/IP, back through ODBC again, and so on.
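To illustrate why that chattiness hurts, here is a rough sketch with entirely made-up numbers (the fetch count, round-trip time, and payload size below are assumptions, not measurements from Joe's app): with many small cursor fetches, the time is dominated by per-round-trip latency, which a fatter pipe does nothing to reduce.

```python
# Illustrative only: hypothetical numbers showing why chatty ODBC traffic
# is hurt more by latency than by bandwidth.
fetches = 100_000        # small cursor round trips (assumed workload)
rtt_s = 0.0002           # 0.2 ms LAN round-trip time (assumed)
bytes_per_fetch = 512    # tiny payload per fetch (assumed)

latency_cost_s = fetches * rtt_s           # time spent waiting on round trips
data_mb = fetches * bytes_per_fetch / 1e6  # total payload actually moved

print(f"latency cost: {latency_cost_s:.1f} s for only {data_mb:.1f} MB of data")
```

Moving ~50 MB takes well under a second on even a 1Gb link, so in this sketch nearly all the elapsed time is round-trip waiting; a 10Gb link would not change it.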

My hope was that I could get two VMs on a 10Gbps channel to outperform what we've had so far with a split app/DB.

My test was to copy an 80GB file from one VM to the other.  The best throughput I can get is just over 1Gbps.  I was really hoping for 4, 5, or 6 Gbps, but I'm going to start running some performance tests on the application at this speed to see if it makes a difference.
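As a sanity check on those numbers, a quick back-of-envelope calculation of ideal wire time for the 80GB copy (decimal units assumed, disk and protocol overhead ignored):

```python
# Ideal transfer time for an 80 GB file at various link speeds,
# ignoring disk and protocol overhead (decimal GB/Gb assumed).
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Gigabytes -> gigabits, divided by the link rate in Gb/s."""
    return size_gb * 8 / link_gbps

for gbps in (1, 4, 6, 10):
    t = transfer_seconds(80, gbps)
    print(f"{gbps:2d} Gb/s -> {t:6.0f} s ({t / 60:.1f} min)")
```

At 1Gb/s the copy cannot finish in under about ten minutes no matter what, so a noticeably faster copy would be direct evidence the VMs are exceeding the physical NIC speed.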

Thanks for responding.  This is a can of worms, I know.  My baby won't sleep, so I figured I'd drop something in here.

Joe

rickardnobel
Champion

portlandjoe wrote:

My test was to copy an 80GB file from one Vm to the other.  The best throughput I can get is just over 1Gbps.  I was really hoping to get 4 or 5 or 6 Gbps,

When doing a file copy to test network performance, you have to watch out for disk throughput limits.  Are you sure your disks can deliver reads and writes faster than around 100 MB/s?  You could look for a network benchmarking tool that only uses RAM.
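One such memory-to-memory tool is iperf (run `iperf -s` on one VM and `iperf -c <server-ip>` on the other; exact flags vary by version).  A quick unit conversion also shows why the disks are the prime suspect here:

```python
# Convert link speed (Gb/s) to the MB/s figure disks are rated in,
# to compare the network ceiling against disk throughput.
def gbps_to_mb_per_s(gbps: float) -> float:
    return gbps * 1e9 / 8 / 1e6  # bits/s -> bytes/s -> MB/s

print(gbps_to_mb_per_s(1))   # a ~100 MB/s disk can't even fill a 1 Gb/s link
print(gbps_to_mb_per_s(10))  # 10 Gb/s needs very fast storage or RAM to source
```

Since 1Gb/s works out to 125 MB/s, a copy that tops out "just over 1Gbps" is exactly what disk-bound local storage would produce even on a much faster virtual link.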

My VMware blog: www.rickardnobel.se