VMware Cloud Community
xianmacx
Contributor

How many VMs to saturate 1Gb iSCSI?

Hello everyone,

I know this question will have a lot of "depends"-type scenarios, but I am just trying to get a ballpark of what's realistic.

Let's assume I have mostly lightly utilized VMs: web servers, DCs, etc. No database servers or really storage-intensive VMs.


Approximately how many VMs could you have running over that link before it's saturated and causes performance problems on the server? Also, the SAN will not be a bottleneck in this question.

10 VMs per 1Gb link?

Again, I know there are a lot of "depends," but I'm just trying to get an idea.


Thanks,

Ian  

10 Replies
mcowger
Immortal

What you are really asking for here is a survey of what people have, not how many you can fit, because the answer is obviously (as you note) that it depends.

*Personally* I have seen a single (heavy) VM do it.  I've also put 50 VMs on a host with less than 20% usage.

--Matt VCDX #52 blog.cowger.us
xianmacx
Contributor

Thanks for the response... So maybe a better question: how can I tell what percentage of that link is utilized?

Just watch the transmit/receive rates on the NIC?

If I did my math right, a 1Gb link *could* push around 125 MB per second, or 125,000 KBps?

So if my vmnic usage shows 90,000 KBps, can I say that link is almost totally saturated?

Thanks again,

Ian

mcowger
Immortal

xianmacx wrote:

> Thanks for the response... So maybe a better question: how can I tell what percentage of that link is utilized?
> Just watch the transmit/receive rates on the NIC?

Exactly.

> If I did my math right, a 1Gb link *could* push around 125 MB per second, or 125,000 KBps?

From a physical perspective, yes. Don't forget Ethernet/TCP/IP overheads. Effective throughput is more like 112 MB/s.

> So if my vmnic usage shows 90,000 KBps, can I say that link is almost totally saturated?

Yup. Tells me you don't have 'lightly' utilized VMs.

--Matt VCDX #52 blog.cowger.us
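
For reference, here's a quick sketch of that arithmetic in Python. The ~112 MB/s effective figure and the 90,000 KBps observation are the numbers from this exchange; treat it as a back-of-envelope check, not a measurement tool:

```python
# Back-of-envelope utilization check for a 1 GbE iSCSI path.
# Assumes ~112 MB/s effective throughput after Ethernet/TCP/IP/iSCSI overhead,
# per the reply above; 90,000 KBps is the observed rate from this thread.

LINK_GBPS = 1.0
theoretical_mb_s = LINK_GBPS * 1000 / 8    # 1 Gbps is roughly 125 MB/s on the wire
effective_mb_s = 112.0                     # rough figure after protocol overhead

observed_kb_s = 90_000                     # KBps as shown in the performance charts
observed_mb_s = observed_kb_s / 1000       # about 90 MB/s

print(f"Theoretical: {theoretical_mb_s:.0f} MB/s")
print(f"Effective:   {effective_mb_s:.0f} MB/s")
print(f"Observed:    {observed_mb_s:.0f} MB/s "
      f"({observed_mb_s / effective_mb_s:.0%} of effective capacity)")
```

With those inputs, 90 MB/s works out to roughly 80% of the effective capacity, which is why the link counts as nearly saturated.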
xianmacx
Contributor

Thank you very much!  

JedwardsUSVA
Enthusiast

You can likely do more than 10 VMs. I would imagine that if you don't have the numbers already, you probably don't have anything too I/O-intensive. Provided you are able to reach 1 Gbps, you have a separate connection for vMotion traffic, and you average 17 Mbps in disk I/O per VM (which is probably higher than your actual average), your environment could theoretically hold a maximum of 59 VMs. Of course, you could have a VM that throws that completely out the window.
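
As a rough sketch of where that estimate comes from (purely illustrative; the 1 Gbps and 17 Mbps figures are the assumptions stated above):

```python
# Rough VM-count ceiling from the assumptions in this post:
# ~1 Gbps usable on the iSCSI link and ~17 Mbps of average disk I/O per VM.

link_mbps = 1000          # assumed usable bandwidth of the 1 GbE iSCSI link
per_vm_mbps = 17          # assumed average storage traffic per VM

ceiling = link_mbps / per_vm_mbps
print(f"Theoretical ceiling: about {ceiling:.0f} VMs")   # ~59 VMs before the link saturates
```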

What type of servers are in your environment?      

xianmacx
Contributor

You are correct, nothing too I/O-intensive. Right now it's just some web servers and infrastructure servers.

More than anything, I now understand what numbers to look at and how they translate to utilization.

So let's say I see that my NIC is near 100,000 KBps utilization. If I add a second NIC, with round-robin iSCSI binding to my array, will I effectively have 2Gb of storage throughput? I know they are separate links, so no one connection can push more than one NIC at a time, but different VMs will spread their traffic over both links. Again, I know it's not going to split one conversation across the links, but VM1 can use NIC1 and VM2 can use NIC2, etc.?
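
To illustrate the idea being asked about, here is a toy sketch with hypothetical loads and a simplified "least-loaded path" placement; real iSCSI multipathing behavior depends on the array and the path selection policy, as the next reply cautions:

```python
# Toy model of spreading storage traffic from different VMs across two 1 GbE iSCSI
# paths. No single stream exceeds one link, but the aggregate can approach ~2x.
# All numbers are hypothetical; real multipathing/PSP behavior is more nuanced.

PATH_CAP_MB_S = 112.0                  # assumed effective MB/s per 1 GbE path
paths = [0.0, 0.0]                     # current load on each path, in MB/s

def place(load_mb_s: float) -> None:
    """Assign a VM's storage stream to the least-loaded path (illustrative only)."""
    idx = paths.index(min(paths))
    paths[idx] += min(load_mb_s, PATH_CAP_MB_S)   # one stream cannot exceed one link

for vm_load in [60, 60, 40, 40]:       # hypothetical per-VM storage loads in MB/s
    place(vm_load)

print(f"Per-path load: {paths} MB/s")                  # e.g. [100.0, 100.0]
print(f"Aggregate:     {sum(paths):.0f} MB/s vs {PATH_CAP_MB_S:.0f} MB/s on one link")
```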

mcowger
Immortal

Be careful - the way storage protocols use multiple links and the way VMs use a round-robin vSwitch are very different.

You mentioned VMs earlier - now you mention storage.  Are these NICs/vSwitches used for VMs or for storage access?

--Matt VCDX #52 blog.cowger.us
xianmacx
Contributor

All storage traffic.

Using iSCSI binding on the Ethernet NICs. NetApp storage. All VM traffic is separate.

I'm trying to compare (4) 1Gb iSCSI links against a 4Gb FC link, for example.

If very few VMs (if any) could push a 1Gb link on their own, how different are the two, performance-wise?


sorry for the confusion,

Ian
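
For ballpark numbers only, a rough comparison of the two options. The ~112 MB/s per GbE path figure comes from earlier in the thread; ~400 MB/s for 4Gb FC is an approximate, commonly cited usable rate:

```python
# Rough bandwidth comparison: 4 x 1 GbE iSCSI vs a single 4 Gb FC link.
# Both figures are approximations; congestion, array limits, and multipathing
# policy will dominate in practice.

GBE_PATHS = 4
GBE_EFFECTIVE_MB_S = 112.0    # assumed effective MB/s per 1 GbE iSCSI path
FC_4G_MB_S = 400.0            # approximate usable MB/s for 4 Gb Fibre Channel

iscsi_aggregate = GBE_PATHS * GBE_EFFECTIVE_MB_S
print(f"4 x 1 GbE iSCSI aggregate: ~{iscsi_aggregate:.0f} MB/s "
      f"(a single session tops out near {GBE_EFFECTIVE_MB_S:.0f} MB/s)")
print(f"4 Gb FC, single link:      ~{FC_4G_MB_S:.0f} MB/s, available even to one stream")
```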

JedwardsUSVA
Enthusiast

Hands down, I would go with FC. iSCSI works fine, but FC will be far more reliable and faster if you can go with 2 x 4 Gbps FC HBAs. If you go that route, you probably won't have to worry about I/O. 8 Gbps is available for a little more $. If you are on a tight budget and can't get 2 HBAs, then iSCSI will do the trick, just not as well.

JedwardsUSVA
Enthusiast

Oh, and one more thing: FCoE is the best option if you've got the dough!
