VMware Cloud Community
kfeina
Enthusiast

1 Gb iSCSI - VMs inside.

Hello; I have a question:

If you have an ESX 3.0 host with a 1 Gb NIC (for iSCSI traffic, software initiator), and on the other side a storage array like a NetApp FAS270 with a 1 Gb link for iSCSI traffic,

how many VMs can you put on the storage before the virtual machines begin to crash?

Is it a good rule of thumb that one VM needs 100 Mb, so with a 1 Gb iSCSI link you can run 10 VMs on the storage? More? Less?
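
This is the arithmetic I have in mind; the 100 Mb per VM and the 80% usable-bandwidth figures are my assumptions, not measurements:

```python
# Back-of-envelope check of the "100 Mb per VM" rule of thumb.
LINK_GBPS = 1.0        # 1 Gb iSCSI link
PER_VM_MBPS = 100.0    # assumed average storage traffic per VM, in Mb/s
USABLE_FRACTION = 0.8  # assume ~80% of the link survives protocol overhead

usable_mbps = LINK_GBPS * 1000 * USABLE_FRACTION
max_vms = usable_mbps / PER_VM_MBPS
print(f"~{usable_mbps:.0f} Mb/s usable -> about {max_vms:.0f} VMs at {PER_VM_MBPS:.0f} Mb/s each")
# ~800 Mb/s usable -> about 8 VMs at 100 Mb/s each
```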

Will ESX distribute the load evenly across the iSCSI link?

Thanks a lot.

10 Replies
bryanwmann
Enthusiast

Let me see if I have this correct.

1. You have a 1 Gb NIC that you will be using for iSCSI traffic.

2. The NAS device has a 1 Gb uplink as well.

How big is the iSCSI target?

A good practice is usually 20-25 VMs per target (LUN). This is due not to actual disk size needs, but to the locking of the target for metadata updates, which locks the whole target. If you get a lot of VMs on a target and a lot of operations that cause the metadata to be updated, causing locking, you will see performance issues in the VMs.
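
That guideline is easy to turn into a sizing calculation; a minimal sketch, where the total VM count is a hypothetical example and 20 is the conservative end of the 20-25 range:

```python
import math

TOTAL_VMS = 60    # hypothetical number of VMs to host
VMS_PER_LUN = 20  # conservative end of the 20-25 per-target guideline

# Spreading VMs across more, smaller targets reduces contention,
# since each metadata lock affects only one target.
luns_needed = math.ceil(TOTAL_VMS / VMS_PER_LUN)
print(f"{TOTAL_VMS} VMs at <= {VMS_PER_LUN} per LUN -> {luns_needed} LUNs")
# 60 VMs at <= 20 per LUN -> 3 LUNs
```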

bertdb
Virtuoso

kfeina, what makes you think that virtual machines would start to crash if you have slow (or overloaded) storage?

Jae_Ellers
Virtuoso

That depends on your backing disks. Do you have 15k 140 GB FC disks, or 7200 rpm 300 GB SATA disks, or something else? How many spindles are in your aggregate?

On NetApp you need good disks for iSCSI to work well with VMware if the filer is doing any other work. We tanked a 3050 running 40 VMs while also using it for production ClearCase VOB and view remote-pool storage for hundreds of users, not to mention the TBs of CIFS and NFS file shares it was managing. It was all on big SATA disks, so the system spent too long reading that data and dragged the whole system down.

You need to monitor performance on the NetApp as you start bringing VMs up.
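
For a first guess at what your spindles can do, here is a rough sketch using commonly quoted per-disk IOPS rules of thumb; the per-spindle figures and the 14-disk aggregate are assumptions, not NetApp specifications:

```python
# Rule-of-thumb random IOPS per spindle (assumed, workload-dependent).
IOPS_PER_SPINDLE = {
    "15k FC": 180,
    "7200 SATA": 80,
}

def aggregate_iops(disk_type: str, spindles: int) -> int:
    # Crude estimate: spindles * per-disk IOPS, ignoring RAID write
    # penalty, controller cache, and any competing CIFS/NFS load.
    return IOPS_PER_SPINDLE[disk_type] * spindles

for disk_type in IOPS_PER_SPINDLE:
    print(f"14 x {disk_type}: ~{aggregate_iops(disk_type, 14)} IOPS")
# 14 x 15k FC: ~2520 IOPS
# 14 x 7200 SATA: ~1120 IOPS
```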

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
kfeina
Enthusiast

1. Yes, I have a 1 Gb NIC that I will use for iSCSI traffic.

2. Yes, I have a NAS device (NetApp FAS270) that has a 1 Gb uplink, and I want my VMs to run on it.

My question is:

With 1 Gb for iSCSI traffic, how many VMs can I run before I saturate the 1 Gb link?

I suppose my bottleneck will not be the NetApp. I suppose my bottleneck will be the 1 Gb link.

Thanks a lot.

kfeina
Enthusiast

kfeina, what makes you think that virtual machines would start to crash if you have slow (or overloaded) storage?

Because my VMs will run on the storage, and perhaps ESX will lose contact with them through the 1 Gb link because of performance problems.

bryanwmann
Enthusiast

The issue is rarely bandwidth over the 1 Gb link; the issues usually occur because the storage target is not configured properly for the applications using it.

From the searchstorage.com SAN All-in-One Guide, SAN Chapter 2, page 11:

http://viewer.bitpipe.com/viewer/viewDocument.do?accessId=5758465

Bandwidth has an impact on performance when large requests are being processed. In this case, most of the work is spent transferring the data over the network, making bandwidth the critical path. However, for smaller read and write requests the storage system spends more time accessing data, making the CPU, cache memory, bus speeds and hard drives more important to overall application performance.

Unless you have a bandwidth-intensive application (e.g., streaming media or backup data) the difference in performance will be minimal. Enterprise Strategy Group (ESG) Lab has tested storage systems that support iSCSI and FC and the performance difference is minimal, ranging between 5% and 15%. In fact, an iSCSI storage system can actually outperform an FC-based product depending on other factors more important than bandwidth, including the number of processors, host ports, cache memory and disk drives, and how wide they can be striped.

The slowest component of the storage performance chain is the hard disk drives.

So if you have any applications like those mentioned above, you may want to be sure they use a separate link if one is available.
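
To see when the link, rather than the disks, becomes the critical path, multiply request rate by request size and compare it with the link capacity; the workloads below are made-up illustrations, not measurements:

```python
# Roughly 1 Gb/s = 125 MB/s raw; assume ~80% usable after protocol overhead.
LINK_MB_S = 1000 / 8 * 0.8  # ~100 MB/s

def throughput_mb_s(iops: float, block_kb: float) -> float:
    # Throughput implied by a request rate and block size.
    return iops * block_kb / 1024

workloads = {
    "small random (2000 x 8 KB)": throughput_mb_s(2000, 8),
    "large sequential (500 x 256 KB)": throughput_mb_s(500, 256),
}
for name, mb_s in workloads.items():
    verdict = "link-bound" if mb_s > LINK_MB_S else "disk/CPU-bound"
    print(f"{name}: {mb_s:.0f} MB/s -> {verdict}")
# small random (2000 x 8 KB): 16 MB/s -> disk/CPU-bound
# large sequential (500 x 256 KB): 125 MB/s -> link-bound
```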

java1313
Contributor

I'm looking at EqualLogic iSCSI storage, the PS300E. This SAN can perform a lot of operations:

60,000 IOPS and 300 MB/sec.

Our servers look like this: 2 Xeons, 8 GB RAM (up to 16 GB), 2-4 SATA/IDE disks, and the latest version of VMware Server.

Each server hosts 10-20 small VMs (256-768 MB RAM). These VMs are used for software testing and development, i.e. there is usually no high load on them.

I measured disk transfers/sec on my existing VMware Server hosts.

The average value is 400, with short spikes up to 1,000.

One disk transfer/sec equals 1 IOPS.

That is, I could connect 20 VMware Server hosts to one storage array like the PS300E.
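
Here is that arithmetic as a sketch; note that 60,000 IOPS is the vendor's best-case figure and may not hold for real workloads:

```python
ARRAY_IOPS = 60_000    # EqualLogic's advertised (best-case) figure
AVG_PER_HOST = 400     # my measured average disk transfers/sec per host
PEAK_PER_HOST = 1_000  # my measured short-spike peak per host

print(f"By average load: {ARRAY_IOPS // AVG_PER_HOST} hosts")
print(f"By peak load:    {ARRAY_IOPS // PEAK_PER_HOST} hosts")
# By average load: 150 hosts
# By peak load:    60 hosts
```

On paper that leaves headroom well beyond 20 hosts, which is why I suspect the real bottleneck is elsewhere.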

I suppose ESX's IOPS performance should be about the same as VMware Server's on my hardware.

But I don't believe there will be no bottlenecks.

I would like to know how many VMs can use the storage over a single 1 Gb network.

Are there any other possible bottlenecks?

Please share your positive or negative feedback about a SATA iSCSI SAN for requirements like mine.

Thanks.

femialpha
Enthusiast

I would not recommend designing your storage based on the marketing numbers on the EQL website. Those numbers are taken from cache and in no way represent real-life scenarios. Take a look at this thread to see what would be more realistic.

The number of VMs on any array will ultimately be determined by what kind of load they generate.

http://www.vmware.com/community/thread.jspa?threadID=73745&tstart=0

christianZ
Champion

Do you mean iSCSI or NAS here?

For both the same holds: your VMs will be slower / the response times will be higher, but they won't crash.

I would say per 1 Gb link you can serve 10-20 VMs; that depends, of course, on what they do.

Remember, NAS seems to be a little slower than iSCSI.

christianZ
Champion

As femialpha mentioned, check that thread -

but remember that VMware Server seems to be much slower than ESX (compare the numbers).

The theoretical numbers from EQL are a little amusing - the same for all models (SATA, SAS 10k, SAS 15k).
