VMware Cloud Community
smkrt
Contributor

Custom iSCSI SAN. Suggestions Needed.

I am planning to build custom iSCSI shared storage for my virtual infrastructure, to be used for VMotion and the other ESX 3.5 features that require central SAN storage to be unlocked.

What I am looking into is replacing my current network cards (Gigabit Ethernet) with 10 Gigabit fibre network cards.

I would connect all of my servers using a fibre switch, then configure a software-based iSCSI target on one of my servers and configure all the others to use that iSCSI target over the 10 Gb fibre network cards.

I need suggestions on the above idea: will it work, or should I modify it? The goal is a fully functional ESX 3.5 deployment with all the features of VMotion and DRS.

Please comment.

Thanks.

1 Solution

Accepted Solutions
sstelter
Enthusiast

What do you plan to do with your SAN? If you are running typical applications on your VMs, like SQL, Exchange, and file and print, you will find the bottleneck to be the RAID controller and the number of disks it controls. Yes, a 15k SAS drive can push a lot of throughput, but that throughput assumes a large transfer size and a finite number of I/O operations per second (IOPS). If your 15k SAS drive does 200 IOPS (a reasonable number) and the payload is Exchange (8 kB transfers), then the throughput of your SAS drive is at most 200 x 8 kB = 1.6 MB/s. And that doesn't include any IOP penalties for RAID 5.

I would encourage you to consider investing in more drives rather than in an expensive fabric. Using the math above, you can quickly see how 10 drives won't fill even a single Gigabit Ethernet link. On the other hand, if you expect to spend time pushing large sequential reads or writes with an application, throughput could be an issue. I suspect that such a payload is a special case, and virtualizing a system with that kind of payload will not deliver the desired results no matter how big and expensive the fabric.

I hope this helps...good luck to you...sounds like a fun project!
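The back-of-the-envelope math above can be sketched in a few lines. The 200 IOPS and 8 kB figures come from the post; the RAID-5 write penalty of 4 back-end I/Os per random write is a common rule of thumb added here for illustration, not something the post states:

```python
def throughput_mb_s(iops, transfer_kb):
    """Upper-bound throughput for a given IOPS budget and transfer size.

    Uses decimal units (kB, MB) to match the 200 x 8 kB = 1.6 MB/s figure
    in the post above.
    """
    return iops * transfer_kb / 1000.0

per_drive = throughput_mb_s(200, 8)   # one 15k drive, 8 kB Exchange-style I/O
ten_drives = 10 * per_drive           # a 10-drive array, ignoring RAID overhead
gigabit_wire = 1000 / 8.0             # raw Gigabit Ethernet, ~125 MB/s

# Common rule of thumb (not from the post): each random RAID-5 write costs
# ~4 back-end I/Os (read data, read parity, write data, write parity),
# so the effective random-write IOPS of the array drop accordingly.
raid5_write_iops = 10 * 200 / 4

print(per_drive)        # 1.6 MB/s per drive
print(ten_drives)       # 16.0 MB/s -- still far below one gigabit link
print(raid5_write_iops) # 500.0 effective random-write IOPS
```

With these assumptions, even ten 15k drives deliver roughly 16 MB/s of random small-block throughput, an order of magnitude below what a single gigabit link carries, which is the post's point about spending on spindles before fabric.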

4 Replies
kjb007
Immortal

Make sure your 10G card is on the HCL. Other than that, it all looks fine. Use the links below, or the iSCSI design and configuration guide. You should have a redundant path to the storage, and not rely on just one path.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
smkrt
Contributor

Thanks for the quick response. I would like to know what the throughput of the system would be when I configure the network in this manner.

I mean to say: I have 15,000 RPM SCSI drives with RAID 5 on the server that I am planning to use as the iSCSI target. Will I achieve performance that can compare to a real hardware-based SAN?

kjb007
Immortal

Check this thread (http://communities.vmware.com/thread/73745). It's very long, but it includes a lot of real-world results.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB