I am looking to buy a Dell EMC CX3-10c for two ESX 3 hosts. I assume I can hook the hosts up via fibre directly, as there are 4 fibre ports on the back, so I can attach one server to each of the head units.
Also, I assume the SAS HDDs are a requirement, or has anyone used the slower SATA drives without any performance issues?
Is this OK? Has anyone else used this device or configuration?
Many Thanks
Direct connect is hardly ever a good idea. Also, AFAIK, the Clariions are active/passive, so only two of your hosts would be able to see a particular LUN at a time (2 ports per controller, and one controller is passive). I therefore don't think direct connect will work in your case.
VMware also doesn't seem to certify any storage systems for use in direct-connect set-ups anymore.
For the Clariion to work at all with three hosts, you will need at least one FibreChannel switch and one HBA per host. Ideally, you should have two FibreChannel switches (one port from each controller to each switch) and two HBAs per host (one HBA per fabric.)
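To make the redundancy of that recommended layout concrete, here is a minimal sketch that just enumerates the paths a single host would see with two HBAs (one per fabric) and one port from each storage processor into each switch. The names are illustrative, not actual device names:

```python
# Path count for the dual-fabric design described above: two HBAs per
# host (one per fabric), and one port from each storage processor (SP)
# cabled into each switch. Names below are illustrative assumptions.
from itertools import product

hbas = ["hba0 (fabric A)", "hba1 (fabric B)"]
sps = ["SP-A", "SP-B"]

# Each HBA reaches both SPs through its own fabric's switch, so a host
# gets one path per (HBA, SP) pair: 2 x 2 = 4 paths. Any single HBA,
# switch, or SP port can fail and at least one path survives.
paths = [f"{hba} -> {sp}" for hba, sp in product(hbas, sps)]
print(len(paths))  # 4
```

With direct attach and only two ports per SP, you can't build this topology for more than two hosts, which is the crux of the thread.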
Take a look at the VMware SAN documents - they're really very good.
- Fibre Channel SAN Configuration Guide
- SAN System Design and Deployment Guide
I use a CX3-10c (2 SPs) with two front-end ESX hosts. If you are hoping to set up a fault-tolerant setup linking the ESX hosts and the CX, it's easier to come up with a config based on iSCSI and some gigabit switches than on FC, which is going to require some FC hardware to get the same level of redundancy.
So I guess the iSCSI method might be the way to go. However, if I did need to go the fibre route without an FC switch and used direct attach, would it work and fail over?
Help me out here, guys. I can't see why direct attach will not work, as Stu has described.
If the CX3-10 has two SPs, each with 2 FC ports (which the ones I have seen do), you can connect each of the ESX hosts to each SP; you will need a dual HBA or two singles. That gives you redundancy (each host is connected to each SP) and it works fine. However, what you can't add is a third host or another FC device, like a tape drive or a VCB server. Once you need to scale up, you will need some FC switches.
So where have I gone wrong?
I have not heard of VMware reducing support for direct connect; is this documented anywhere?
What you're describing should - in theory - work, provided you've got a maximum of two hosts. (The OP has three, however.)
For ESX 3, the only storage systems that officially had direct-connect support were the EMC AX100 and AX150. http://www.vmware.com/pdf/vi3_san_guide.pdf
For ESX 3.5, no storage systems have been certified for direct-connect. Yet. http://www.vmware.com/pdf/vi35_san_guide.pdf
As with many bits of hardware, you might get it to work but face potential disappointment if you ask for help with a problem.
Thanks for the reference; I had not noticed before that there was no support for any in 3.5. When I last did this it was with an AX150, which, as you say, was supported. Always best to go with the HCL; you sleep much better. :smileygrin:
Hmm... Looks like I might have to stick with iSCSI only.
(Sorry....) However, if I did stick with a direct FC method, then I should be able to use iSCSI at the same time and therefore use VCB for backup!?
Well, I believe that you can now use iSCSI for VCB too (as of VCB 1.0.3), so you could avoid FC altogether if you really needed to.
Looking at the volume of traffic that VCB would generate, it would make sense to keep it on FC, though. Sounds like a plan.
Be careful about performance on iSCSI, however. It's reportedly possible to get fantastic performance, but it's also easy to get abysmal performance. Take care to go through the docs in detail. Also, Paul Lalonde has made some excellent posts on iSCSI on these forums - search for his posts in your research.
Disclosure: I'm an EMC employee...
Direct connect works fine, and has only recently been added to the HCL as an option (we're seeing more and more requests for this in remote office configs with AX4 and CX3-10 class platforms).
We're working like busy beavers behind the scenes to get this column filled (along with the MSCS column now with broader MSCS support in 3.5u1) in the HCL.....
SATA, SAS and FC drives can all perform well, but there's no magic. For throughput (MBps) workloads, SATA can rock. For IOPS workloads, a 10K SAS drive can do roughly 2x the workload of a 7.2K SATA drive, and a 15K FC or SAS drive can do 3x, so it comes down to basic math. If you need IOPS, you can get it done with SATA, but only if you buy more drives (and at some point, the economics invert).
Generally, a good idea is to have some of each type, and tier workloads.
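The "basic math" above can be sketched quickly. This is a rough sizing sketch under stated assumptions: the per-drive IOPS baseline (80 IOPS for a 7.2K SATA drive) is an illustrative figure, not a vendor spec, and the 2x/3x multipliers are the rules of thumb from the post:

```python
# Rough drive-count math for an IOPS target, using the 1x/2x/3x rule of
# thumb above. BASE_SATA_IOPS is an assumed illustrative baseline.
import math

BASE_SATA_IOPS = 80  # assumed IOPS for one 7.2K SATA drive

DRIVE_IOPS = {
    "7.2K SATA":  BASE_SATA_IOPS,      # 1x baseline
    "10K SAS":    BASE_SATA_IOPS * 2,  # ~2x, per the post
    "15K FC/SAS": BASE_SATA_IOPS * 3,  # ~3x, per the post
}

def drives_needed(target_iops: int, drive_type: str) -> int:
    """Minimum whole drives of one type to meet an IOPS target."""
    return math.ceil(target_iops / DRIVE_IOPS[drive_type])

# Example: a 2000 IOPS workload needs far more SATA spindles than
# 15K spindles, which is where the economics can invert.
for dtype in DRIVE_IOPS:
    print(f"{dtype}: {drives_needed(2000, dtype)} drives")
```

Under these assumed numbers, 2000 IOPS takes 25 SATA drives but only 9 15K drives, which is exactly the "buy more, until the economics invert" trade-off.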
