First, some background: I have a client who runs a lot of IO-intensive applications. Currently they have 6 desktops running Win2008, each with a Core i7 and a six-SSD drive array. My job is to consolidate these into one server.

The first attempt was a 4-socket, 64-core custom server connected to a 24-disk Fibre Channel SAN. The problem we discovered is that the software they use can't take advantage of hyper-threading, so we have all this power and they can't use it. The desktops run 4+ GHz processors while the server runs at 2 GHz per core; yes, the server can run more applications concurrently, but that won't work for them.

My current thought is a custom-built 5-to-8-blade server. Each blade would be a Core i7 4660K running at 4 GHz with 32 GB of RAM. This leads to the heart of my question: my plan is to run these in an ESX cluster to spread the load out. How should I handle storage to get the best IO? As I said, I have a 24-disk 6Gb SAS array, but that won't cut it. My options are:

A) put 3-4 SSDs in each blade and run RAID 0, or
B) get a 12-24 SSD SAN array and use 16Gb or 8Gb Fibre Channel.

Do you think we can reach the same read/write speeds as their current desktops with either of these options?
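For rough comparison, here's the back-of-envelope math I'm working from. The ~500 MB/s per-drive figure is an assumption (typical SATA 6Gb/s SSD sequential read), and ideal linear RAID 0 scaling is assumed too, so treat these as ceilings, not measurements:

```python
# Rough sequential-throughput comparison (assumed figures, ideal scaling).
SATA_SSD_MBPS = 500  # assumed sequential read of one SATA 6Gb/s SSD

def raid0_throughput(drives: int, per_drive_mbps: int = SATA_SSD_MBPS) -> int:
    """Theoretical RAID 0 sequential throughput: drives scale linearly (ideal case)."""
    return drives * per_drive_mbps

current_desktop = raid0_throughput(6)  # each desktop's 6-SSD array
option_a_blade = raid0_throughput(4)   # option A: 4 local SSDs per blade
print(f"Desktop (6 SSDs): ~{current_desktop} MB/s")
print(f"Option A (4 SSDs per blade): ~{option_a_blade} MB/s")
```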
Any help would be greatly appreciated. Thanks.
8Gb FC gives you about 800 MB/s per direction (roughly 1600 MB/s full duplex). If you use a blade system, that bandwidth will be shared across all the servers, and SAN performance on ESXi depends on your array and your ESXi configuration, such as queue depth. I think you will reach around 600 MB/s on SSDs (sequential read) and 400-450 MB/s on SAS drives.
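A quick sketch of how that shared link divides in the worst case. The per-direction figures are the commonly quoted usable throughputs for 8GFC and 16GFC, and the blade counts are just the 5 and 8 from your question:

```python
# Worst case: all blades saturate one shared FC link at the same time.
# Commonly quoted usable throughput per direction (MB/s):
FC_THROUGHPUT_MBPS = {"8GFC": 800, "16GFC": 1600}

def per_blade_bandwidth(fc_gen: str, blades: int) -> float:
    """Even split of one FC link's per-direction bandwidth across blades."""
    return FC_THROUGHPUT_MBPS[fc_gen] / blades

for gen in ("8GFC", "16GFC"):
    for blades in (5, 8):
        print(f"{gen}, {blades} blades: {per_blade_bandwidth(gen, blades):.0f} MB/s each")
```

In practice multipathing and bursty workloads mean you rarely hit this worst case, but it shows why local SSDs per blade can out-run a shared 8Gb link.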
Definitely go with shared storage for the blades. An SSD array on 16Gb FC will do you nicely. You can also use VMware's Storage DRS to help keep the heavy IO hitters happy. It sounds like money isn't an issue, so getting the IO you want should be easy.