What is the lowest-cost SAN out there? Looking to provide shared storage, probably NFS or iSCSI, for a 2-node cluster at a remote office.
Trying to keep costs under $3k.
Any ideas?
Hello.
Note: This discussion was moved from the VMware ESXi™ 4 community to the VMware vSphere™ Storage community.
The cheapest, as in lowest-cost, units on the HCL will probably be the Iomega or Buffalo devices.
Good Luck!
The Iomega PX6 is pretty good - I'm using one for demos right now.
(Disclaimer: I work for EMC, the parent company of Iomega.)
They're also clearing out their ix12 series right now to make room for the px12, and those are going for around $3k, I think. That would also give you greater flexibility in the long haul to add more disk drives later on if you need to expand, or need added spindles for performance.
Drobo makes a pretty interesting solution with their Pro device.
If you want low-end enterprise grade, EqualLogic has their PS4000 line, which is very affordable for what you get.
If you are looking for something REALLY cheap, you could always do a virtual or physical SAN based on StarWind or Open-E (or any number of other software SANs).
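To illustrate the DIY software-SAN idea (as a generic free alternative, not StarWind or Open-E specifically), here is a minimal sketch of exporting a file-backed LUN over iSCSI with the Linux tgt target. The file path, sizes, and IQNs are all placeholders I made up; run as root with the tgtd daemon already started:

```shell
# Create a 100 GB sparse backing file to serve as the LUN
dd if=/dev/zero of=/srv/vmstore.img bs=1 count=0 seek=100G

# Create iSCSI target 1 with a made-up IQN
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2011-09.lab.example:vmstore

# Attach the backing file as LUN 1 on that target
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /srv/vmstore.img

# Allow all initiators to connect (lock this down with real initiator IQNs)
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

The ESXi hosts would then add this box's IP as a dynamic discovery target in their software iSCSI adapter. Bear in mind this is unsupported from a VMware HCL standpoint; it only makes sense for labs or non-critical workloads.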
Alternatively, vSphere 5 is offering the VMware vSphere Storage Appliance, http://www.vmware.com/products/datacenter-virtualization/vsphere/vsphere-storage-appliance/overview....
for a very basic, affordable software-based solution.
You won't get a supported SAN for $3k, I think. If anyone knows of one, I am very keen to hear about it!
For $5k you can buy the VMware vSphere Storage Appliance. This will leverage the local disks across both your hosts and present a supported storage device for use by the hosts. Further reading at:
You can only have one per vCenter Server, so if you needed more than one branch office, the setup would be multiple vCenter Servers in Linked Mode: a vCenter Server at each branch managing the branch hosts, plus the main vCenter Server back in the datacenter.
Regards,
Paul
The Iomega boxes I mentioned above are supported, and under $3k.
Hi,
I have also had good experiences with the Promise VessRAID. It's an iSCSI SAN storage unit that uses commodity SATA disks and has four 1 Gbit Ethernet ports. It's listed on the VMware HCL and you can find it online for less than 3,000 USD, disks excluded.
Obviously you have to accept the fact that it only has one storage controller, but for this price range it's pretty good.
Regards,
Luca.
--
Luca Dell'Oca
@dellock6
vExpert 2011
http://www.vuemuer.it
[rewarding points to a useful answer is a way to say thanks]
Myself, I have been disappointed with our PX6-300d. I recently had a rebuild kick off on one of ours and lost all connectivity to it.
After talking to technical support (both Tier 1 and Tier 2) and asking for confirmation in the user forum, all agree the unit is inaccessible during a rebuild. I had a hot spare configured and RAID 5 plus the spare, with six 2 TB drives.
http://www.iomegasupportforums.com/phpbb2/viewtopic.php?t=42588
I waited about 9 hours for the rebuild to complete; with that much data, rebuilding the array just takes too long.
I still question whether this is by design or a bug. The two tech support folks said that's just the way it is, yet there is no documentation on the issue, whereas the forum respondent would only confirm that it "may happen".
Fortunately for me, this is a disk-to-disk backup target.
Interesting - my px6 didn't lose connectivity during a rebuild at all. Granted, it got pretty slow, but it stayed accessible.
Figured I would update my thread as some time has passed. I have used the PX6 in NFS mode as a backup device for over 4 months with VMware, primarily using Veeam backup software, and have not had any recurrence of the lost volume. The only issue I seem to have is reconnecting the NFS volumes after a power-down of our environment. It could also be related to using the 802.3ad EtherChannel connection (or both).
I have only observed this re-mount issue when powering down our environment during testing of our power-outage shutdown script, which, when you are completely virtual, involves all kinds of nuances when starting your environment back up.
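For anyone hitting the same re-mount issue, here is a rough sketch of re-adding a stale NFS datastore from the ESXi shell. The host name, export path, and datastore name below are placeholders, and the `esxcli storage nfs` namespace assumes ESXi 5.x (on 4.x the equivalent tool is `esxcfg-nas`):

```shell
# List NFS datastores and check which mounts came back unavailable
esxcli storage nfs list

# Remove the stale entry, then re-add it.
# "px6-nas", "/nfs/veeam-backup", and "backup-ds" are placeholder values.
esxcli storage nfs remove --volume-name backup-ds
esxcli storage nfs add --host px6-nas --share /nfs/veeam-backup --volume-name backup-ds
```

This only re-establishes the mount; it doesn't address whatever the underlying timing issue with the NAS or the 802.3ad link is during startup.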
Since my original post I have:
1) Upgraded the firmware.
2) Maxed out the drive capacity, to spread the I/O load across more drives.
3) Converted to RAID 6, as we have some development on the NAS too.