There is a Dell SC3020 storage array. A LUN was created on it to store virtual machines. The array took snapshots, after which the LUN filled up and a virtual machine stopped with the error: "msg.hbacommon.outofspace: /vmfs/volumes/5c4b18ea-39e14e84-39e7-f4e9d4cf4810/AD/AD-000002.vmdk. Click Cancel to terminate this session". vCenter did not allow me to grow the LUN; it had to be done through ESXi. Is there any sensible way to use snapshots on the Dell SC3020?
I am not yet making copies of the entire virtual machine with Backup Exec, because I haven't purchased that software yet. For now I only run agent-based backups from inside the virtual machines.
For now there is no other storage to keep copies of the virtual machines on. There is an LTO-6 tape drive connected to a physical PC. Is it possible to write copies of the virtual machines there?
Dell has updated the SC to version 22.214.171.124, but they said double redundancy is recommended for disks larger than 3 TB. The system started working faster. I wanted to clarify the size of my storage: 18 disks of 1.8 TB each, one of them a hot spare. Is the total about 16.6 TB? Why then does it appear that the disks are half occupied?
Your total disk capacity from the last picture: "Total Space = 29.47TB"
Total space used from the first picture: "Used = 14.28TB"
14.28 / 29.47 = ~48%
Added to the 14.28 would be any snapshot overhead, which may account for most of the difference.
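A quick sanity check of that percentage, using the two figures quoted from the screenshots above:

```python
# Utilization check based on the figures quoted in the thread.
total_space_tb = 29.47   # "Total Space" from the last picture
used_tb = 14.28          # "Used" from the first picture

utilization = used_tb / total_space_tb
print(f"{utilization:.0%}")  # 48%
```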
It's not as precise as that. The Dell SC will use whatever disk space it needs for overheads such as snapshots, and remember the actual disk usage also changes depending on the ratio of data in RAID-10 and RAID-5. Recommending the amount of disk space needed to support the LUNs you want is part of the service offered by Dell. The formulas they use seem to be a closely guarded secret, so the quantity and size of LUNs you could fit onto your 30TB of disks is anyone's guess.
We started by defining our LUN sizes and performance requirements, and Dell told us what disk types, sizes and quantities we needed. We didn't work it out the other way around. Sorry.
You don't have a dedicated hot spare anymore; you have a distributed hot spare. Based on the used capacity of the disks, the system reserves a portion of space across all ~18 disks. In return you gain roughly 1/18 more performance, since the former spare disk is now active.
My calculator returns ~17.3TB of usable space for 18x1.8TB with double redundancy and a 20% RAID-10DM / 80% RAID-6 split. We perform this sizing before we sell a system to a customer.
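A rough sketch of how a figure in that region can be reached. Assumptions (not confirmed by Dell's sizing tool, whose formulas the thread notes are not public): the distributed hot spare reserves about one disk's worth of capacity, the 20/80 split is measured against usable data, RAID-10DM stores three copies (1/3 efficient), and RAID-6 is 80% efficient (8 data + 2 parity). The 29.47 TB figure is the formatted total the array reports, not 18 x 1.8 TB raw.

```python
# Back-of-the-envelope usable-space estimate for double redundancy.
formatted_total_tb = 29.47
disks = 18
spare_tb = formatted_total_tb / disks        # distributed spare ~ one disk

# Raw TB consumed per usable TB at a 20:80 RAID-10DM:RAID-6 split
overhead = 0.20 * 3 + 0.80 * (10 / 8)        # = 1.6

usable_tb = (formatted_total_tb - spare_tb) / overhead
print(f"usable ~ {usable_tb:.1f} TB")        # close to the ~17.3TB quoted
```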
The SC doesn't convert from single to double redundancy silently under the hood. From my point of view, every block is handled like a new block written by a host, which means that without changing the storage policy every block first goes into RAID-10DM and is later converted to RAID-6. We did this once for a customer at around 40% used, and at the end of the conversion we were at ~80%. After a few days of Data Progression running, the system was back down to ~60-65%.
You have already stored around 14.x TB, so I think there isn't enough capacity left for the online conversion. You should ask Dell if they can provide the math and confirm whether you can do it.
But again, there is nothing wrong with RAID-5 and one spare. If you have configured SCOS and the DSM Data Collector correctly, it will e-mail you when a disk failure occurs; if you then contact Dell as fast as possible, and SupportAssist is configured as well, you should get the replacement within 4 hours. Do you have the 4-hour mission-critical support contract or just simple NBD?
Your system is single redundant, so it uses RAID-10 and RAID-5-9. Writes are normally done to RAID-10, which is 50% space efficient, i.e. it uses twice as much disk space to write a given amount of data. Data Progression will move cold data to a lower tier and/or RAID-5, which is 80% efficient for RAID-5-5 or 89% for RAID-5-9 (as you have it), so yes, there is an advantage in letting data progress from RAID-10 to RAID-5.
No, we do not have a 4h mission-critical support contract, but we keep 3 spare disks in stock for this SC. On failure, we will do the replacement ourselves.
So with the given number of disks, to store more data I am better off using RAID-5?
But it is still not clear how much data I can write to this storage.
With RAID-5 — how many TB?
With RAID-6 — how many TB?
When writing to RAID-5-5, 1TB of data will occupy 1.25TB of disk space, i.e. 80% efficient.
When writing to RAID-5-9 (as your system does), 1TB of data will occupy 1.125TB of disk space, i.e. 89% efficient.
When writing to RAID-10 (as your system does), 1TB of data will occupy 2TB of disk space, i.e. 50% efficient.
When writing to RAID-10DM, 1TB of data will occupy 3TB of disk space, i.e. 33% efficient.
Your storage usage shows that you are roughly at the assumed ratio of 20:80 (RAID-10:RAID-5) for calculation purposes.
So for your 27TB you might expect to be able to store roughly 20TB of data in ideal conditions, but I personally wouldn't attempt to go that far; I don't think I would want to create more than 15TB worth of volumes.
If you switch to dual redundancy, the amount drops to around 16TB, and creating more than 12TB of volumes would make me nervous.
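The estimates above can be reproduced from the per-RAID efficiencies listed earlier, assuming the 20:80 split is measured against usable data (an assumption, since the thread notes Dell's exact formulas are not public):

```python
# Single vs. double redundancy estimates for the 27TB of volumes discussed.
capacity_tb = 27.0

# Raw TB consumed per usable TB at a 20:80 mirrored:parity split
single = 0.20 * 2 + 0.80 * (9 / 8)    # RAID-10 + RAID-5-9  -> 1.3
double = 0.20 * 3 + 0.80 * (10 / 8)   # RAID-10DM + RAID-6  -> 1.6

# The "roughly 20TB" and "around 16TB" figures above
print(f"single redundancy: ~{capacity_tb / single:.1f} TB usable")
print(f"double redundancy: ~{capacity_tb / double:.1f} TB usable")
```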
Thank you, the sizing is clear now. We will plan a disk purchase to increase capacity. One last question: does it make sense to move all the ESXi hosts into a Cluster, given that the ESXi hosts are connected via SAS?
As I understand it, that would give me fault tolerance: if something happens to the channel between a server and the storage, will that server reach the storage through another server? That is how they are connected now.
Yes, you can easily move the Server objects into a Cluster object: just select the Server and right-click "Move to Cluster" or similar. The Server Cluster is only a way to inherit some settings, mostly the volume ACLs, to a group of servers instead of one by one.
Edit: You should check why your ESX1 has 4 volumes while the others have only 2. For VMware HA, vMotion and FT to work correctly, all hosts need access to all volumes (ESXi datastores).
Was that you with the 18 HDDs? Consider buying some SSDs as Tier 1 plus additional HDDs, and check whether you already bought the Data Progression license, which moves data blocks between tiers. I don't know your change rate, but a standard setup is up to 20-25% SSD (Tier 1) vs. 75-80% HDD (Tier 3).