VMware Cloud Community
gcVmWare
Enthusiast

More space used after migrating from VMFS 5 to VMFS 6

Good afternoon,

We purchased new hardware for our vRealize environment.

I was able to add the new hosts to the current vCenter. The old hosts were removed from the cluster.

I connected both the old Dell MD3820 storage and the new ME4024 storage to the hosts.

The old storage is a 21 TB VMFS 5 datastore.

The new storage is a 30 TB VMFS 6 datastore.

The old storage was 95 percent full. Then I started migrating VMs, but now I get errors saying the datastore is almost full.

My data has grown from 20 TB to more than 30 TB, and I cannot understand what went wrong.

I contacted the storage supplier. In the web interface the data is displayed correctly, with enough space left in the pool.

What am I doing wrong?

I also contacted support about this, but thought it would be handy to ask here as well.

 

 

7 Replies
a_p_
Leadership

That sounds unusual.
You are mixing total capacities, pool sizes, and LUN/datastore sizes in your description.

Please provide a clear picture of what you have on the old and the new storage systems (one way to pull the vSphere-side numbers is sketched below the list), i.e.

  • pool sizes (total/consumed)
  • LUN (datastore) sizes (total capacity/free)
  • LUN provisioning (thin/thick)
  • LUN snapshots enabled?
  • VM provisioning (thin/thick)
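
For the vSphere side, a pyVmomi sketch along the lines below can list capacity, free and provisioned space per datastore. The vCenter hostname and credentials are placeholders, pyVmomi (pip install pyvmomi) is just one possible way to collect these numbers, and this is a rough sketch rather than a finished script.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with your own vCenter and credentials.
    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        GB = 1024 ** 3
        for ds in view.view:
            s = ds.summary
            # Provisioned = capacity - free + uncommitted (space thin disks could still grow into).
            provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
            print(f"{s.name:20s} {s.type:6s} "
                  f"capacity={s.capacity / GB:8.1f} GB "
                  f"free={s.freeSpace / GB:8.1f} GB "
                  f"provisioned={provisioned / GB:8.1f} GB")
        view.DestroyView()
    finally:
        Disconnect(si)

Note that "provisioned" here is what the datastore would need if every thin disk filled up completely, while the array's "allocated" figure only counts blocks that have actually been written.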

André

gcVmWare
Enthusiast

Hello André,

I'll try to write it down in a more organised way.

Thanks for the reply.

Old storage: Dell MD3820F, 20 TB

Capacity 20.75 TB, used 17.81 TB.

This used to be enough to hold all VMs in this environment.

Then I started migrating to the new storage, a Dell EMC ME4024, 30 TB.

I created a 33.7 TB virtual pool. The web interface tells me 6,521 GB is allocated.

When I look in VMware, it shows 15.31 TB.

Both LUNs are thin provisioned.

VM provisioning is both thick and thin, but all machines were migrated with their current settings.

So, concluding: the 20 TB on the old storage turns into 32 TB on the new one.

a_p_
Leadership

Still some questions:

>>> I created a 33.7 TB virtual pool. The web interface tells me 6,521 GB is allocated.
Sounds OK -> used storage
As a side note: Do you have a single pool on the ME4? I'm asking because with a single pool only one controller will be active, and the other one will only take over if the owning controller goes down. Basically a waste of controller resources.

>>> When I look in VMware, it shows 15.31 TB.
Sounds OK too -> provisioned storage

>>> Both LUNs are thin provisioned.
Are these two LUNs used as datastores that were created on the pool?

>>> VM provisioning is both thick and thin, but all machines were migrated with their current settings.
Sounds OK -> shouldn't make a big difference for the physical disk space usage.

>>> So, concluding: the 20 TB on the old storage turns into 32 TB on the new one.
I can't follow you on this. Where exactly do you see this usage?
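
If it helps to see the two numbers next to each other, a pyVmomi sketch like the one below lists used (committed) vs. provisioned space per VM as vCenter reports it. The hostname and credentials are placeholders; treat it as a sketch of one possible way to pull these values.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        GB = 1024 ** 3
        for vm in view.view:
            st = vm.summary.storage
            if st is None:
                continue
            used = st.committed / GB                                   # space actually consumed
            provisioned = (st.committed + (st.uncommitted or 0)) / GB  # worst case if thin disks fill up
            print(f"{vm.name:30s} used={used:8.1f} GB provisioned={provisioned:8.1f} GB")
        view.DestroyView()
    finally:
        Disconnect(si)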

André

gcVmWare
Enthusiast

Here are some answers.

>>> I created a 33.7 TB virtual pool. The web interface tells me 6,521 GB is allocated.
Sounds OK -> used storage
As a side note: Do you have a single pool on the ME4? I'm asking because with a single pool only one controller will be active, and the other one will only take over if the owning controller goes down. Basically a waste of controller resources.

>> Yes, we created one pool that contains two volumes.

>>> When I look in VMware, it shows 15.31 TB.
Sounds OK too -> provisioned storage

>>> Both LUNs are thin provisioned.
Are these two LUNs used as datastores that were created on the pool?

>> Yes, they are, but the datastores are on different hardware and different pools.

>>> VM provisioning is both thick and thin, but all machines were migrated with their current settings.
Sounds OK -> shouldn't make a big difference for the physical disk space usage.

>>> So, concluding: the 20 TB on the old storage turns into 32 TB on the new one.
I can't follow you on this. Where exactly do you see this usage?

In the old situation, all the data I had fitted on the old 20 TB storage. It was overcommitted because of thin provisioning, but it worked.

Now, as you can see in the attached files, I have around 18 TB of data on the old storage and 16 TB of data on the new one, so a total of 33/34 TB.

 

 

a_p_
Leadership

From what I understand, you migrated all data over to the new storage?

If so, I assume that the old MD3820 simply doesn't support UNMAP, i.e. it isn't aware that the previously provisioned disk space is not in use anymore.
Do you see any leftover files/folders with that much data on the old storage system?
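
If you want to cross-check that, a pyVmomi sketch along the lines below compares what the registered VMs commit on each datastore against what the datastore reports as consumed; a large "unaccounted" value points to leftover files/folders (or to space that never got reclaimed). Hostname and credentials are placeholders, so take it as a rough sketch only.

    import ssl
    from collections import defaultdict
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm_view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        committed = defaultdict(int)   # datastore name -> bytes committed by registered VMs
        for vm in vm_view.view:
            if vm.storage is None:
                continue
            for usage in vm.storage.perDatastoreUsage:
                committed[usage.datastore.name] += usage.committed
        vm_view.DestroyView()

        ds_view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        GB = 1024 ** 3
        for ds in ds_view.view:
            consumed = ds.summary.capacity - ds.summary.freeSpace
            by_vms = committed.get(ds.summary.name, 0)
            print(f"{ds.summary.name:20s} consumed={consumed / GB:8.1f} GB "
                  f"by registered VMs={by_vms / GB:8.1f} GB "
                  f"unaccounted={(consumed - by_vms) / GB:8.1f} GB")
        ds_view.DestroyView()
    finally:
        Disconnect(si)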

André

gcVmWare
Enthusiast

Not all data has been migrated yet, because I ran out of space on the new datastore.

I will check the datastores for leftover files.

 

gcVmWare
Enthusiast

VMware support couldn't find anything either. I started migrating all the machines and converting them to thin provisioning.

I also changed all the vRA blueprints and templates so that new machines have thin provisioning on their disks.

This helps, but I still don't understand why all the machines that fitted on a 20 TB datastore do not fit on a 30 TB datastore after migration.
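
For anyone checking the same thing: a pyVmomi sketch like the one below lists the provisioning format of every virtual disk, which makes it easy to spot VMs that are still thick after a migration. The hostname and credentials are placeholders; this is a sketch, not a finished tool.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.config is None:
                continue
            for dev in vm.config.hardware.device:
                if not isinstance(dev, vim.vm.device.VirtualDisk):
                    continue
                backing = dev.backing
                if isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
                    if backing.thinProvisioned:
                        fmt = "thin"
                    elif backing.eagerlyScrub:
                        fmt = "thick eager-zeroed"
                    else:
                        fmt = "thick lazy-zeroed"
                    print(f"{vm.name:30s} {dev.deviceInfo.label:12s} {fmt}")
        view.DestroyView()
    finally:
        Disconnect(si)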
