Monday 17 September 2018

VMware vSphere 5.5 LUN Space Reclaim (UNMAP)


In my lab I have set up VMware vSphere 5.5 along with the NetApp Cluster-Mode 8.2 simulator. I have created an iSCSI network and presented a few LUNs to my ESXi host.


When a thin-provisioned LUN is filled with data and some or all of that data is later deleted, the space is not actually freed from the LUN or from the back-end storage. The blocks are simply marked as free and ready for overwrite, but the size of the datastore and the back-end LUN does not shrink. If you add and delete data frequently, a lot of storage space can end up wasted because the free space is never released back to the array.


Unfortunately, at this point in time there is no automatic way to do this with VMware, block storage and thin provisioning, except by way of scripting (PowerCLI, for example).
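If you want to approximate that automation, one option is a small loop run from the ESXi shell (or scheduled via cron) that issues the unmap command covered later in this tutorial against each thin-provisioned datastore. This is only a minimal sketch; the datastore names are the ones used in this lab, so substitute your own.

# Minimal sketch - reclaim free blocks on a hand-picked list of VMFS datastores.
# Assumes each datastore is backed by a thin-provisioned, VAAI-capable LUN.
for DS in vmware_vswap vmware_db2; do
    echo "Reclaiming free blocks on ${DS}"
    esxcli storage vmfs unmap -l "${DS}"
done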


In previous versions of vSphere we had to run the command vmkfstools -y 60, which would reclaim up to 60% of the free space. That percentage is customizable, i.e. you could use 100%, however you want to make sure you have ample free space available because of the way the reclaiming process works (it temporarily inflates a balloon file inside the datastore covering the requested percentage of free space, so reclaiming 100% can starve running VMs of space).
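For reference, on those older releases the command had to be run from inside the datastore's mount point. A quick sketch, using the datastore name from this lab:

cd /vmfs/volumes/vmware_vswap
vmkfstools -y 60     # attempt to reclaim up to 60% of the free space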


A prerequisite to being able to reclaim LUN space is that your ESXi hosts and back-end storage must support VAAI, and it must be enabled.


Within vSphere 5.5 we no longer use vmkfstools; we now use esxcli, as you will see in the tutorial/demo below.
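For reference, the general form of the new command is shown below. The datastore can be identified either by its volume label (-l) or by its VMFS UUID (-u):

esxcli storage vmfs unmap -l <datastore_label>
esxcli storage vmfs unmap -u <vmfs_uuid>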


At the end of the tutorial I have provided links to my reference material and also additional information for your reading.


Adding Data to our iSCSI LUN and Verifying VAAI is Enabled


1. In my NetApp Cluster-Mode storage I have provisioned 2 LUNs named vmware_db2 and vmware_vswap. These 2 LUNs are presented to my VMware ESXi host. The LUN we will be working on is vmware_vswap. As you can see, the total size of this LUN is 5.25GB and the available size is 5.22GB.


 

2. In our vCenter client, browse to Datastores and Datastore Clusters.


 


 

3. Right-click vmware_vswap and select Browse Datastore.


 


 

4. I have uploaded an ISO file and an EXE file, about 610MB in total.


 


 

5. Now I will go back to the NetApp System Manager and refresh the information for my LUNs. I can see that the space consumed by the files has been subtracted from the available size; the available space on the LUN is now 4.63GB.


 


 

6. Now we need to verify that VAAI is enabled on my ESXi host. Back in the vCenter client, select your ESXi host, click the Configuration tab and, under Software, click Advanced Settings. In the Advanced Settings window click on DataMover and ensure the two options DataMover.HardwareAcceleratedInit and DataMover.HardwareAcceleratedMove are set to 1. 1 means the option is enabled and 0 means it is disabled.
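For anyone who prefers the command line, the same two settings can be checked from an SSH session to the host; a quick sketch:

esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# In both cases the Int Value line in the output should read 1 (enabled)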


 


 

7. Also, while still in the Advanced Settings of the host, we want to make sure that VMFS3.EnableBlockDelete is disabled. This option was enabled in early versions of vSphere 5.0, but VMware now recommends disabling it due to performance issues during Storage vMotion.
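The equivalent check, and the command to disable the option if needed, from the ESXi shell is sketched below:

esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 0    # 0 = disabled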


 


 

8. Another view of the LUN space can be found by establishing an SSH session to the NetApp back-end storage system and issuing the command lun show -v /vol/vmware_vswap/vmware_vswap. In this output we can see the exact used space of the LUN, currently sitting at 635.7MB.


 


 

9. Looking at the datastore from within VMware, we can also see that the free space has decreased.


 


 

 


Deleting Files and Reclaiming LUN Space


10. I will now establish an SSH session to my VMware ESXi host. Once authenticated I issue the command:


esxcli storage vmfs extent list


This command will enable me to see the naa device name of my datastore.


 

11. We need to check that the VMware ESXi host sees the LUN that has been presented to it as a thin-provisioned LUN. We can do this by typing in the following, appending the naa device name found in step 10:


 


esxcli storage core device list -d <naa_device_name>


The other thing we are looking at here is the VAAI Status in the output. We want to make sure this says supported.
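The two lines to look for in that output are sketched below; the values shown are what you would hope to see, not output captured from this lab:

Thin Provisioning Status: yes
VAAI Status: supported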


 

12. I need to make sure the LUN supports the VAAI unmap commands; to do this I type in the following, again using the naa device name:


 


esxcli storage core device vaai status get -d <naa_device_name>


If we see supported next to Delete Status we are happy; if we see unsupported, this means either that the LUN is thick provisioned or that the back-end storage does not support this primitive.
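For reference, the command prints one status line per VAAI primitive. A sketch of what a fully supported device looks like (illustrative values, not captured from this lab):

ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported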


 

13. I will now head back to my vmware_vswap datastore.


 


 

14. Now I’ll delete the ISO file and EXE file that I uploaded earlier.


 


 

15. After refreshing the storage on my VMware ESXi host, we can see that the free space remains the same, 4.29GB. The free space from the deleted files has not been reclaimed.


 


 

16. If I look in the NetApp System Manager I also see that the free space has not been released.


 


 

17. Looking back at my remote SSH session to the NetApp storage, I can see the Used Size is still 635.7MB.


 


 

18. Now it’s time to release these free blocks. I go back into my remote SSH session to my VMware ESXi host and type in the following:


 


esxcli storage vmfs unmap -l vmware_vswap
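If the unmap generates too much load on the array, the optional -n flag can be used to control the reclaim unit, i.e. the number of VMFS blocks processed per iteration (the default is 200). A sketch:

esxcli storage vmfs unmap -l vmware_vswap -n 100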


 

19. Refreshing the LUN information on my NetApp SAN, I can see that the free space has now been released and the Available Size of the LUN is 5.22GB.


 


 

20. Also, checking my remote SSH session to the NetApp SAN, I can see the Used Size of the LUN is now 30.94MB.


 


 

21. Monitoring VAAI unmap commands via esxtop can be done by establishing a remote SSH session to your VMware ESXi host, typing esxtop and pressing the following keys:


 


u (for disk view)
f (for fields)
a (show Device = Device Name)
o (show VAAISTATS)


In the image below I can see 1100 hits in the DELETE column while I ran the esxcli storage vmfs unmap -l vmware_vswap command.
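If you would rather capture the counters than watch them live, esxtop can also be run in batch mode and the resulting CSV reviewed afterwards; a quick sketch (the output path is just an example):

esxtop -b -d 5 -n 60 > /tmp/vaai_stats.csv    # sample every 5 seconds, 60 iterations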


 
