vSAN Deletion Process
Block-Level Architecture Foundation

KorGrid's vSAN operates on a block-level architecture:

- VM and tenant disks are divided into multiple blocks
- Each block receives a unique cryptographic hash
- Blocks are distributed across nodes using hash-based algorithms
- Both primary and redundant copies are maintained
- Tenants operate as LXC containers with their own storage allocations within the parent vSAN

How Deletion Works

Reference Counting System

When you delete a VM, drive, or tenant, the system doesn't immediately delete the actual data blocks. Instead, it removes the references to those blocks from the hash map.

- Each block maintains a reference count tracking how many objects use it
- Tenant storage follows the same reference counting as individual VMs, but operates within LXC container boundaries

Deduplication Impact

Since KorGrid uses block-level deduplication:

- Multiple VMs may share identical blocks (same hash)
- Tenant storage can share blocks with the parent system or other tenants
- Deleting one VM or tenant only decrements the reference count
- Blocks are only marked for deletion when their reference count reaches zero

Garbage Collection Process

The actual deletion happens through background processes:

- vSAN walk: the system periodically scans for unreferenced blocks
- Blocks with zero references are marked for reclamation
- Physical storage space is then freed and made available
- Tenant deletions trigger the same garbage collection as VM deletions

Immediate vs. Actual Reclamation

- Immediate: the UI shows space as "freed" right away
- Actual: physical space reclamation happens during background vSAN operations

This is why you might not see storage space decrease immediately after a deletion.

Drive / VM Deletion

When you delete a VM or drive:

- References are removed from the system
- Hash map entries are updated
- Background processes handle the actual block cleanup

Snapshots and Deletion

Deleting a VM also deletes its VM snapshots. However, the VM remains in any system snapshots taken while it existed.
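The reference-counting and garbage-collection behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not KorGrid's actual implementation: `BlockStore`, `BLOCK_SIZE`, and all method names are hypothetical.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real vSANs use KiB-scale blocks


class BlockStore:
    """Toy model of block-level dedup with reference counting (hypothetical API)."""

    def __init__(self):
        self.blocks = {}     # hash -> raw block data (the "physical" storage)
        self.refcounts = {}  # hash -> number of objects referencing the block
        self.objects = {}    # object name -> ordered list of block hashes (the hash map)

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            if h not in self.blocks:        # new unique block; duplicates are deduped
                self.blocks[h] = block
            self.refcounts[h] = self.refcounts.get(h, 0) + 1
            hashes.append(h)
        self.objects[name] = hashes

    def delete(self, name):
        # Deletion only drops references; no block data is removed here.
        for h in self.objects.pop(name):
            self.refcounts[h] -= 1

    def gc(self):
        # Background "vSAN walk": reclaim blocks whose refcount reached zero.
        dead = [h for h, count in self.refcounts.items() if count == 0]
        for h in dead:
            del self.blocks[h], self.refcounts[h]
        return len(dead)


store = BlockStore()
store.write("vm-a", b"AAAABBBB")
store.write("vm-b", b"AAAACCCC")  # the "AAAA" block is deduplicated, refcount 2
store.delete("vm-a")              # references dropped; physical blocks untouched
store.gc()                        # frees only "BBBB"; "AAAA" survives via vm-b
```

Note that `delete()` alone frees nothing physical: the shared block survives because the second VM still references it, and only the unshared block is reclaimed on the next `gc()` pass. This mirrors why the UI can show space as freed before tier utilization actually drops.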
Tenant Deletion Scenarios

When deleting a tenant:

- All tenant VMs, drives, and metadata references are removed
- Tenant storage tiers are dereferenced from the parent vSAN
- The LXC container filesystem and allocated storage are cleaned up
- Hash map entries for all tenant blocks are updated
- Background processes handle block cleanup across all tenant data

Key Considerations

VM/Drive Deletion Within Tenants

When deleting VMs or drives inside a tenant:

- References are removed from the tenant's local hash map
- Parent vSAN hash map entries are also updated
- Background processes handle the actual block cleanup

Tenant vs. Parent vSAN Relationship

- Tenants operate as LXC containers within the parent vSAN; they don't have separate vSANs
- Tenant storage is allocated from parent vSAN tiers through container filesystem layers
- Block deduplication works across tenant boundaries and between containers
- The parent system manages all physical storage cleanup for tenant containers

Snapshots and Tenant Deletion

- Deleting a tenant also deletes its local VM snapshots
- The tenant remains in parent system snapshots taken while it existed
- System snapshots can prevent immediate storage reclamation
- A tenant can be restored from system snapshots even after deletion

Shared Objects and File Sharing

- Files shared between parent and tenant may maintain references
- Shared VM snapshots can prevent complete storage cleanup
- Files provided to tenants create additional block references
- Consider shared objects when estimating storage reclamation

Network Resource Cleanup

When a tenant is deleted, associated network resources are automatically cleaned up:

- IP addresses assigned to tenant VMs and networks are released back to the pool
- Network blocks (subnets) allocated to the tenant are unassigned and returned to the available inventory
- Network interfaces and routing configurations are automatically removed
- DNS entries and network policies associated with the tenant are cleaned up

Advanced Scenarios

Nested Tenant Deletion

For tenants that host their own sub-tenants, sub-tenants
operate as nested LXC containers:

- Sub-tenant deletion follows the same reference counting within the container hierarchy
- The parent tenant's container manages sub-tenant storage cleanup
- Multiple layers of containerization and reference counting may apply
- Cleanup processes work from the innermost container to the outermost

Tenant Restore Impact on Deletion

Restoring a deleted tenant from a system snapshot recreates its references:

- Previously "deleted" blocks may become active again
- Storage usage may increase when restoring tenants
- Background cleanup processes adapt to the restored references

Monitoring

Parent System Monitoring

- Storage dashboard for overall tier utilization
- vSAN diagnostics for background operation status
- System logs for tenant deletion and cleanup details
- Tenant statistics showing storage consumption trends

Tenant-Level Monitoring (Before Deletion)

- Tenant dashboard for internal storage usage
- Tenant history for consumption statistics
- Internal vSAN statistics within the tenant environment

Post-Deletion Verification

- Storage tier utilization should decrease over time
- vSAN walk statistics show cleanup progress
- Reference counts can be verified through vSAN diagnostics