# Transitioning From VMware
## Overview

KorGrid runs on top of a unified hyperconverged platform combining compute, storage, and networking. Unlike VMware's modular ecosystem, our hypervisor operates as a single OS with built-in redundancy. This guide maps VMware concepts to our platform's equivalents to ease your migration.

If you're a VMware user considering a shift to KorGrid, this guide will help you understand the differences in architecture, terminology, and workflows. VMware's vSphere and ESXi provide a robust virtualization platform, often paired with vSAN, NSX, or vCenter for storage, networking, and management. KorGrid, by contrast, integrates these capabilities into a single, software-defined data center operating system. This document outlines the key distinctions and offers practical steps to migrate your VMware workloads.

## Prerequisites

- Familiarity with VMware vSphere, ESXi, and optionally vSAN or NSX
- A backup of your VMware VMs and configurations before migration

## Key Differences

### Architecture

| Aspect | VMware (vSphere/ESXi) | KorGrid |
| --- | --- | --- |
| Core design | Separate hypervisor (ESXi) with optional vCenter for management; add-ons like vSAN and NSX extend functionality | Single OS integrating virtualization, storage (vSAN), and networking; no separate management layer required |
| Deployment | Install ESXi on bare metal, then configure vCenter, vSAN, etc., separately | Install the OS on nodes, creating a unified system from the start |
| Scalability | Scale compute and storage independently with additional licenses (e.g., vSAN) | Scale out with nodes (compute, storage, or both) within a single vSAN instance |
| Multi-tenancy | Limited native multi-tenancy; requires vCloud Director or manual segmentation | Built-in nested multi-tenancy with isolated tenants and sub-tenants |

**Takeaway:** Our platform eliminates the need for separate components like vCenter or NSX by embedding everything into one system, simplifying deployment and management.

### Terminology

| VMware term | KorGrid term | Notes |
| --- | --- | --- |
| ESXi host | Node | A physical server or VDC |
| vCenter | KorGrid UI | The web-based UI runs on controller nodes (node 1 & 2) for system-wide management |
| Cluster | Cluster | Groups of nodes with similar hardware |
| vSAN | vSAN | vSAN is integral, pooling storage across all nodes automatically |
| Datastore | vSAN storage tiers | KorGrid organizes storage into tiers within the vSAN |
| Virtual switch | Physical network | KorGrid presents the physical network uplinks across multiple nodes as a logical switch referred to as a "physical network" |
| VM | VM | Virtual machines are a similar concept |
| dvPortGroup | External network | Virtual networks that can represent a layer 2 network (e.g., a VLAN) that a VM's vNIC attaches to; they can also provide layer 3 services (routing, DNS, DHCP, BGP/OSPF, VPN) |
| Resource pool | Tenant | Tenants are isolated virtual data centers with their own resources and management |

**Takeaway:** While some terms overlap (e.g., VM, vSAN), concepts like "tenants" and "internal networks" offer more integrated and flexible options than their VMware equivalents.

### Networking

| Feature | VMware | KorGrid |
| --- | --- | --- |
| Networking | vSphere Distributed Switch or Standard Switch; NSX for advanced features | Built-in layer 2/3 networking with core fabric and external networks |
| VLANs | Configured via virtual switches | Configured on physical networks or internal networks |
| Redundancy | NIC teaming or LACP on switches | Core fabric networks (dedicated L2) and bonded external networks |

**Key difference:** KorGrid runs jumbo frames (MTU 9192) on core fabric networks for vSAN and node communication, unlike VMware's optional jumbo frame support.

### Storage

- **VMware:** vSAN is an optional add-on requiring specific licensing and configuration. Datastores are managed separately.
- **KorGrid:** vSAN is the default storage system, pooling all node drives into tiers. No separate datastore creation is needed; storage is automatically available to VMs and tenants.

**Migration tip:** Export VMware VMs as OVF/OVA files.

### Management

- **VMware:** vCenter provides a centralized UI, with command-line options via PowerCLI.
- **KorGrid:** A web UI runs on the controller nodes, with API access for automation.

**Takeaway:** KorGrid's UI is more lightweight and always available, avoiding the need for a separate vCenter VM or appliance.
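As a concrete illustration of the migration tip above, the sketch below stages an export with VMware's `ovftool`. The vCenter address, datacenter name, and VM name are placeholders, not real inventory paths; substitute your own before running, and power the VM off first for a consistent export.

```shell
# All names here (vcenter.example.com, DC1, web01) are placeholders --
# substitute the values from your own vSphere inventory.
VM_NAME="web01"
OVA_OUT="${VM_NAME}.ova"
SOURCE="vi://administrator@vcenter.example.com/DC1/vm/${VM_NAME}"
echo "${SOURCE} -> ${OVA_OUT}"

# Uncomment on a workstation with ovftool installed and vCenter access:
# ovftool "${SOURCE}" "./${OVA_OUT}"
```

The resulting `.ova` is a single portable archive, which is generally easier to move and import than a multi-file OVF directory.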
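To confirm that a core fabric link really passes jumbo frames end to end, a common check is a don't-fragment ping sized to the MTU. The hostname `fabric-node-1` is a placeholder; the 28-byte deduction covers the IPv4 and ICMP headers.

```shell
# MTU 9192 minus 20 bytes (IPv4 header) and 8 bytes (ICMP header)
# leaves the largest ICMP payload that fits in one jumbo frame.
MTU=9192
PAYLOAD=$((MTU - 28))
echo "$PAYLOAD"   # 9164

# -M do forbids fragmentation, so the ping fails if any hop's MTU is smaller.
# Uncomment on a node attached to the core fabric (hostname is a placeholder):
# ping -M do -s "$PAYLOAD" -c 3 fabric-node-1
```

If this ping fails while a default-sized ping succeeds, a switch or NIC in the path is not configured for jumbo frames.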
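The management section notes that the web UI is complemented by API access for automation. The endpoint path and token variable below are assumptions for illustration (consult your system's API reference for the actual layout), but the pattern shown, `curl` with a bearer token against a REST endpoint, is the usual shape of such automation.

```shell
# KORGRID_HOST and the /api/vms path are hypothetical -- check the KorGrid
# API documentation for the real endpoint layout on your release.
KORGRID_HOST="korgrid.example.com"
URL="https://${KORGRID_HOST}/api/vms"
echo "GET ${URL}"

# Uncomment with a valid API token exported as KORGRID_API_TOKEN:
# curl -s -H "Authorization: Bearer ${KORGRID_API_TOKEN}" "${URL}"
```

Because the API runs on the controller nodes alongside the UI, no separate management appliance needs to be deployed or kept patched for automation to work.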