What is VMware Project Pacific and VMware Tanzu


Earlier today at the VMworld keynote, VMware announced a new set of services powered by the latest acquisitions VMware has made (more info about those in my other blog post –> https://msandbu.org/vmware-what-does-the-future-hold-for-them/).

Three of the biggest announcements from VMware so far are:

  • VMware Tanzu – Tanzu covers a suite of upcoming products and features that will be used to build, run, and manage modern applications on Kubernetes, on vSphere.
  • Project Pacific – Essentially fuses vSphere with Kubernetes. It exposes vSphere to developers through Kubernetes and provides a single way to manage containers and VMs within the same platform. It does this by introducing Kubernetes Custom Resource Definitions (CRDs) for the vSphere resources.
  • VMware Tanzu Mission Control – This new service provides a way to manage all Kubernetes clusters – across vSphere, VMware PKS, public clouds, managed services, and packaged distributions – from a single point of control. Or, in simpler terms, a SaaS-based control plane for Kubernetes clusters.

What are the underlying components for Pacific and how does it fit together?

Pacific introduces a new way for developers to consume resources within the datacenter. Pacific is powered by some new components that are now part of the VMware suite. First, there is a new component called the Supervisor Cluster. The Supervisor Cluster is a special kind of Kubernetes cluster, but it uses ESXi hosts as the Kubernetes worker nodes. As part of this cluster there are a few additional components; to begin with, the kubelet now runs directly on ESXi and is called the vSpherelet.

The Supervisor Cluster is a Kubernetes cluster made of ESXi hosts instead of Linux nodes

The second part is that VMware has added a new container runtime to ESXi called CRX. Workloads deployed on the Supervisor each run in their own isolated VM on the hypervisor. The CRX is like a virtual machine that includes a Linux kernel and a minimal container runtime inside the guest. Because this Linux kernel is tightly coupled with the hypervisor, VMware can make a number of optimizations that effectively paravirtualize the container.

VMware has of course benchmarked this to show the difference.

The vSpherelet is the main component on the nodes and provides a number of functions. Its primary functions are to check with the Kubernetes API server to find which Pods should be running on its node, and to report the state of its running Pods back to the API server.

Developers can interact with the Supervisor Cluster directly (which in turn interacts with vCenter) through a Virtual Machine operator that allows Kubernetes users to manage VMs on the Supervisor. This means that we can write deployment specifications in YAML that mix container and VM workloads in a single deployment, sharing the same compute, network, and storage resources, as a source for Infrastructure as Code. The VM operator is an integration with vSphere’s existing virtual machine lifecycle services.
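As a rough sketch of what such a mixed manifest could look like – the VirtualMachine kind, its API group, and its fields are assumptions based on the VM operator concept, since the final schema was not public at the time of writing – a standard container Deployment and a VM can sit side by side in one YAML file:

```yaml
# Hypothetical sketch of a mixed container + VM manifest.
# The Deployment is standard Kubernetes; the VirtualMachine kind,
# apiVersion, and field names below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: team-a            # a Supervisor namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.17
---
apiVersion: vmoperator.vmware.com/v1alpha1   # assumed API group
kind: VirtualMachine
metadata:
  name: legacy-db
  namespace: team-a
spec:
  imageName: centos-7-db-template   # assumed vSphere template name
  className: best-effort-medium     # assumed VM class / resource policy
  powerState: poweredOn
```

Because both objects live in the same Supervisor namespace, they draw from the same compute, network, and storage pool.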

An example showing a combination of Kubernetes, virtual machine, and function deployments

Using Kubernetes as the vSphere API

It is important to understand, however, that the Supervisor Cluster itself does not deliver regular Kubernetes clusters. If we want general-purpose Kubernetes workloads, we can use Guest Clusters. Guest Clusters in vSphere use the open source Cluster API project to lifecycle-manage Kubernetes clusters, which in turn uses the VM operator to manage the VMs that make up a guest. I’ve also seen some comments about where Pivotal and PKS come into play. You can actually run PKS as a workload, using a Guest Cluster inside a Pacific cluster. PKS can essentially run on any cloud or natively on VMware, and is not dependent on Pacific; it remains VMware’s flagship Kubernetes offering.
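Conceptually, asking the Supervisor for a Guest Cluster could look like the sketch below. The kind, API group, and field names here are assumptions that follow the Cluster API model of declarative cluster specs, not a published Pacific schema:

```yaml
# Hypothetical sketch: requesting a general-purpose Guest Cluster
# from the Supervisor. Kind, apiVersion, and fields are assumptions.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster
  namespace: team-a          # a Supervisor namespace
spec:
  distribution:
    version: v1.15           # desired Kubernetes version
  topology:
    controlPlane:
      count: 3               # control-plane VMs
      class: best-effort-small
    workers:
      count: 5               # worker VMs
      class: best-effort-medium
```

Under the hood, Cluster API would reconcile this spec into VirtualMachine objects via the VM operator.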

In this screenshot from one of the sessions, you can see multiple Guest Clusters running within vCenter, defined as different namespaces (which are defined within the Supervisor).

What about Container Registry?

The Harbor image registry is part of the release. We can enable the Container Registry from within vCenter, and each namespace in the Supervisor gets its own project in that shared registry.
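In practice that means a Pod in a given namespace would pull its images from that namespace’s project in the embedded Harbor registry. A sketch, where the registry hostname, project path, and secret name are all assumptions:

```yaml
# Hypothetical sketch: pulling an image from the namespace's Harbor project.
# The registry hostname, project path, and secret name are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: team-a
spec:
  containers:
    - name: app
      # <registry>/<project>/<image>:<tag> – project matches the namespace
      image: harbor.vsphere.local/team-a/app:1.0
  imagePullSecrets:
    - name: team-a-registry-credentials   # assumed pull secret
```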

What about Tanzu Mission Control?

Mission Control is focused on management and deployment of Kubernetes clusters, regardless of where they run, from a single point of control. It can manage Kubernetes running in VMs or on bare metal, in public clouds, or through managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE).

In order to manage an existing Kubernetes cluster, Mission Control generates a YAML manifest specifically for that cluster and displays the kubectl command to apply it. The manifest installs a small set of extensions that connect the cluster to the Cluster Agent service.
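The generated manifest is not published in detail, but based on that description it would create a namespace and an agent workload roughly along these lines; every name, image, and endpoint below is an assumption for illustration:

```yaml
# Hypothetical sketch of what the Mission Control attach manifest
# might create. Namespace, image, and endpoint are assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: vmware-system-tmc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-agent
  namespace: vmware-system-tmc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-agent
  template:
    metadata:
      labels:
        app: cluster-agent
    spec:
      containers:
        - name: agent
          image: tmc.example.com/cluster-agent:latest   # assumed image
          env:
            - name: TMC_ENDPOINT                        # assumed variable
              value: "tanzumissioncontrol.cloud.vmware.com"
```

The agent then phones home to the Cluster Agent service, so no inbound access to the cluster is required.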

It will provide the following capabilities:

  • Centralized lifecycle management for VMware Kubernetes clusters
  • Unified access management. Manage permissions and map teams’ access to clusters from a single location.
  • Cluster health and diagnostics. Monitor the ongoing health of your clusters and identify common issues that may affect their production viability.
  • Security and configuration management. Define policies that are enforced across all clusters and manage the configuration of Kubernetes clusters.
  • Cluster inspections (driven by the open source Sonobuoy project). Schedule and run routine scans to confirm that your clusters are conformant and properly configured.
  • Backup and restore (driven by the open source Velero project). Configure backup and recovery capabilities from a central location. Manage not only the backup of Kubernetes objects, but also their associated persistent volumes.
  • Quota management and resource usage visualization. Assign and manage quotas across your clusters.

You can read more about onboarding Tanzu to setup a new or an existing Kubernetes Cluster –> https://k8s.vmware.com/tanzu-mission-control/clkn/https/pages.cloud.vmware.com/l/338801/2019-08-21/2v77rz/338801/154459/Final_4__Tanzu_Mission_Control_How_to_guide__2.pdf

So can I try either of them yet?

Project Pacific is not yet in public preview, and this article will be updated when I get more information on how to get access to a preview. For Mission Control, you can sign up for a demo and private preview here –> https://k8s.vmware.com/tanzu-mission-control/


About the Author: Marius Sandbu
