Kubebuilder: Extending the k8s API to match applications' needs

Kubernetes is the de-facto platform for automating the management, deployment, and scaling of containerized applications. One of its strongest features is its extensibility, which allows users to customize the system according to their needs. A key mechanism for this extensibility is the Custom Resource Definition (CRD).

What are CRDs?

CRDs allow Kubernetes users to create and manage their own custom resources. A Custom Resource (CR) is essentially a Kubernetes resource that does not exist by default but is defined via a CRD.

In other words, CRDs allow the creation of new resource types that Kubernetes doesn’t know about a priori. These resources behave like Kubernetes’ built-in resources (such as Pods, Services, Deployments), but their structure and functionality are determined by the CRD you create.

How do CRDs Work?

When we define a CRD, Kubernetes extends its API to handle the new resource type. Once a CRD is registered with the Kubernetes API Server, we can create instances of the custom resource just like we would with native resources. The API server will store these objects in Kubernetes’ distributed key-value store, etcd, and manage their lifecycle.

Figure 1: Generic K8s Operator
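
Under the hood, a CRD is itself just a Kubernetes object. As a rough illustration, a minimal sketch of a CRD for the MyApp resource we build later might look like the manifest below; in practice Kubebuilder generates these manifests from our Go types, so we rarely write them by hand, and the schema here is deliberately left open.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must be <plural>.<group>
  name: myapps.application.nbfc.io
spec:
  group: application.nbfc.io
  scope: Namespaced
  names:
    kind: MyApp
    listKind: MyAppList
    singular: myapp
    plural: myapps
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              # keep the spec schema open for this sketch
              x-kubernetes-preserve-unknown-fields: true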

For example, after defining a MyApp CRD, we can create MyApp resources in our cluster, just like we would create Pods or Deployments. There are tools available to facilitate the definition, setup, and management of CRDs and their accompanying components. One such tool is Kubebuilder.

What is Kubebuilder?

Kubebuilder is a powerful framework that makes it easy to build and manage Kubernetes operators and APIs. Developed as part of the Kubernetes SIGs (kubernetes-sigs) community, it is one of the most widely used tools among developers extending Kubernetes.

Basic features

  1. Building Operators: Kubebuilder provides a simple and efficient process for building, deploying, and testing Kubernetes operators.
  2. Building APIs: With Kubebuilder, users can build custom Kubernetes APIs and resources, allowing them to extend Kubernetes with their own functions and resources.
  3. Based on Go: Kubebuilder is written in Go and builds on the Kubernetes Go libraries (client-go and controller-runtime), providing seamless integration with the Go ecosystem.

Now let’s dive into the process of creating our first custom Kubernetes controller using Kubebuilder.

Creating a Custom Kubernetes Controller with Kubebuilder

To get hands-on with Kubebuilder, we’ll create a custom resource called MyApp. This resource represents a simple application comprising multiple pods. We’ll also build a Kubernetes controller that manages the lifecycle of these MyApp instances, ensuring that the desired state of the application is maintained within the cluster.

Prerequisites

  • Go version v1.20.0+
  • Docker version 17.03+
  • Access to a Kubernetes v1.11.3+ cluster

Installation

Let’s install Kubebuilder:

curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && sudo mv kubebuilder /usr/local/bin/
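
A quick way to confirm the binary is installed and on our PATH is to print its version:

kubebuilder version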

Create a project

First, let’s create and navigate into a directory for our project. Then, we’ll initialize it using Kubebuilder:

export GOPATH=$PWD
echo $GOPATH
mkdir $GOPATH/operator
cd $GOPATH/operator
go mod init nbfc.io
kubebuilder init --domain=nbfc.io

Create a new API

kubebuilder create api --group application --version v1alpha1 --kind MyApp \
  --image=ubuntu:latest --image-container-command="sleep,infinity" --image-container-port="22" \
  --run-as-user="1001" --plugins="deploy-image/v1-alpha" --make=false

This command creates a new Kubernetes API with these parameters:

  • --group application: Defines the API group name.
  • --version v1alpha1: Sets the API version.
  • --kind MyApp: The kind (name) of the custom resource.
  • --image=ubuntu:latest: The default container image to use.
  • --image-container-command="sleep,infinity": The command the container runs.
  • --image-container-port="22": Exposes port 22 on the container.
  • --run-as-user="1001": Sets the user ID the container runs as.
  • --plugins="deploy-image/v1-alpha": Uses the deploy-image plugin to scaffold the deployment logic.
  • --make=false: Skips running make generate after scaffolding.

We use these parameters to scaffold a custom resource that already carries the essential deployment settings (container image, port, and run-as-user), making it easier to manage within our Kubernetes environment. Because the deploy-image plugin also generates a controller that knows how to deploy the given image, we get a working baseline without writing that logic from scratch.
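
The plugin also scaffolds a sample manifest under config/samples. Assuming the default deploy-image scaffolding, it should look roughly like the sketch below (we will replace it with our own version later in this post):

apiVersion: application.nbfc.io/v1alpha1
kind: MyApp
metadata:
  name: myapp-sample
spec:
  # values derived from the flags we passed to `kubebuilder create api`
  containerPort: 22
  size: 1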

Install the CRDs into the cluster

make install
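
To confirm that the new resource type is now registered with the API server, we can check, for example:

kubectl get crds
kubectl api-resources --api-group=application.nbfc.io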

For quick feedback and code-level debugging, we can run the controller locally against the cluster:

make run

Create an Application CRD with a custom controller

In this section we build an application description as a CRD, along with its accompanying controller that performs simple operations upon spawning. The application consists of a couple of pods and the controller creates services according to the exposed container ports. The rationale is to showcase how easy it is to define logic that accompanies the spawning of pods.

To create custom pods using the controller, we need to modify the following files:

  • myapp_controller.go
  • myapp_controller_test.go
  • application_v1alpha1_myapp.yaml
  • myapp_types.go

These files live in the project structure that Kubebuilder generated for us:

.
|-- Dockerfile
|-- Makefile
|-- PROJECT
|-- README.md
|-- api
|   `-- v1alpha1
|       |-- groupversion_info.go
|       `-- myapp_types.go
|-- bin
|   |-- controller-gen -> /root/operator/bin/controller-gen-v0.16.1
|   |-- controller-gen-v0.16.1
|   |-- kustomize -> /root/operator/bin/kustomize-v5.4.3
|   `-- kustomize-v5.4.3
|-- cmd
|   `-- main.go
|-- config
|   |-- crd
|   |   |-- bases
|   |   |   `-- application.nbfc.io_myapps.yaml
|   |   |-- kustomization.yaml
|   |   `-- kustomizeconfig.yaml
|   |-- default
|   |   |-- kustomization.yaml
|   |   |-- manager_metrics_patch.yaml
|   |   `-- metrics_service.yaml
|   |-- manager
|   |   |-- kustomization.yaml
|   |   `-- manager.yaml
|   |-- network-policy
|   |   |-- allow-metrics-traffic.yaml
|   |   `-- kustomization.yaml
|   |-- prometheus
|   |   |-- kustomization.yaml
|   |   `-- monitor.yaml
|   |-- rbac
|   |   |-- kustomization.yaml
|   |   |-- leader_election_role.yaml
|   |   |-- leader_election_role_binding.yaml
|   |   |-- metrics_auth_role.yaml
|   |   |-- metrics_auth_role_binding.yaml
|   |   |-- metrics_reader_role.yaml
|   |   |-- myapp_editor_role.yaml
|   |   |-- myapp_viewer_role.yaml
|   |   |-- role.yaml
|   |   |-- role_binding.yaml
|   |   `-- service_account.yaml
|   `-- samples
|       |-- application_v1alpha1_myapp.yaml
|       `-- kustomization.yaml
|-- go.mod
|-- go.sum
|-- hack
|   `-- boilerplate.go.txt
|-- internal
|   `-- controller
|       |-- myapp_controller.go
|       |-- myapp_controller_test.go
|       `-- suite_test.go
`-- test
    |-- e2e
    |   |-- e2e_suite_test.go
    |   `-- e2e_test.go
    `-- utils
        `-- utils.go

Modify myapp_controller.go

This file contains the core logic of our custom controller, where we implement the reconciliation logic to manage the lifecycle of MyApp resources. Specifically, the controller logic:

  • Monitors MyApp resources for any changes
  • Creates, updates, or deletes Pods based on the MyApp specifications
  • Manages Services to ensure connectivity for the Pods
  • Handles resource cleanup when MyApp resources are deleted

package controller

import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/api/meta"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/tools/record"
        applicationv1alpha1 "nbfc.io/api/v1alpha1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
        "sigs.k8s.io/controller-runtime/pkg/log"
)

const myappFinalizer = "application.nbfc.io/finalizer"

// Definitions to manage status conditions
const (
        typeAvailableMyApp = "Available"
        typeDegradedMyApp  = "Degraded"
)

// MyAppReconciler reconciles a MyApp object
type MyAppReconciler struct {
        client.Client
        Scheme   *runtime.Scheme
        Recorder record.EventRecorder
}

//+kubebuilder:rbac:groups=application.nbfc.io,resources=myapps,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=application.nbfc.io,resources=myapps/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=application.nbfc.io,resources=myapps/finalizers,verbs=update
//+kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete

// Reconcile handles the main logic for creating/updating resources based on the MyApp CR
func (r *MyAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        log := log.FromContext(ctx)
        // Fetch the MyApp resource
        myapp := &applicationv1alpha1.MyApp{}
        err := r.Get(ctx, req.NamespacedName, myapp)
        if err != nil {
                if apierrors.IsNotFound(err) {
                        log.Info("myapp resource not found. Ignoring since object must be deleted")
                        return ctrl.Result{}, nil
                }
                log.Error(err, "Failed to get myapp")
                return ctrl.Result{}, err
        }
        // Initialize status conditions if none exist
        if myapp.Status.Conditions == nil || len(myapp.Status.Conditions) == 0 {
                meta.SetStatusCondition(&myapp.Status.Conditions, metav1.Condition{Type: typeAvailableMyApp, Status: metav1.ConditionUnknown, Reason: "Reconciling", Message: "Starting reconciliation"})
                if err = r.Status().Update(ctx, myapp); err != nil {
                        log.Error(err, "Failed to update MyApp status")
                        return ctrl.Result{}, err
                }

                if err := r.Get(ctx, req.NamespacedName, myapp); err != nil {
                        log.Error(err, "Failed to re-fetch myapp")
                        return ctrl.Result{}, err
                }
        }
        // Add finalizer for cleanup during deletion
        if !controllerutil.ContainsFinalizer(myapp, myappFinalizer) {
                log.Info("Adding Finalizer for MyApp")
                if ok := controllerutil.AddFinalizer(myapp, myappFinalizer); !ok {
                        log.Error(err, "Failed to add finalizer into the custom resource")
                        return ctrl.Result{Requeue: true}, nil
                }

                if err = r.Update(ctx, myapp); err != nil {
                        log.Error(err, "Failed to update custom resource to add finalizer")
                        return ctrl.Result{}, err
                }
        }
        // Handle cleanup if MyApp resource is marked for deletion
        isMyAppMarkedToBeDeleted := myapp.GetDeletionTimestamp() != nil
        if isMyAppMarkedToBeDeleted {
                if controllerutil.ContainsFinalizer(myapp, myappFinalizer) {
                        log.Info("Performing Finalizer Operations for MyApp before delete CR")

                        meta.SetStatusCondition(&myapp.Status.Conditions, metav1.Condition{Type: typeDegradedMyApp,
                                Status: metav1.ConditionUnknown, Reason: "Finalizing",
                                Message: fmt.Sprintf("Performing finalizer operations for the custom resource: %s ", myapp.Name)})

                        if err := r.Status().Update(ctx, myapp); err != nil {
                                log.Error(err, "Failed to update MyApp status")
                                return ctrl.Result{}, err
                        }

                        r.doFinalizerOperationsForMyApp(myapp)

                        if err := r.Get(ctx, req.NamespacedName, myapp); err != nil {
                                log.Error(err, "Failed to re-fetch myapp")
                                return ctrl.Result{}, err
                        }

                        meta.SetStatusCondition(&myapp.Status.Conditions, metav1.Condition{Type: typeDegradedMyApp,
                                Status: metav1.ConditionTrue, Reason: "Finalizing",
                                Message: fmt.Sprintf("Finalizer operations for custom resource %s name were successfully accomplished", myapp.Name)})

                        if err := r.Status().Update(ctx, myapp); err != nil {
                                log.Error(err, "Failed to update MyApp status")
                                return ctrl.Result{}, err
                        }

                        log.Info("Removing Finalizer for MyApp after successfully perform the operations")
                        if ok := controllerutil.RemoveFinalizer(myapp, myappFinalizer); !ok {
                                log.Error(err, "Failed to remove finalizer for MyApp")
                                return ctrl.Result{Requeue: true}, nil
                        }

                        if err := r.Update(ctx, myapp); err != nil {
                                log.Error(err, "Failed to remove finalizer for MyApp")
                                return ctrl.Result{}, err
                        }
                }
                return ctrl.Result{}, nil
        }
        // List all Pods matching the MyApp resource
        podList := &corev1.PodList{}
        listOpts := []client.ListOption{
                client.InNamespace(myapp.Namespace),
                client.MatchingLabels(labelsForMyApp(myapp.Name)),
        }
        if err = r.List(ctx, podList, listOpts...); err != nil {
                log.Error(err, "Failed to list pods", "MyApp.Namespace", myapp.Namespace, "MyApp.Name", myapp.Name)
                return ctrl.Result{}, err
        }
        podNames := getPodNames(podList.Items)

        if !equalSlices(podNames, myapp.Status.Nodes) {
                myapp.Status.Nodes = podNames
                if err := r.Status().Update(ctx, myapp); err != nil {
                        log.Error(err, "Failed to update MyApp status")
                        return ctrl.Result{}, err
                }
        }
        // Create or update pods and services based on MyApp spec
        for _, podSpec := range myapp.Spec.Pods {
                foundPod := &corev1.Pod{}
                err = r.Get(ctx, types.NamespacedName{Name: podSpec.Name, Namespace: myapp.Namespace}, foundPod)
                if err != nil && apierrors.IsNotFound(err) {
                        pod := &corev1.Pod{
                                ObjectMeta: metav1.ObjectMeta{
                                        Name:      podSpec.Name,
                                        Namespace: myapp.Namespace,
                                        Labels:    labelsForMyApp(myapp.Name),
                                },
                                Spec: corev1.PodSpec{
                                        Containers: []corev1.Container{{
                                                Name:    podSpec.Name,
                                                Image:   podSpec.Image,
                                                Command: podSpec.Command,
                                                Args:    podSpec.Args,
                                                Ports:   podSpec.ContainerPorts,
                                        }},
                                },
                        }

                        if err := ctrl.SetControllerReference(myapp, pod, r.Scheme); err != nil {
                                log.Error(err, "Failed to set controller reference", "Pod.Namespace", pod.Namespace, "Pod.Name", pod.Name)
                                return ctrl.Result{}, err
                        }

                        log.Info("Creating a new Pod", "Pod.Namespace", pod.Namespace, "Pod.Name", pod.Name)
                        if err = r.Create(ctx, pod); err != nil {
                                log.Error(err, "Failed to create new Pod", "Pod.Namespace", pod.Namespace, "Pod.Name", pod.Name)
                                return ctrl.Result{}, err
                        }
                        // Refetch MyApp after creating the pod
                        if err := r.Get(ctx, req.NamespacedName, myapp); err != nil {
                                log.Error(err, "Failed to re-fetch myapp")
                                return ctrl.Result{}, err
                        }
                        // Update status to reflect successful pod creation
                        meta.SetStatusCondition(&myapp.Status.Conditions, metav1.Condition{Type: typeAvailableMyApp,
                                Status: metav1.ConditionTrue, Reason: "Reconciling",
                                Message: fmt.Sprintf("Pod %s for custom resource %s created successfully", pod.Name, myapp.Name)})

                        if err := r.Status().Update(ctx, myapp); err != nil {
                                log.Error(err, "Failed to update MyApp status")
                                return ctrl.Result{}, err
                        }

                        // Check if the pod exposes any ports and create Services
                        for _, containerPort := range podSpec.ContainerPorts {
                                serviceName := fmt.Sprintf("%s-service", podSpec.Name) // Set a fixed name for the Service
                                service := &corev1.Service{
                                        ObjectMeta: metav1.ObjectMeta{
                                                Name:      serviceName,
                                                Namespace: myapp.Namespace,
                                                Labels:    labelsForMyApp(myapp.Name),
                                        },
                                        Spec: corev1.ServiceSpec{
                                                Selector: labelsForMyApp(myapp.Name),
                                                Ports: []corev1.ServicePort{{
                                                        Name:       containerPort.Name,
                                                        Port:       containerPort.ContainerPort,
                                                        TargetPort: intstr.FromInt(int(containerPort.ContainerPort)),
                                                }},
                                        },
                                }

                                if err := ctrl.SetControllerReference(myapp, service, r.Scheme); err != nil {
                                        log.Error(err, "Failed to set controller reference", "Service.Namespace", service.Namespace, "Service.Name", service.Name)
                                        return ctrl.Result{}, err
                                }

                                log.Info("Creating a new Service", "Service.Namespace", service.Namespace, "Service.Name", service.Name)
                                if err = r.Create(ctx, service); err != nil {
                                        log.Error(err, "Failed to create new Service", "Service.Namespace", service.Namespace, "Service.Name", service.Name)
                                        return ctrl.Result{}, err
                                }
                        }
                } else if err != nil {
                        log.Error(err, "Failed to get Pod")
                        return ctrl.Result{}, err
                }
        }

        return ctrl.Result{}, nil
}

func (r *MyAppReconciler) doFinalizerOperationsForMyApp(cr *applicationv1alpha1.MyApp) {
        // Add the cleanup steps that the finalizer should perform here
        log := log.FromContext(context.Background())
        log.Info("Successfully finalized custom resource")
}

func (r *MyAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
        return ctrl.NewControllerManagedBy(mgr).
                For(&applicationv1alpha1.MyApp{}).
                Owns(&corev1.Pod{}).
                Owns(&corev1.Service{}). // Ensure the controller watches Services
                Complete(r)
}

func labelsForMyApp(name string) map[string]string {
        return map[string]string{"app": "myapp", "myapp_cr": name}
}

func getPodNames(pods []corev1.Pod) []string {
        var podNames []string
        for _, pod := range pods {
                podNames = append(podNames, pod.Name)
        }
        return podNames
}

func equalSlices(a, b []string) bool {
        if len(a) != len(b) {
                return false
        }
        for i := range a {
                if a[i] != b[i] {
                        return false
                }
        }
        return true
}

Modify myapp_controller_test.go

We edit myapp_controller_test.go to add test cases for the reconciliation logic, checking that reconciling a MyApp resource actually creates the expected Pod and status conditions.

package controller

import (
        "context"
        "fmt"
        "os"
        "time"

        //nolint:golint
        . "github.com/onsi/ginkgo/v2"
        . "github.com/onsi/gomega"
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "sigs.k8s.io/controller-runtime/pkg/reconcile"

        applicationv1alpha1 "nbfc.io/api/v1alpha1"
)

var _ = Describe("MyApp controller", func() {
        Context("MyApp controller test", func() {

                const MyAppName = "test-myapp"

                ctx := context.Background()

                namespace := &corev1.Namespace{
                        ObjectMeta: metav1.ObjectMeta{
                                Name:      MyAppName,
                                Namespace: MyAppName,
                        },
                }

                typeNamespaceName := types.NamespacedName{
                        Name:      MyAppName,
                        Namespace: MyAppName,
                }
                myapp := &applicationv1alpha1.MyApp{}

                BeforeEach(func() {
                        By("Creating the Namespace to perform the tests")
                        err := k8sClient.Create(ctx, namespace)
                        Expect(err).To(Not(HaveOccurred()))

                        By("Setting the Image ENV VAR which stores the Operand image")
                        err = os.Setenv("MYAPP_IMAGE", "example.com/image:test")
                        Expect(err).To(Not(HaveOccurred()))

                        By("creating the custom resource for the Kind MyApp")
                        err = k8sClient.Get(ctx, typeNamespaceName, myapp)
                        if err != nil && errors.IsNotFound(err) {
                                // Mock the custom resource in the same way we would
                                // apply the manifest under config/samples to the cluster
                                myapp := &applicationv1alpha1.MyApp{
                                        ObjectMeta: metav1.ObjectMeta{
                                                Name:      MyAppName,
                                                Namespace: namespace.Name,
                                        },
                                        Spec: applicationv1alpha1.MyAppSpec{
                                                Size: 1,
                                                Pods: []applicationv1alpha1.PodSpec{{
                                                        Name:  "test-pod",
                                                        Image: "example.com/image:test",
                                                }},
                                        },
                                }

                                err = k8sClient.Create(ctx, myapp)
                                Expect(err).To(Not(HaveOccurred()))
                        }
                })

                AfterEach(func() {
                        By("removing the custom resource for the Kind MyApp")
                        found := &applicationv1alpha1.MyApp{}
                        err := k8sClient.Get(ctx, typeNamespaceName, found)
                        Expect(err).To(Not(HaveOccurred()))

                        Eventually(func() error {
                                return k8sClient.Delete(context.TODO(), found)
                        }, 2*time.Minute, time.Second).Should(Succeed())

                        // TODO(user): Attention if you improve this code by adding other context test you MUST
                        // be aware of the current delete namespace limitations.
                        // More info: https://book.kubebuilder.io/reference/envtest.html#testing-considerations
                        By("Deleting the Namespace to perform the tests")
                        _ = k8sClient.Delete(ctx, namespace)

                        By("Removing the Image ENV VAR which stores the Operand image")
                        _ = os.Unsetenv("MYAPP_IMAGE") // Remove ENV variable after tests
                })

                It("should successfully reconcile a custom resource for MyApp", func() {
                        By("Checking if the custom resource was successfully created")
                        Eventually(func() error {
                                found := &applicationv1alpha1.MyApp{}
                                return k8sClient.Get(ctx, typeNamespaceName, found)
                        }, time.Minute, time.Second).Should(Succeed())

                        By("Reconciling the custom resource created")
                        myappReconciler := &MyAppReconciler{
                                Client: k8sClient,
                                Scheme: k8sClient.Scheme(),
                        }

                        _, err := myappReconciler.Reconcile(ctx, reconcile.Request{
                                NamespacedName: typeNamespaceName,
                        })
                        Expect(err).To(Not(HaveOccurred()))

                        By("Checking if Deployment was successfully created in the reconciliation")
                        Eventually(func() error {
                                found := &appsv1.Deployment{}
                                return k8sClient.Get(ctx, typeNamespaceName, found)
                        }, time.Minute, time.Second).Should(Succeed())

                        By("Checking the latest Status Condition added to the MyApp instance")
                        Eventually(func() error {
                                if myapp.Status.Conditions != nil &&
                                        len(myapp.Status.Conditions) != 0 {
                                        latestStatusCondition := myapp.Status.Conditions[len(myapp.Status.Conditions)-1]
                                        expectedLatestStatusCondition := metav1.Condition{
                                                Type:   typeAvailableMyApp,
                                                Status: metav1.ConditionTrue,
                                                Reason: "Reconciling",
                                                Message: fmt.Sprintf(
                                                        "Deployment for custom resource (%s) with %d replicas created successfully",
                                                        myapp.Name,
                                                        myapp.Spec.Size),
                                        }
                                        if latestStatusCondition != expectedLatestStatusCondition {
                                                return fmt.Errorf("The latest status condition added to the MyApp instance is not as expected")
                                        }
                                }
                                return nil
                        }, time.Minute, time.Second).Should(Succeed())
                })
        })
})

Modify application_v1alpha1_myapp.yaml

This YAML file defines a sample instance of our custom resource. Ensure it reflects the structure and default values we want for MyApp.

apiVersion: application.nbfc.io/v1alpha1
kind: MyApp
metadata:
  name: myapp-sample # Name of the MyApp resource instance
spec:
  size: 2
  # List of pods to be deployed as part of this MyApp resource
  pods:
    - name: pod2
      image: ubuntu:latest
      command: ["sleep"]
      args: ["infinity"]
    - name: pod1
      image: nginx:latest
      containerPorts:
        - name: http
          containerPort: 80
    - name: pod3
      image: debian:latest
      command: ["sleep"]
      args: ["infinity"]

Modify myapp_types.go

This file defines the schema and validation for our custom resource. Ensure it aligns with the desired specification and status definitions for MyApp.

package v1alpha1

import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MyAppSpec defines the desired state of MyApp
type MyAppSpec struct {
        Size          int32     `json:"size,omitempty"`
        ContainerPort int32     `json:"containerPort,omitempty"`
        Pods          []PodSpec `json:"pods,omitempty"`
}

// PodSpec defines the desired state of a Pod
type PodSpec struct {
        Name           string                 `json:"name"`
        Image          string                 `json:"image"`
        Command        []string               `json:"command,omitempty"`
        Args           []string               `json:"args,omitempty"`
        ContainerPorts []corev1.ContainerPort `json:"containerPorts,omitempty"`
}

// MyAppStatus defines the observed state of MyApp
type MyAppStatus struct {
        Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
        Nodes      []string           `json:"nodes,omitempty"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// MyApp is the Schema for the myapps API
type MyApp struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   MyAppSpec   `json:"spec,omitempty"`
        Status MyAppStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// MyAppList contains a list of MyApp
type MyAppList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`
        Items           []MyApp `json:"items"`
}

func init() {
        SchemeBuilder.Register(&MyApp{}, &MyAppList{})
}

After making these changes, we run make again so that the generated files are updated, and restart the controller:

make run

Next, we deploy the sample manifest (under config/samples) to our Kubernetes cluster:

kubectl apply -f config/samples/application_v1alpha1_myapp.yaml

Then we check that the pods are running:

kubectl get pods -A 
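
Our controller also creates a Service for each exposed container port (named <pod-name>-service in the reconciler above), so we can verify those as well:

kubectl get services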

Building Kubernetes controllers with Kubebuilder and CRDs opens up a new level of flexibility and extensibility in the Kubernetes platform. Through the process above, we created a custom resource (MyApp) and deployed a controller that manages the lifecycle of that resource within the cluster.

Based on the above process, we built a simple Etherpad installation demo, available to try on Killercoda. Stay tuned for more k8s-related posts and tricks!