Kubernetes Operators Part 4: Time to get real
Now we are getting serious: an operator that automatically deploys a specific piece of software with several components, and that can be triggered by a client-facing onboarding app. What this involves:
- Deploying the application in a very basic package version
- Letting a plain API call (curl) create the instance in the cluster
- Adding additional app profiles for more elaborate packages
- Adding prewarmed pools so that client onboarding goes faster
All of this is definitely too much for one article, so we start small again and take it from there :)
Project Bootstrapping
Again, we start our new project by initializing kubebuilder
kubebuilder init --domain bnerd-operator.local --repo client-app
followed by building a basic API with
kubebuilder create api \
--group apps \
--version v1alpha1 \
--kind ClientApp \
--resource \
--controller
With that in place, we start with a very simple app deployed from its official Helm chart - no additional components like Redis or PostgreSQL yet. Step one for this article: make the operator deploy a minimal app version.
Defining the ClientApp
As always, we first need to update the type definitions and add some logic to our controller code. What we need for the minimal setup:
- The hostname where the client app will be reachable
- A reference to a Secret that holds the admin password (never put passwords directly in YAML!)
- The Helm chart version to use
clientapp_types.go
type ClientAppSpec struct {
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:MinLength=1
	Hostname string `json:"hostname"`

	// +kubebuilder:validation:Required
	// +kubebuilder:validation:MinLength=1
	AdminSecret string `json:"adminSecret"`

	// +optional
	// +kubebuilder:default="5.5.2"
	ChartVersion string `json:"chartVersion,omitempty"`

	// +optional
	IngressClassName string `json:"ingressClassName,omitempty"`
}
We’ll also add some kubebuilder markers on the ClientApp struct, so kubectl shows us useful info:
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:shortName=ca;cap;clientapps
// +kubebuilder:printcolumn:name="Hostname",type=string,JSONPath=".spec.hostname"
// +kubebuilder:printcolumn:name="Ready",type=string,JSONPath=".status.conditions[?(@.type=='Ready')].status"
// +kubebuilder:printcolumn:name="Age",type=date,JSONPath=".metadata.creationTimestamp"
type ClientApp struct { … }
Now kubectl get clientapps will show:
NAME               HOSTNAME                READY   AGE
clientapp-sample   clientapp.example.com   True    5m
Since we want to use Helm, we need its Go library before we can start on the reconciliation logic:
go get helm.sh/helm/v3@latest
go mod tidy
This gives us programmatic access to Helm's actions (install, upgrade, uninstall, and more) directly from Go. 🎉
Define the controller
Let’s now dig into our controller file.
The first thing to adapt is our ClientAppReconciler struct. client.Client is the controller-runtime client provided by default (the bit that lets us read and write any Kubernetes resource); in addition we need a RestConfig field holding the raw Kubernetes REST configuration, because the Helm SDK needs it to deploy into our cluster. The struct therefore changes as follows:
type ClientAppReconciler struct {
	client.Client
	Scheme     *runtime.Scheme
	RestConfig *rest.Config
}
Right above our Reconcile function, markers declare the permissions the operator needs:
// +kubebuilder:rbac:groups=apps.bnerd-operator.local,resources=clientapps,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps.bnerd-operator.local,resources=clientapps/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps.bnerd-operator.local,resources=clientapps/finalizers,verbs=update
// +kubebuilder:rbac:groups="",resources=secrets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=configmaps;serviceaccounts;services;persistentvolumeclaims,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps,resources=deployments;statefulsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=batch,resources=jobs;cronjobs,verbs=get;list;watch;create;update;patch;delete
make manifests will turn these into a ClusterRole YAML automatically.
The Reconcile function itself looks like this:
func (r *ClientAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := logf.FromContext(ctx)

	instance := &appsv1alpha1.ClientApp{}
	if err := r.Get(ctx, req.NamespacedName, instance); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	if !instance.DeletionTimestamp.IsZero() {
		return r.handleDeletion(ctx, instance)
	}

	if !controllerutil.ContainsFinalizer(instance, clientAppFinalizer) {
		controllerutil.AddFinalizer(instance, clientAppFinalizer)
		if err := r.Update(ctx, instance); err != nil {
			return ctrl.Result{}, fmt.Errorf("failed to add finalizer: %w", err)
		}
		return ctrl.Result{Requeue: true}, nil
	}

	adminPassword, err := r.getAdminPassword(ctx, instance)
	if err != nil {
		log.Error(err, "Failed to fetch admin password")
		if updateErr := r.updateCondition(ctx, instance, metav1.Condition{
			Type:    "Ready",
			Status:  metav1.ConditionFalse,
			Reason:  "AdminSecretUnavailable",
			Message: err.Error(),
		}); updateErr != nil {
			log.Error(updateErr, "Failed to update status condition")
		}
		return ctrl.Result{}, err
	}

	values := map[string]interface{}{
		// we are using Nextcloud as an example, so "nextcloud" is the chart-specific top-level values key
		"nextcloud": map[string]interface{}{
			"host":     instance.Spec.Hostname,
			"password": adminPassword,
		},
	}
	if instance.Spec.IngressClassName != "" {
		values["ingress"] = map[string]interface{}{
			"enabled":   true,
			"className": instance.Spec.IngressClassName,
		}
	}

	if err := r.reconcileHelmRelease(ctx, instance, values); err != nil {
		log.Error(err, "Failed to reconcile Helm release")
		if updateErr := r.updateCondition(ctx, instance, metav1.Condition{
			Type:    "Ready",
			Status:  metav1.ConditionFalse,
			Reason:  "HelmReconcileFailed",
			Message: err.Error(),
		}); updateErr != nil {
			log.Error(updateErr, "Failed to update status condition")
		}
		return ctrl.Result{}, err
	}

	if err := r.updateCondition(ctx, instance, metav1.Condition{
		Type:    "Ready",
		Status:  metav1.ConditionTrue,
		Reason:  "HelmReleaseReconciled",
		Message: "ClientApp Helm release reconciled successfully",
	}); err != nil {
		log.Error(err, "Failed to update status condition")
	}

	log.Info("Reconciliation complete", "hostname", instance.Spec.Hostname)
	return ctrl.Result{RequeueAfter: requeueInterval}, nil
}
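The updateCondition helper used above is not shown in the article; a minimal sketch (an assumption on my part, relying on the Conditions slice in our status and on meta.SetStatusCondition from k8s.io/apimachinery) could look like this:

```go
// updateCondition records a condition on the ClientApp's status subresource.
// This is a sketch - error-conflict retries and patch-based updates are
// left out for brevity.
func (r *ClientAppReconciler) updateCondition(ctx context.Context, instance *appsv1alpha1.ClientApp, cond metav1.Condition) error {
	cond.ObservedGeneration = instance.Generation
	// SetStatusCondition adds the condition or updates it in place, only
	// bumping LastTransitionTime when the status value actually changes.
	meta.SetStatusCondition(&instance.Status.Conditions, cond)
	return r.Status().Update(ctx, instance)
}
```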
Finalizers
If you scan through the code above, you will stumble across finalizers - so let’s have a quick look at what these are.
A finalizer is a string you add to a resource’s metadata. While that string is present, Kubernetes will not actually delete the resource — it will only set the DeletionTimestamp. This gives the controller a chance to run clean-up logic first (in our case: helm uninstall). The deletion function would then look as follows:
func (r *ClientAppReconciler) handleDeletion(ctx context.Context, instance *appsv1alpha1.ClientApp) (ctrl.Result, error) {
	releaseName := "clientapp-" + instance.Name
	actionConfig, err := r.buildActionConfig(ctx, instance.Namespace)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Run helm uninstall; a missing release is fine - nothing left to clean up
	uninstall := action.NewUninstall(actionConfig)
	if _, err := uninstall.Run(releaseName); err != nil && !errors.Is(err, helmdriver.ErrReleaseNotFound) {
		return ctrl.Result{}, err
	}

	// Remove the finalizer - now Kubernetes will delete the resource
	controllerutil.RemoveFinalizer(instance, clientAppFinalizer)
	return ctrl.Result{}, r.Update(ctx, instance)
}
Installing & upgrading via the Helm Go SDK
The Helm SDK follows the same install-vs-upgrade decision you would make manually. We check release history to decide which action to take:
func (r *ClientAppReconciler) reconcileHelmRelease(
	ctx context.Context,
	instance *appsv1alpha1.ClientApp,
	values map[string]interface{},
) error {
	releaseName := "clientapp-" + instance.Name
	actionConfig, err := r.buildActionConfig(ctx, instance.Namespace)
	if err != nil {
		return err
	}

	// Check the release history to decide between install and upgrade
	hist := action.NewHistory(actionConfig)
	hist.Max = 1
	_, histErr := hist.Run(releaseName)
	releaseExists := histErr == nil

	chrt, err := r.loadChart(instance.Spec.ChartVersion)
	if err != nil {
		return err
	}

	if !releaseExists {
		install := action.NewInstall(actionConfig)
		install.ReleaseName = releaseName
		install.Namespace = instance.Namespace
		_, err = install.Run(chrt, values)
		return err
	}

	upgrade := action.NewUpgrade(actionConfig)
	upgrade.Namespace = instance.Namespace
	_, err = upgrade.Run(releaseName, chrt, values)
	return err
}
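The loadChart helper referenced above is not shown in the article. One possible sketch resolves the chart through the Helm SDK's ChartPathOptions; the Nextcloud repo URL and chart name are assumptions for this example:

```go
// loadChart downloads the Nextcloud chart at the requested version and
// parses it. Sketch only - repo URL and chart name are assumed here.
func (r *ClientAppReconciler) loadChart(version string) (*chart.Chart, error) {
	cpo := action.ChartPathOptions{
		RepoURL: "https://nextcloud.github.io/helm/",
		Version: version,
	}
	settings := cli.New()
	// LocateChart downloads the chart archive into Helm's local cache
	// and returns the path to the .tgz file.
	path, err := cpo.LocateChart("nextcloud", settings)
	if err != nil {
		return nil, err
	}
	return loader.Load(path)
}
```

In a production operator you would likely cache the loaded chart between reconciles instead of re-resolving it every time.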
Bridging Helm and controller-runtime — the RESTClientGetter
The Helm SDK needs a RESTClientGetter to talk to the cluster. While the operator already has the *rest.Config we added to the struct earlier, we need a small adapter to bring the two together:
type operatorRESTClientGetter struct {
	config    *rest.Config
	namespace string
}

func (g *operatorRESTClientGetter) ToRESTConfig() (*rest.Config, error) {
	return g.config, nil
}

func (g *operatorRESTClientGetter) ToDiscoveryClient() (discovery.CachedDiscoveryInterface, error) {
	disc, err := discovery.NewDiscoveryClientForConfig(g.config)
	if err != nil {
		return nil, err
	}
	return memory.NewMemCacheClient(disc), nil
}

func (g *operatorRESTClientGetter) ToRESTMapper() (apimeta.RESTMapper, error) {
	dc, err := g.ToDiscoveryClient()
	if err != nil {
		return nil, err
	}
	return restmapper.NewDeferredDiscoveryRESTMapper(dc), nil
}

func (g *operatorRESTClientGetter) ToRawKubeConfigLoader() clientcmd.ClientConfig {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["default"] = &clientcmdapi.Cluster{
		Server:                   g.config.Host,
		CertificateAuthorityData: g.config.TLSClientConfig.CAData,
	}
	cfg.AuthInfos["default"] = &clientcmdapi.AuthInfo{
		Token:                 g.config.BearerToken,
		TokenFile:             g.config.BearerTokenFile,
		ClientCertificateData: g.config.TLSClientConfig.CertData,
		ClientKeyData:         g.config.TLSClientConfig.KeyData,
	}
	cfg.Contexts["default"] = &clientcmdapi.Context{
		Cluster:   "default",
		AuthInfo:  "default",
		Namespace: g.namespace,
	}
	cfg.CurrentContext = "default"

	return clientcmd.NewDefaultClientConfig(*cfg, &clientcmd.ConfigOverrides{
		Context: clientcmdapi.Context{Namespace: g.namespace},
	})
}
This is the glue that lets Helm work entirely in-process, without ever calling out to a helm binary.
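With the getter in place, the buildActionConfig helper used by the reconcile and deletion code can be sketched roughly like this (the Secret storage driver and the logging wiring are my assumptions, not something prescribed by the article):

```go
// buildActionConfig prepares a Helm action.Configuration scoped to the
// given namespace, backed by the operator's own REST config.
func (r *ClientAppReconciler) buildActionConfig(ctx context.Context, namespace string) (*action.Configuration, error) {
	log := logf.FromContext(ctx)
	getter := &operatorRESTClientGetter{config: r.RestConfig, namespace: namespace}

	actionConfig := new(action.Configuration)
	// "secret" tells Helm to store release state in Secrets (Helm's default driver).
	if err := actionConfig.Init(getter, namespace, "secret", func(format string, v ...interface{}) {
		log.V(1).Info(fmt.Sprintf(format, v...))
	}); err != nil {
		return nil, err
	}
	return actionConfig, nil
}
```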
Wiring it all together in main.go
cmd/main.go is the entry point. The only change needed is to pass RestConfig to the reconciler:
if err := (&controller.ClientAppReconciler{
	Client:     mgr.GetClient(),
	Scheme:     mgr.GetScheme(),
	RestConfig: mgr.GetConfig(),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "Failed to create controller", "controller", "ClientApp")
	os.Exit(1)
}
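For completeness, SetupWithManager is along the lines of what kubebuilder scaffolds. The Owns watch on Secrets is my addition and an assumption - it makes the controller reconcile whenever a Secret it owns changes, which becomes useful once the operator creates Secrets itself:

```go
// SetupWithManager registers the controller with the manager, watching
// ClientApp resources and any Secrets owned by them.
func (r *ClientAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1alpha1.ClientApp{}).
		Owns(&corev1.Secret{}).
		Complete(r)
}
```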
And off we go :)
Deploying the operator to our cluster
Let’s finally see the operator in action within our cluster and check what it deploys for us.
Since we made quite a few changes, the first thing to do is update the manifests and build our operator:
# Regenerate CRD YAML + RBAC ClusterRole from markers
make manifests
# Regenerate DeepCopy methods (needed after struct changes)
make generate
# Verify the code compiles
go build ./...
# Run unit tests
make test
Let’s now get our deployment on its way by running
make install
docker build -t client-app-operator:v1 .
make deploy IMG=client-app-operator:v1
And there we have the operator in our cluster

as well as our CRD

In order to create a new instance, we first need to generate the admin secret by running
kubectl create secret generic my-admin-secret --from-literal=adminPassword=changeme
then we can use kubectl proxy to create an instance via curl:
kubectl proxy --port=8080

curl -X POST http://localhost:8080/apis/apps.bnerd-operator.local/v1alpha1/namespaces/default/clientapps \
  -H "Content-Type: application/json" \
  -d '{
    "apiVersion": "apps.bnerd-operator.local/v1alpha1",
    "kind": "ClientApp",
    "metadata": {
      "name": "my-instance",
      "namespace": "default"
    },
    "spec": {
      "hostname": "my-instance.127.0.0.1.nip.io",
      "adminSecret": "my-admin-secret",
      "chartVersion": "9.0.3",
      "ingressClassName": "traefik"
    }
  }'
There we have our instance 🎉
BUT this is not really the automatic way, right? So there are two more things to tackle: getting rid of the need to run kubectl proxy, and letting the operator handle the secret generation automatically.
Adding an HTTP API
In order to get rid of the kubectl proxy step, we will add a lightweight HTTP API server built into the operator itself, letting us manage instances with plain curl.
The server runs on port 8090 alongside the controller. It exposes four endpoints:
| Method | Path | Description |
|---|---|---|
| POST | /instances | Create a new ClientApp |
| GET | /instances | List all ClientApps |
| GET | /instances/{name} | Get a specific ClientApp |
| DELETE | /instances/{name} | Delete a ClientApp |
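To give an idea of what this server can look like, here is a sketch of the POST /instances handler: it decodes the small JSON payload from the curl examples into a ClientApp and creates it with a controller-runtime client. The apiServer type, the hard-coded default namespace, and the exact field set are assumptions:

```go
// createRequest mirrors the JSON body accepted by POST /instances.
type createRequest struct {
	Name             string `json:"name"`
	Hostname         string `json:"hostname"`
	AdminSecret      string `json:"adminSecret,omitempty"`
	IngressClassName string `json:"ingressClassName,omitempty"`
}

// apiServer is assumed to hold a controller-runtime client (s.client).
func (s *apiServer) handleCreate(w http.ResponseWriter, r *http.Request) {
	var req createRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	app := &appsv1alpha1.ClientApp{
		ObjectMeta: metav1.ObjectMeta{Name: req.Name, Namespace: "default"},
		Spec: appsv1alpha1.ClientAppSpec{
			Hostname:         req.Hostname,
			AdminSecret:      req.AdminSecret,
			IngressClassName: req.IngressClassName,
		},
	}
	if err := s.client.Create(r.Context(), app); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(app)
}
```

The server itself can be started alongside the manager, for example as a goroutine registered via mgr.Add, so it shares the operator's lifecycle.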
With the updated operator in the cluster, we can now just run
curl -X POST http://localhost:8090/instances \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-instance",
    "hostname": "my-instance.local",
    "adminSecret": "my-admin-secret",
    "ingressClassName": "traefik"
  }'
to create an instance. Way easier for playing around further. 😅
Let the operator auto-generate our admin passwords
We really do not want to manually create Secrets before spinning up an instance - this is something the operator should do for us as well. What we want it to do:
- Generate a cryptographically random 32-character password using crypto/rand
- Create a Secret named <instance-name>-admin-secret in the same namespace
- Set the Secret’s owner reference to the ClientApp, so it is automatically garbage-collected when the instance is deleted
- Store the Secret name and the plain-text password in .status, so onboarding tools can retrieve them without reading Secrets directly
This involves the following code changes:
clientapp_types.go
Two changes here: the spec's adminSecret field becomes optional (its Required/MinLength markers are dropped), and the status struct gains fields for the generated credentials:
type ClientAppStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`

	// +optional
	AdminSecretName string `json:"adminSecretName,omitempty"`

	// +optional
	AdminPassword string `json:"adminPassword,omitempty"`
}
and add a print column so the Secret name appears when running kubectl get clientapps.
// +kubebuilder:printcolumn:name="Admin Secret",type=string,JSONPath=".status.adminSecretName"
clientapp_controller.go
Instead of retrieving the password via getAdminPassword, the reconcile loop now calls a new function that checks whether the Secret already exists. If it does, it returns the stored password; if not, it generates a fresh one and creates the Secret with an owner reference:
func (r *ClientAppReconciler) ensureAdminSecret(ctx context.Context, instance *appsv1alpha1.ClientApp) (secretName, password string, err error) {
	secretName = instance.Spec.AdminSecret
	if secretName == "" {
		secretName = instance.Name + "-admin-secret"
	}

	secret := &corev1.Secret{}
	getErr := r.Get(ctx, types.NamespacedName{Name: secretName, Namespace: instance.Namespace}, secret)
	if getErr == nil {
		return secretName, string(secret.Data["adminPassword"]), nil
	}
	// apierrors is k8s.io/apimachinery/pkg/api/errors
	if !apierrors.IsNotFound(getErr) {
		return "", "", getErr
	}
	// A user-provided Secret must already exist - we only generate our own
	if instance.Spec.AdminSecret != "" {
		return "", "", fmt.Errorf("secret %q not found", secretName)
	}

	pw, err := generatePassword(32)
	if err != nil {
		return "", "", err
	}
	newSecret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: secretName, Namespace: instance.Namespace},
		StringData: map[string]string{"adminPassword": pw},
	}
	// Owner reference: the Secret is garbage-collected with the ClientApp
	if err := controllerutil.SetControllerReference(instance, newSecret, r.Scheme); err != nil {
		return "", "", err
	}
	if err := r.Create(ctx, newSecret); err != nil {
		return "", "", err
	}
	return secretName, pw, nil
}
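The generatePassword helper called above is not shown either; a sketch using crypto/rand that picks every character uniformly from a fixed alphabet might look like this:

```go
import (
	"crypto/rand"
	"math/big"
)

// generatePassword draws each character uniformly at random from a fixed
// alphabet using crypto/rand, so the result is cryptographically secure.
func generatePassword(length int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	out := make([]byte, length)
	for i := range out {
		// rand.Int returns a uniform random value in [0, len(alphabet))
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		out[i] = alphabet[n.Int64()]
	}
	return string(out), nil
}
```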
And there we go - we can now create a fully working instance with a single curl call:
curl -X POST http://localhost:8090/instances \
-H "Content-Type: application/json" \
-d '{
"name": "my-instance",
"hostname": "my-instance.local",
"ingressClassName": "traefik"
}'
The POST response from the API server returns the Secret details, and kubectl describe clientapp my-instance will show them as well.
Summing up
We now have an operator that can create basic instances via API call 🚀 But there is still quite some work to do :) Next up in part 5 of this series: Let the operator automatically name the instances in their own namespace & add additional components like Redis & a PostgreSQL database.
header image created by buddy ChatGPT
