Kubernetes Operators Part 5: Enhancing
Our basic operator is up and running, but there is still quite some work to do. In this article, we want to focus on two things: letting the operator generate the instance name and namespace - and adding Redis and PostgreSQL to the picture. Let’s start :)
Automate namespace and name creation
First things first: we want the instances to have haiku-inspired names - and their own namespaces following the same naming convention. What does this mean? An instance called e.g. ca-bold-leaf-j5cvv1 lives in the namespace ca-bold-leaf-j5cvv1. To achieve this, we create a simple name generator function:
var adjectives = []string{
	"bold", "calm", "dark", "fast", "gold", "hard", "kind", "lean",
	// ...
}

var nouns = []string{
	"leaf", "wave", "peak", "dawn", "dusk", "mist", "rain", "snow",
	// ...
}

func generateHaikuName() (string, error) {
	b := make([]byte, 7)
	if _, err := cryptorand.Read(b); err != nil {
		return "", err
	}
	adj := adjectives[int(b[0])%len(adjectives)]
	noun := nouns[int(b[1])%len(nouns)]
	// hex keeps the suffix a valid DNS-1123 label (base64 can emit '-' or '_')
	suffix := hex.EncodeToString(b[2:])[:6]
	return fmt.Sprintf("ca-%s-%s-%s", adj, noun, suffix), nil
}
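Since the generated name doubles as a namespace name, it has to be a valid DNS-1123 label (lowercase alphanumerics and '-', at most 63 characters, alphanumeric at both ends). A small guard worth adding before creating the namespace - a sketch, with isDNS1123Label being our own hypothetical helper:

```go
package main

import (
	"fmt"
	"regexp"
)

// DNS-1123 label: lowercase alphanumerics and '-', must start and
// end with an alphanumeric character, at most 63 characters long.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$`)

func isDNS1123Label(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}

func main() {
	fmt.Println(isDNS1123Label("ca-bold-leaf-j5cvv1")) // true
	fmt.Println(isDNS1123Label("ca-bold-leaf-ab_cd1")) // false: '_' is not allowed
}
```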
When a name is auto-generated, we’ll also create a dedicated Kubernetes namespace with that name before creating the ClientApp CR:
if autoNamespace {
	namespace := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}
	// tolerate an already-existing namespace, fail on anything else
	if err := s.client.Create(r.Context(), namespace); err != nil && !apierrors.IsAlreadyExists(err) {
		// ...error handling...
	}
}
And there we go - each ClientApp and its corresponding resources now live in their own dedicated namespace. 🎉
Having this in place, we can now create new instances with a very minimal curl request
curl -X POST http://localhost:8090/instances \
-H "Content-Type: application/json" \
-d '{"hostname": "my-instance.local", "ingressClassName": "traefik"}'
and the response contains everything needed:
{
  "name": "ca-bold-leaf-j5cvv1",
  "namespace": "ca-bold-leaf-j5cvv1",
  "hostname": "my-instance.local",
  "adminSecret": "ca-bold-leaf-j5cvv1-admin-secret",
  "adminPassword": "abc123...",
  "chartVersion": "5.5.2"
}

Adding Redis to the picture
The basic app deployment works, but for any real-world usage we want a proper cache backend. Our app’s Helm chart already has built-in support for Redis, so we just need to wire it up correctly so that the operator handles everything automatically.
We start by adapting our API. Redis should be enabled by default, so an instance only comes without it when we opt out explicitly via
spec:
  redis:
    enabled: false
For this, we add a RedisSpec struct to clientapp_types.go:
type RedisSpec struct {
	// +optional
	// +kubebuilder:default=true
	Enabled bool `json:"enabled"`
}

type ClientAppSpec struct {
	// ...existing fields...

	// +optional
	Redis *RedisSpec `json:"redis,omitempty"`
}
The operator also records the generated Secret name in status so tooling can find it:
type ClientAppStatus struct {
	// ...existing fields...

	// +optional
	RedisSecretName string `json:"redisSecretName,omitempty"`
}
Next, we auto-generate the Redis admin password via an ensureRedisSecret function, which stores it in a Secret with an owner reference pointing to the ClientApp CR, so it is automatically garbage-collected when the instance is deleted:
func (r *ClientAppReconciler) ensureRedisSecret(ctx context.Context, instance *appsv1alpha1.ClientApp) (string, error) {
	secretName := instance.Name + "-redis-secret"

	secret := &corev1.Secret{}
	getErr := r.Get(ctx, types.NamespacedName{Name: secretName, Namespace: instance.Namespace}, secret)
	if getErr == nil {
		return secretName, nil // already exists
	}
	if !apierrors.IsNotFound(getErr) {
		return "", getErr
	}

	pw, err := generatePassword(32)
	if err != nil {
		return "", fmt.Errorf("failed to generate Redis password: %w", err)
	}

	newSecret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      secretName,
			Namespace: instance.Namespace,
		},
		StringData: map[string]string{
			"redis-password": pw,
		},
	}
	if err := controllerutil.SetControllerReference(instance, newSecret, r.Scheme); err != nil {
		return "", err
	}
	if err := r.Create(ctx, newSecret); err != nil {
		return "", err
	}
	return secretName, nil
}
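The generatePassword helper isn’t shown here (it may ring a bell from an earlier part of the series); for completeness, a minimal crypto/rand-based sketch - the alphabet is an assumption:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// passwordAlphabet is an assumption - adjust to your policy.
const passwordAlphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// generatePassword returns a random string of length n drawn from
// passwordAlphabet, using crypto/rand.Int to avoid modulo bias.
func generatePassword(n int) (string, error) {
	out := make([]byte, n)
	max := big.NewInt(int64(len(passwordAlphabet)))
	for i := range out {
		idx, err := rand.Int(rand.Reader, max)
		if err != nil {
			return "", err
		}
		out[i] = passwordAlphabet[idx.Int64()]
	}
	return string(out), nil
}

func main() {
	pw, err := generatePassword(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pw)) // 32
}
```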
Last but not least, we wire the password into Helm via existingSecret:
if instance.Spec.Redis != nil && instance.Spec.Redis.Enabled {
	redisSecretName, err := r.ensureRedisSecret(ctx, instance)
	// ...error handling...
	instance.Status.RedisSecretName = redisSecretName
	values["redis"] = map[string]interface{}{
		"enabled": true,
		"auth": map[string]interface{}{
			"enabled":                   true,
			"existingSecret":            redisSecretName,
			"existingSecretPasswordKey": "redis-password",
		},
	}
}
We also need to add permissions so that our operator can manage Redis
// +kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses;networkpolicies,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=policy,resources=poddisruptionbudgets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=roles;rolebindings,verbs=get;list;watch;create;update;patch;delete
and adapt our HTTP API:
type CreateRequest struct {
	// ...
	Redis *bool `json:"redis,omitempty"`
}

// Redis is enabled by default; only disabled when explicitly set to false.
redisEnabled := req.Redis == nil || *req.Redis
redisSpec := &appsv1alpha1.RedisSpec{Enabled: redisEnabled}
And there we have Redis included in our operator. :)
Adding PostgreSQL via Percona
Our ClientApp needs a PostgreSQL database and durable storage. In order to provision a dedicated PerconaPGCluster for each instance and avoid manual setup, we’ll use the Percona PostgreSQL Operator.
Two new optional spec sections cover the full setup, both enabled by default:
spec:
  postgres:
    enabled: true
    version: "15"
    storageSize: "1Gi"
    instances: 1
  persistence:
    enabled: true
    size: "8Gi"
    storageClassName: ""
These map to two new structs in clientapp_types.go:
type PostgresSpec struct {
	// +kubebuilder:default=true
	Enabled bool `json:"enabled"`

	// +kubebuilder:default="1Gi"
	StorageSize string `json:"storageSize,omitempty"`

	// +kubebuilder:default="15"
	Version string `json:"version,omitempty"`

	// +kubebuilder:default=1
	Instances int32 `json:"instances,omitempty"`
}

type PersistenceSpec struct {
	// +kubebuilder:default=true
	Enabled bool `json:"enabled"`

	// +kubebuilder:default="8Gi"
	Size string `json:"size,omitempty"`

	StorageClassName string `json:"storageClassName,omitempty"`
}
Both structs also need to be referenced in our ClientAppSpec
type ClientAppSpec struct {
	// ...existing fields...

	// +optional
	Postgres *PostgresSpec `json:"postgres,omitempty"`

	// +optional
	Persistence *PersistenceSpec `json:"persistence,omitempty"`
}
and the ClientAppStatus needs to know about the cluster name:
type ClientAppStatus struct {
	// ...existing fields...

	// +optional
	PostgresClusterName string `json:"postgresClusterName,omitempty"`
}
Given the ClientApp’s database requirements are very specific, I won’t cover every step needed to get it running for our app. When it comes to the operator implementation, however, the following steps are necessary:
First, we define the spec for the PerconaPGCluster and set an owner reference on it, so the full setup is deleted when the ClientApp is deleted.
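To make the first step more tangible, here is a sketch of the manifest shape the operator builds. The field names follow my reading of the Percona PGO v2 CRD (postgresVersion, instances, proxy.pgBouncer, users) - treat the exact keys as assumptions and double-check them against the Percona docs; the real reconciler would build this as an unstructured.Unstructured (or via the typed Percona API) and set the owner reference with controllerutil.SetControllerReference:

```go
package main

import "fmt"

// buildPerconaCluster sketches the PerconaPGCluster object the operator
// creates for an instance. Plain maps are used here so the sketch stays
// dependency-free; the key names are assumptions based on the PGO v2 CRD.
func buildPerconaCluster(name, namespace, storageSize string, replicas int64) map[string]interface{} {
	return map[string]interface{}{
		"apiVersion": "pgv2.percona.com/v2",
		"kind":       "PerconaPGCluster",
		"metadata": map[string]interface{}{
			"name":      name,
			"namespace": namespace,
			// ownerReferences to the ClientApp go here (set via
			// controllerutil.SetControllerReference in the real code)
		},
		"spec": map[string]interface{}{
			"postgresVersion": 15,
			"instances": []interface{}{
				map[string]interface{}{
					"name":     "instance1",
					"replicas": replicas,
					"dataVolumeClaimSpec": map[string]interface{}{
						"accessModes": []interface{}{"ReadWriteOnce"},
						"resources": map[string]interface{}{
							"requests": map[string]interface{}{"storage": storageSize},
						},
					},
				},
			},
			"proxy": map[string]interface{}{
				"pgBouncer": map[string]interface{}{"replicas": int64(1)},
			},
			"users": []interface{}{
				map[string]interface{}{
					"name":      "clientapp",
					"databases": []interface{}{"clientapp"},
				},
			},
		},
	}
}

func main() {
	obj := buildPerconaCluster("ca-bold-leaf-j5cvv1", "ca-bold-leaf-j5cvv1", "1Gi", 1)
	fmt.Println(obj["kind"]) // PerconaPGCluster
}
```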
Given the Percona PGO creates the user credential Secret and the pgBouncer proxy asynchronously, our operator should wait for both before proceeding with any Helm install actions:
// 1. Wait for the credential secret
credSecretName := clusterName + "-pguser-clientapp"
credSecret := &corev1.Secret{}
if err := r.Get(ctx, types.NamespacedName{Name: credSecretName, ...}, credSecret); err != nil {
	return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}

// 2. Wait for pgBouncer endpoints to have ready addresses
pgBouncerSvcName := clusterName + "-pgbouncer"
endpoints := &corev1.Endpoints{}
if err := r.Get(ctx, types.NamespacedName{Name: pgBouncerSvcName, ...}, endpoints); err != nil || len(endpoints.Subsets) == 0 {
	return ctrl.Result{RequeueAfter: 10 * time.Second}, nil
}
We can then wire everything into Helm:
host := fmt.Sprintf("%s-pgbouncer.%s.svc.cluster.local", clusterName, instance.Namespace)
values["externalDatabase"] = map[string]interface{}{
	"enabled":  true,
	"type":     "postgresql",
	"host":     host,
	"user":     "clientapp",
	"database": "clientapp",
	"existingSecret": map[string]interface{}{
		"enabled":     true,
		"secretName":  credSecretName,
		"usernameKey": "user",
		"passwordKey": "password",
	},
}
The final steps include adding additional RBAC markers for Percona
// +kubebuilder:rbac:groups=pgv2.percona.com,resources=perconapgclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=endpoints,verbs=get;list;watch
and adapting the HTTP API:
type CreateRequest struct {
	// ...existing fields...
	Postgres    *bool `json:"postgres,omitempty"`
	Persistence *bool `json:"persistence,omitempty"`
}
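As with Redis, both flags default to true when omitted; the mapping into the spec structs could look like this (the enabledByDefault helper is our own, sketched here with stand-in types):

```go
package main

import "fmt"

// Stand-ins for the API types from clientapp_types.go.
type PostgresSpec struct{ Enabled bool }
type PersistenceSpec struct{ Enabled bool }

// enabledByDefault treats a missing flag as "on" - only an
// explicit false disables the feature.
func enabledByDefault(flag *bool) bool {
	return flag == nil || *flag
}

func main() {
	var req struct {
		Postgres    *bool
		Persistence *bool
	}
	off := false
	req.Persistence = &off // user explicitly disables persistence

	pg := &PostgresSpec{Enabled: enabledByDefault(req.Postgres)}
	ps := &PersistenceSpec{Enabled: enabledByDefault(req.Persistence)}
	fmt.Println(pg.Enabled, ps.Enabled) // true false
}
```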
And there we go - the operator spins up new ClientApps with a database. 🥳
Summing up
Quite some work, but we now have an operator that can create app instances - with autogenerated names and namespaces, and with Redis and PostgreSQL enabled - via a single API call.
While this is a great success, there are still open topics to tackle: the clientapp comes in different versions. There is a basic free version and a premium paid version. So what we want to implement next: let the operator deploy different app versions. How? Stay tuned for part 6 :)
header image created by buddy ChatGPT
