ArgoCD in Practice: Installation, Configuration, and Application Management

Disclaimer: This post was originally written for my company’s blog. I’ve translated it and added a personal touch for this space.

At work, we mostly rely on Flux for projects we manage ourselves. However, many of our clients use ArgoCD in their own environments. So naturally, I spent some time digging into it, which eventually turned into a small blog series about GitOps and both tools. This article is the first part of the ArgoCD side of that journey 🙂

We’ll start with the basics: installing ArgoCD, connecting it to a Git repository, and deploying applications. Along the way, we’ll get hands-on with ArgoCD’s core concepts — Application, AppProject, and ApplicationSet — not as abstract ideas, but as part of a working setup.

As a practical example, we’ll use a simple demo application built with Helm, consisting of a frontend and a backend. We’ll begin with a single environment and later expand the setup to multiple environments. The only assumption is that the application already exists, builds successfully, and has been deployed to a Kubernetes cluster at least once.

Prerequisites

If you want to follow along, here’s what you’ll need:

  • A Git repository containing Helm charts for a frontend and a backend
  • A GitLab pipeline that builds the images and pushes them to the GitLab Container Registry
  • A Kubernetes cluster with access to the GitLab registry (e.g. via a GitLab Deploy Token stored as a Kubernetes Secret and referenced in the Helm chart)

Installing ArgoCD and the ArgoCD CLI

We’ll start by creating a dedicated namespace for ArgoCD:

kubectl create namespace argocd

Next, we deploy ArgoCD into the cluster using the official manifests:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This spins up several Kubernetes resources, including:

  • argocd-server: The web UI and API — this is where we can interact with ArgoCD

  • argocd-repo-server: Responsible for rendering Helm charts or Kustomize setups from Git

  • argocd-application-controller: Continuously compares the desired state from Git with what’s actually running in the cluster

  • argocd-dex-server: (Optional) identity provider for SSO setups

  • argocd-redis: A Redis instance used internally for caching

  • Additional resources like argocd-cm and argocd-rbac-cm that control ArgoCD’s behavior
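
Before moving on, it’s worth confirming that all of these components actually came up. A quick check, assuming a working kubectl context pointing at the cluster (no other assumptions):

```shell
# Wait until all ArgoCD deployments report ready (times out after 2 minutes)
kubectl -n argocd wait --for=condition=Available deployment --all --timeout=120s

# List the pods to see the components described above
kubectl -n argocd get pods
```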

To interact with ArgoCD from the terminal (which is often faster than clicking around), we’ll also install the ArgoCD CLI:

# Linux

curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/

# macOS

brew install argocd

# Windows

choco install argocd

With ArgoCD and the CLI in place, we’re ready to start preparing our deployment.

ArgoCD Dashboard

Although we’ll mostly use the CLI in this article, it’s worth taking a quick look at the UI. First, we’ll need to port-forward the ArgoCD server:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Next, retrieve the auto-generated admin password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

You can now log in to the UI at https://localhost:8080 (accept the self-signed certificate warning) with the username admin and the password you just retrieved.

Under Settings -> Clusters, you’ll see that the current cluster is already connected. Internally, ArgoCD references it via the URL https://kubernetes.default.svc, which we’ll use again later.

Connecting the GitLab Repository

For ArgoCD to deploy anything, it needs access to our Git repository. In this example, we’ll use a GitLab Deploy Token with read-only access.

In GitLab:

  1. Go to Settings -> Repository -> Deploy Tokens
  2. Create a new token with Read Repository permissions.
  3. Copy the username and password right away (the password won’t be shown again).

Now log in to ArgoCD via the CLI:

argocd login localhost:8080 \
 --insecure \
 --username admin \
 --password $(kubectl get secret \
 -n argocd \
 argocd-initial-admin-secret \
 -o jsonpath="{.data.password}" | base64 -d)

Then add the repository:

argocd repo add <repository link> --username <deploy-token-name> --password <deploy-token-password>

If everything worked, the repository will show up under Settings -> Repositories in the UI, ideally marked as Successful.
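
The same check works from the CLI; the repository should appear with a Successful connection state (assuming the argocd login from above is still valid):

```shell
# List configured repositories and their connection status
argocd repo list
```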

Deploying Our Application

Now comes the interesting part. This is where ArgoCD’s main concepts come together:

Application: Describes what to deploy, from where, and to which cluster and namespace. It’s the core unit ArgoCD uses to reconcile Git and Kubernetes.

AppProject: Defines boundaries and rules for Applications and is especially useful when multiple teams or environments are involved.

ApplicationSet: A generator for Applications, allowing you to manage many similar deployments with very little config.

For our demo, we’ll start simple and deploy only the dev environment.

Creating ArgoCD Applications

Let’s define two separate Applications – one for the frontend and one for the backend.

frontend.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <Repository Link>
    targetRevision: main
    path: <Path to Helm Chart>
    helm:
      valueFiles:
        - values/dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

backend.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backend-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <Repository Link>
    targetRevision: main
    path: <Path to Helm Chart>
    helm:
      valueFiles:
        - values/dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

We can then apply them to the cluster:

kubectl apply -f frontend.yaml
kubectl apply -f backend.yaml

From here on, ArgoCD takes over: it watches Git, compares desired and actual state, and keeps the cluster in sync automatically.
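
The CLI gives a quick view of what ArgoCD is doing at this point; these commands assume the two Application names used above:

```shell
# Overview of all Applications with their sync and health status
argocd app list

# Detailed view of a single Application, including its managed resources
argocd app get frontend-dev

# Trigger a sync manually (normally unnecessary with automated sync enabled)
argocd app sync frontend-dev
```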

Scaling with ApplicationSets

Real projects rarely stop at a single environment. Instead of copying Applications for dev, stage, and prod, we can use an ApplicationSet.

Here’s a simple list-based example:

applicationset.yaml

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: multi-env-frontends
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: dev
          - env: stage
          - env: prod
  template:
    metadata:
      name: "{{env}}-frontend"
    spec:
      project: default
      source:
        repoURL: <Repository Link>
        targetRevision: main
        path: <Path to Helm Chart>
        helm:
          valueFiles:
            - values/{{env}}.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{env}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

The backend follows the same pattern.

We can then apply this ApplicationSet via:

kubectl apply -f applicationset.yaml

ArgoCD now generates one Application per environment automatically. Other generators (Git, matrix, cluster-based) can make this even more dynamic and are worth exploring once the basics are in place.
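
As a taste of the Git generator, here’s a hedged sketch: it assumes each environment lives in its own directory under envs/ in the repository and generates one Application per directory found. The directory layout and paths are illustrative, not part of the demo repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: frontend-per-env
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: <Repository Link>
        revision: main
        directories:
          - path: envs/*   # e.g. envs/dev, envs/stage, envs/prod
  template:
    metadata:
      name: "frontend-{{path.basename}}"
    spec:
      project: default
      source:
        repoURL: <Repository Link>
        targetRevision: main
        path: "{{path}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{path.basename}}"
```

With this setup, adding a new environment is just a matter of creating a new directory in Git.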

Using AppProjects

So far, we’ve used the default project everywhere. That’s fine for demos, but not ideal in larger setups.

AppProjects let you define boundaries: which repositories are allowed, which namespaces can be targeted, and which clusters are in scope.

A simple example:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: demo-project
  namespace: argocd
spec:
  description: Demo project for frontend and backend
  sourceRepos:
    - https://gitlab.com/your-group/your-repo
  destinations:
    - namespace: dev
      server: https://kubernetes.default.svc
    - namespace: stage
      server: https://kubernetes.default.svc
    - namespace: prod
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"

This completes our walkthrough of the three main ArgoCD concepts. 🎉

Wrapping up

In this article, we walked through ArgoCD from installation to managing real applications in a GitOps-style workflow. Starting with a simple setup, we deployed frontend and backend services, then scaled the same configuration across multiple environments using ApplicationSets. Along the way, we touched on the core building blocks of ArgoCD—Applications, ApplicationSets, and AppProjects—and how they fit together in practice.

While the examples here are intentionally simple, they already cover most of what you need for day-to-day GitOps workflows. From here, it’s easy to extend the setup with stricter access controls, more advanced generators, or integrations with CI pipelines and secrets management.