
Building a Multi-Cluster Platform with Kratix and Flux

In this post we will highlight some of the design principles behind Kratix and briefly explain the whys behind them. We will then jump straight into how Kratix manages a multi-cluster Platform and how we leverage Flux to reconcile state across clusters.


When designing Kratix, we wanted to ensure that:

  1. It would treat multi-cluster Kubernetes setups as standard.

  2. It would not try to treat multiple clusters as a "single cluster".

  3. We would leverage the CNCF ecosystem where possible.


Why multi-cluster?


As previously discussed in Challenge 11 of the 12 Platform Challenges series, while you can go a long way with a single Kubernetes cluster, there will come a point in a platform's life when it needs to scale beyond that single cluster.


That’s because as other teams in the organisation start to migrate their workloads to the platform, the stack running on the cluster starts to grow quickly, since different teams will have different software requirements and preferences.


As the stack grows, so do the interoperability challenges between software and software versions: different teams needing different observability stacks, different CI systems, or different versions of a particular operator, all adding to the overall size and complexity of your cluster deployment. A single bug can bring all of the teams’ workloads running on your Platform to a halt. And we’ve not even mentioned cluster upgrades.


The next step is to isolate teams across different Kubernetes clusters. The platform team can now handle each cluster individually and keep things simpler for the platform users. However, while the benefits for application teams are evident, things do get more complicated for the team managing the myriad of clusters now deployed, often across multiple clouds.


To simplify operations, platform teams often take the step of trying to manage all the clusters through a single pane of glass. This approach, however, often couples the clusters and ends up bringing all the problems of the single cluster setup back to the surface: long lists of APIs and CRDs, hundreds of workloads to parse, slow systems.


The evolution of the platform is then the decoupling of the platform cluster from where the workloads are running. The challenge becomes how to find an easy way to orchestrate the deployment of large, complex application stacks across various clusters, while accounting for errors and failures. This is exactly the challenge we built Kratix to solve.


How does Kratix orchestrate the workloads?


With Kratix’s design principles in mind, we first needed a way to:

  1. Ensure Kratix knows where workloads can be scheduled.

  2. Ensure that, once Kratix schedules the workloads, they are applied to the clusters.

For (1), we introduced two concepts: the Cluster and the Promise. A Kratix Cluster is a representation of any system to which workloads can be scheduled. Kratix is configured to communicate with a repository, and upon Cluster registration, it creates either a bucket or a Git repository dedicated to that Cluster.
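To make that concrete, here is a sketch of what registering a Cluster can look like. The exact schema depends on your Kratix version, so treat the field names and labels below as illustrative:

# Illustrative Kratix Cluster registration; schema may vary by version
apiVersion: platform.kratix.io/v1alpha1
kind: Cluster
metadata:
  name: worker-cluster-1
  labels:
    environment: dev  # hypothetical label; labels can be used to influence scheduling

Applying a document like this is what triggers Kratix to create the dedicated bucket or Git repository for that Cluster in the configured state store.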


A Kratix Promise is the building block that enables teams to design platforms that specifically meet their customer needs. It’s through Promises that the platform provides any software as-a-Service. In the Promise definition, you can find what the dependencies are for that software (worker cluster resources), what needs to happen when a new request for an instance of that software comes through the platform (request pipeline), and the API for the promise itself (CRD). For more details on Promises, check out Writing a Promise.
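As a rough sketch, those three parts map onto the top-level fields of a Promise document. The field names below are illustrative; see Writing a Promise for the authoritative schema:

# Skeletal Promise; field names are illustrative
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: ha-redis
spec:
  workerClusterResources: []  # dependencies installed on matching Clusters (e.g. an Operator and its CRDs)
  xaasRequestPipeline: []     # ordered container images run for each new request
  xaasCrd: {}                 # the as-a-Service API offered to platform users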


When a Promise is installed, Kratix will create a copy of the Promise's worker cluster resources in the repository of each registered Cluster that is permitted to receive workloads of that Promise. For example, if you install the Redis Promise from the Kratix Marketplace, a copy of the manifest for the Redis Operator plus the Redis CRDs will be persisted to each of those repositories.


Similarly, when a new Resource Request is submitted, Kratix will trigger the Promise's pipelines, which will output the desired documents. Once all the pipelines have been executed, Kratix will determine in which of the registered Clusters the workload can be scheduled, and select a single Cluster. It will then persist the documents into the repository for that Cluster.
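For example, a Resource Request against a Redis Promise might look something like the document below. The API group, kind, and spec fields are whatever the Promise's CRD defines, so everything here is hypothetical:

# Hypothetical Resource Request; the real API is defined by the Promise's CRD
apiVersion: marketplace.kratix.io/v1alpha1
kind: redis
metadata:
  name: my-team-cache
spec:
  size: small

Submitting a request like this to the platform cluster kicks off the pipeline, and the resulting documents land in exactly one Cluster's repository.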


Note that the Kratix Cluster is a logical concept, not necessarily a physical Kubernetes cluster. Kratix schedules work by placing the documents in the repository. The systems consuming these documents are often Kubernetes clusters (hence the naming), but the consumer could be anything: a VM, a service, or nothing at all. For example, when integrating Kratix with Backstage, we registered Backstage as a Kratix Cluster and configured Backstage to watch the created bucket for its Catalog. This design allows the systems watching the repositories to come and go, while Kratix remains able to schedule workloads to a representation of those systems.


Once the documents are in the repository, it stops being a "how to provide X-as-a-Service?" type of problem and turns into a "how to ensure that the cluster state matches the state specified in the repository?" type of problem. This second problem matches exactly the definition of GitOps.


Following our design principles, we stepped back and looked around the ecosystem to find projects that are solving the GitOps challenge. That's where Flux comes into the picture. Flux is “a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy”. Furthermore:

  • It’s simple to install and use.

  • It works with multiple repository technologies, including S3-compatible buckets and Git repositories.

  • It's flexible on the reconciliation strategy (interval, cleaning up resources, etc).

If you try Kratix following our Multi-cluster with KinD quick-start, a few things happen:

  • We create a “platform” and a “worker” Kubernetes cluster using KinD

  • On the platform cluster, we install Kratix and MinIO

  • On the worker cluster, we install Flux

  • We register the worker cluster in the platform

  • We configure Flux on the worker cluster to watch for buckets in the platform MinIO

The Flux configuration is divided into two parts:


Source configuration


First we need to tell Flux about the Source. A Source “defines the origin of a repository containing the desired state of the system and the requirements to obtain it (e.g. credentials, version selectors)”.
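In the quick-start setup, each Kratix-created bucket is registered as a Flux Bucket source along these lines. The endpoint matches the MinIO NodePort from the quick-start; the bucket and Secret names are illustrative:

# Flux Bucket source watching a Kratix-created MinIO bucket
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: Bucket
metadata:
  name: kratix-workload-resources
  namespace: flux-system
spec:
  interval: 10s
  provider: generic
  bucketName: worker-cluster-1-kratix-resources  # the bucket Kratix created for this Cluster
  endpoint: 172.18.0.2:31337
  insecure: true                                 # MinIO in the quick-start runs without TLS
  secretRef:
    name: minio-credentials                      # assumed Secret holding the MinIO access keys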


When you list the buckets available in the worker, you should see two:

$ kubectl --context kind-worker get buckets.source.toolkit.fluxcd.io -A
NAMESPACE     NAME                        ENDPOINT           AGE  READY   STATUS
flux-system   kratix-workload-crds        172.18.0.2:31337   3m   True    ...
flux-system   kratix-workload-resources   172.18.0.2:31337   3m   True    ...

Reconciliation configuration


Telling Flux about a Source is not enough: we need to instruct it on how it should act on that Source. That’s done through a Kustomization. A Kustomization “represents a local set of Kubernetes resources that Flux is supposed to reconcile in the cluster”.
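For each Bucket source there is a matching Kustomization telling Flux to apply whatever the bucket contains. A sketch (the apiVersion may differ depending on your Flux version):

# Flux Kustomization reconciling the contents of the Bucket source
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: kratix-workload-resources
  namespace: flux-system
spec:
  interval: 10s
  prune: true      # remove resources that disappear from the bucket
  path: ./
  sourceRef:
    kind: Bucket
    name: kratix-workload-resources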


When you list the Kustomizations available in the worker, you should see two:

$ kubectl --context kind-worker get kustomizations.kustomize.toolkit.fluxcd.io -A
NAMESPACE     NAME                        AGE  READY   STATUS
flux-system   kratix-workload-crds        3m   True    Applied revision: ...
flux-system   kratix-workload-resources   3m   True    Applied revision: ...

You can inspect the documents for the Flux configuration here.


With these two bits of configuration, we tell Flux that there are buckets at a particular location and that it should reconcile any changes to their contents.


If at this point you list the namespaces on the worker cluster, you should notice a "kratix-worker-system" namespace. That’s because Kratix will, at cluster registration time, write a namespace document to the bucket, which in turn is applied to the worker cluster by Flux. You can see the document by checking the "worker-cluster-1-kratix-resources" bucket in MinIO. To do so, run the command below and access the MinIO Console at http://localhost:9000 (default credentials: minioadmin/minioadmin).


# CTRL+C to terminate
kubectl --context kind-platform port-forward deployment/minio 9000 42435
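Inside that bucket you'll find the namespace document itself, which is just a plain Kubernetes manifest along these lines (Kratix may attach extra labels):

apiVersion: v1
kind: Namespace
metadata:
  name: kratix-worker-system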

Although the standard installation of Kratix is backed by Flux, we are not opinionated about the GitOps toolset you should use. You could set up your Worker Clusters with ArgoCD, Jenkins X, Harness CD, a custom script, or any other toolkit. Once the document is persisted in the repository, Kratix is happy that the work has been scheduled.


By leveraging the GitOps toolkit to do the heavy lifting of keeping clusters in sync with a desired state, we are able to focus our efforts on the core differentiators of Kratix, while at the same time giving platform operators and users the flexibility to use the tools they prefer. To see Kratix in action, head to our documentation and follow one of our guides.


We’d also love to hear your feedback on the architecture we propose above, understand more about the challenges you are facing as you scale your platform, or learn more about how you are leveraging GitOps in your platform. Please get in touch either by leaving a comment below or by scheduling a chat!

