Closing the GitOps Gap: Delivering Kratix Promises with Flux
Okesanya Odunayo Samuel · Oct 9 · 10 min read · Updated: Oct 31
Many organisations have successfully adopted GitOps for application delivery, but platform infrastructure provisioning often remains stubbornly manual, especially within enterprises. While applications flow seamlessly from Git commits to production deployments, platform teams still rely on kubectl commands, custom scripts, and manual coordination across clusters.
This creates an operational disconnect: applications get declarative configuration and automated rollbacks, while the infrastructure supporting them requires manual intervention and lacks audit trails.
Kratix's native integration with GitOps tools changes this dynamic entirely. Instead of treating platform provisioning as a separate concern, you can extend your existing GitOps workflows to platform operations themselves. Whether you're using Flux, ArgoCD, or other GitOps agents, the same principles apply.
And by the end of this guide, you’ll see what this looks like in practice. You’ll request a Redis instance with a simple YAML manifest, Kratix will process the request, and Flux will automatically deploy the resources across your clusters.
The platform engineering GitOps challenge

Platform teams face a fundamental workflow inconsistency that grows more problematic as organisations scale. Application teams submit pull requests and see new services running in production within minutes. When those same teams need infrastructure, such as databases, message queues, or monitoring stacks, the process reverts to manual coordination with multi-day turnaround times.
This isn't just a tooling problem. Traditional GitOps tools excel at application deployment but lack the abstraction layer platform teams need. For instance, a platform team can use Flux to deploy a Redis operator. However, for application developers to request an instance from that operator, they still need to understand CRDs, operator configuration, and cluster topology.
Even when developers can manage these complexities, enforcing organisational policies around security, compliance, and resource allocation remains outside the scope of the operator and bolted on with hard-to-maintain scripts or manual processes.
Kratix addresses all of these problems by providing an abstraction layer that integrates seamlessly with existing Flux workflows. Developers get simple APIs for requesting services, while Flux handles the deployment mechanics you already trust.
Below is a comparison showing how platform teams can reduce operational overhead by switching from manual Redis deployments to developer self-service using Kratix.
The manual way
Install the Redis operator:
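For example, using the Opstree Redis operator installed via Helm (the operator choice, chart repository, and namespace here are one common setup; any production-grade Redis operator works):

```shell
# Add the Opstree Helm repository and install the Redis operator
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm repo update
helm install redis-operator ot-helm/redis-operator \
  --namespace ot-operators --create-namespace
```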
The commands above install the Redis operator, which manages Redis clusters in Kubernetes and handles much of the production complexity for you.
Create a namespace for the specific Redis instance:
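For instance (the namespace name is illustrative):

```shell
# Isolated namespace for this Redis instance
kubectl create namespace redis-prod
```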
This sets up an isolated namespace for the Redis instance to manage resources and permissions.
Create a Redis instance using the operator's CRD:
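A sketch of what that might look like with the Opstree operator's CRDs; the API group, kinds, and field names below follow its v1beta2 API, so adjust them to whichever operator you installed:

```yaml
# redis-replication.yaml: a 3-node Redis replication group
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
  name: redis-prod
  namespace: redis-prod
spec:
  clusterSize: 3
  kubernetesConfig:
    image: quay.io/opstree/redis:v7.0.12
---
# redis-sentinel.yaml: 3 sentinel nodes for automatic failover
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisSentinel
metadata:
  name: redis-prod
  namespace: redis-prod
spec:
  clusterSize: 3
  kubernetesConfig:
    image: quay.io/opstree/redis-sentinel:v7.0.12
  redisSentinelConfig:
    redisReplicationName: redis-prod
```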
Deploys a highly available Redis cluster with 3 Redis nodes and 3 sentinel nodes for automatic failover.
Wait for Redis to be ready and get connection details:
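Something along these lines; the StatefulSet and Secret names depend on how the operator names its resources:

```shell
# Wait for the Redis pods to become ready
kubectl -n redis-prod rollout status statefulset/redis-prod --timeout=300s

# Extract the generated password to hand to developers
kubectl -n redis-prod get secret redis-prod \
  -o jsonpath='{.data.password}' | base64 -d
```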
Monitors the deployment and extracts connection credentials to share with developers.
Update application configuration with Redis connection details:
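For example, by creating a ConfigMap and restarting the workload (the app name and keys are illustrative):

```shell
# Point the application at the new instance
kubectl -n my-app create configmap redis-config \
  --from-literal=REDIS_HOST=redis-prod.redis-prod.svc.cluster.local \
  --from-literal=REDIS_PORT=6379

# Restart the app so it picks up the new configuration
kubectl -n my-app rollout restart deployment/my-app
```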
Manually update the application configuration with the Redis connection string.
The Kratix way: developer self-service
Developers request a Redis instance through a simple API. The platform handles provisioning, configuration, and connection setup automatically.
Setting up your environment
Before you can follow along with this article, you need the following installed on your machine:
kubectl: Command-line tool for interacting with Kubernetes clusters.
Ability to create and access Kubernetes clusters: Although we'll use KinD for this demo, you can also use cloud providers such as GKE, EKS, or AKS, or other local tools like Minikube.
Git repository access (GitHub account): Required to host the Kratix state store and manage configuration through GitOps.
Docker (for KinD): Container runtime needed to run KinD clusters locally.
NOTE: This demo uses a multi-cluster setup because that's how platform engineering works in practice. You need separation between your control plane and the clusters running production workloads.
This architecture allows you to use lightweight clusters for the platform while running workloads on production-grade infrastructure.
First, let's establish our clusters:
Create the clusters with a specific Kubernetes version
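For example (pin whichever node image matches the Kubernetes version you want to test against):

```shell
# Create the platform (control plane) and worker clusters
kind create cluster --name platform --image kindest/node:v1.30.0
kind create cluster --name worker --image kindest/node:v1.30.0

# List the clusters KinD knows about
kind get clusters
```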
You should see output confirming your clusters are running:
Set up environment variables (Kind automatically creates contexts as "kind-<cluster-name>")
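For example:

```shell
export PLATFORM="kind-platform"
export WORKER="kind-worker"
```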
Verify connectivity
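With the environment variables from the previous step set:

```shell
kubectl --context "$PLATFORM" get nodes
kubectl --context "$WORKER" get nodes
```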
You should see a similar output:
The platform cluster hosts Kratix and serves as your control plane for internal platform services. The worker cluster runs the actual workloads that your teams request. This separation ensures platform operations don't interfere with production workloads while maintaining centralised governance.
Step 1: Connecting Kratix to Git
Most Kratix demonstrations use MinIO buckets for state storage. This works great, but for a truly native GitOps workflow, all declared state should be maintained using Git.
Before we install Kratix, we need to install cert-manager for webhook certificates:
Install cert-manager:
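For example, applying the upstream release manifest (check cert-manager's releases page for the current version):

```shell
kubectl --context "$PLATFORM" apply -f \
  https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml
```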
You'll see cert-manager resources being created:
Wait for cert-manager to be ready:
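For example:

```shell
kubectl --context "$PLATFORM" -n cert-manager wait \
  --for=condition=Available deployment --all --timeout=120s
```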
Installing Kratix
Install Kratix on the platform cluster with the command below:
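A sketch using the distribution manifest from the Kratix repository (verify the install URL against the current Kratix documentation):

```shell
kubectl --context "$PLATFORM" apply -f \
  https://raw.githubusercontent.com/syntasso/kratix/main/distribution/kratix.yaml
```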
Wait for Kratix to be ready:
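For example:

```shell
kubectl --context "$PLATFORM" -n kratix-platform-system wait \
  --for=condition=Available deployment --all --timeout=300s
```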
You'll see Kratix components being created:
Configuring Git integration
Now that Kratix is installed, we can set it up to use Git for state storage.
Start by creating a GitHub repository to use as your Kratix state store. It helps to initialise the repo with a README so you don’t run into problems with an empty repository:
Export your GitHub username and token:
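For example (substitute your own values):

```shell
export GITHUB_USERNAME="<your-github-username>"
export GITHUB_TOKEN="<your-personal-access-token>"
```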
Create a GitHub repository using the GitHub CLI:
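A sketch with the gh CLI; the repository name kratix-state-store is simply the one used throughout this guide, and the --add-readme flag requires a reasonably recent gh version:

```shell
gh repo create "$GITHUB_USERNAME/kratix-state-store" \
  --private --add-readme
```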
If you don't have GitHub CLI configured, create the repository through the GitHub web interface and ensure that you initialise it with a README file.
Kratix needs authentication to push configuration changes to your GitHub repository. For this reason, you’ll need a Personal Access Token with permission to read and write to a repository.
To create one, follow the steps in the official GitHub documentation, and scope the token to repository permissions only (it needs read and write access to repository contents).
Next, you'll need to export your GitHub credentials and create a secret for Kratix to use:
Create the secret using the exported variables:
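For example, in the namespace where Kratix runs (the secret name git-credentials is reused for Flux later):

```shell
kubectl --context "$PLATFORM" -n kratix-platform-system \
  create secret generic git-credentials \
  --from-literal=username="$GITHUB_USERNAME" \
  --from-literal=password="$GITHUB_TOKEN"
```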
Now, configure Kratix to use Git as its state store. The GitStateStore tells Kratix where to commit the manifests it generates, essentially making your Git repository the single source of truth for all platform operations.
Create the GitStateStore configuration file and apply it. In this case, we called this file git-state-store.yaml:
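A sketch of the file; the field names follow the Kratix GitStateStore API, so check the docs for your Kratix version, and substitute your own repository URL:

```yaml
# git-state-store.yaml
apiVersion: platform.kratix.io/v1alpha1
kind: GitStateStore
metadata:
  name: default
spec:
  authMethod: basicAuth
  url: https://github.com/<your-github-username>/kratix-state-store.git
  branch: main
  path: destinations/
  secretRef:
    name: git-credentials
    namespace: kratix-platform-system
```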
This configuration uses basic authentication with your GitHub credentials, organises all manifests under the destinations/ directory, and points to your state store repository.
The commands below set up and verify the Git state store on the platform cluster.
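For example:

```shell
kubectl --context "$PLATFORM" apply -f git-state-store.yaml
kubectl --context "$PLATFORM" get gitstatestores
```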
When Kratix successfully connects, the GitStateStore will show as "Ready".
Note that Kratix only commits to your repository when there's actual work to do, so you won't see any commits yet.
Step 2: Registering your worker cluster
Now that Kratix is connected to Git, the next step is to register our worker cluster. This is where the actual workloads will run, and by registering it, we're essentially telling Kratix about our available clusters so it knows where to place resources for Flux to deploy.
Create the worker destination configuration file and apply it. In this demo, we called it worker-destination.yaml:
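A sketch of the file; field names follow the Kratix Destination API, so confirm them against the docs for your version:

```yaml
# worker-destination.yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Destination
metadata:
  name: worker-cluster
  labels:
    environment: dev
spec:
  stateStoreRef:
    name: default
    kind: GitStateStore
```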
This configuration creates a destination named "worker-cluster" with an "environment: dev" label for targeting workloads, and connects it to the GitStateStore we configured earlier.
The commands below register and verify the worker destination on the platform cluster.
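For example:

```shell
kubectl --context "$PLATFORM" apply -f worker-destination.yaml
kubectl --context "$PLATFORM" get destinations
```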
You should see your destination registered:
Once you register the destination, Kratix will make its first commit to your Git repository, creating the directory structure for the worker cluster. This is when you'll see the destinations/worker-cluster/ paths appear in your repo.
The environment: dev label enables targeting workloads to specific cluster types using destination selectors.
Step 3: Setting up Flux on the worker cluster
Flux monitors the state store repository and deploys whatever Kratix commits to it. This creates the GitOps automation: Kratix generates manifests, commits them to Git, and then its job is done. Flux automatically reconciles the declared resources from there.
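For example, using the flux CLI to install the controllers directly against the worker context (no repository bootstrap needed, since Kratix owns the Git side):

```shell
flux install --context "$WORKER"
```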
You'll see Flux components being installed:
Once ready, all Flux pods should show as running:
Configuring Flux to read from the state store
Since we already created Git credentials for Kratix, we can reuse those same credentials for Flux. Flux needs access to the same Git repository to monitor and apply changes, so we'll copy the existing secret to the flux-system namespace where Flux expects to find it.
The command below extracts the git-credentials secret from the kratix-platform-system namespace, modifies the namespace field to flux-system, and creates it in the worker cluster:
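One way to do that is a simple stream edit of the secret's namespace field:

```shell
kubectl --context "$PLATFORM" -n kratix-platform-system \
  get secret git-credentials -o yaml \
  | sed 's/namespace: kratix-platform-system/namespace: flux-system/' \
  | kubectl --context "$WORKER" apply -f -
```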
Create the Flux source configuration file:
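A sketch of the GitRepository source (Flux's source.toolkit.fluxcd.io/v1 API); apply it to the worker cluster with kubectl:

```yaml
# flux-source.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: kratix-state-store
  namespace: flux-system
spec:
  interval: 10s
  url: https://github.com/<your-github-username>/kratix-state-store.git
  ref:
    branch: main
  secretRef:
    name: git-credentials
```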
This configuration connects Flux to your state store repository using the copied credentials and checks for changes every 10 seconds.
Kratix structures its output in a clear way. Dependencies, which are installed during Promise setup, are kept separate from resources, which are created on a per-request basis. This separation allows Flux to process them in the correct order:
The dependencies Kustomization installs operators and CRDs first, while the resources Kustomization waits for dependencies to be ready before deploying actual workload instances. This ensures operators are available before any resources that need them.
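A sketch of the two Kustomizations; the paths assume the destinations/worker-cluster/ layout that Kratix commits for this destination:

```yaml
# kustomizations.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dependencies
  namespace: flux-system
spec:
  interval: 10s
  sourceRef:
    kind: GitRepository
    name: kratix-state-store
  path: ./destinations/worker-cluster/dependencies
  prune: true
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: resources
  namespace: flux-system
spec:
  interval: 10s
  dependsOn:
    - name: dependencies
  sourceRef:
    kind: GitRepository
    name: kratix-state-store
  path: ./destinations/worker-cluster/resources
  prune: true
```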
Once everything is configured, you should see both Kustomizations ready:
The dependency ordering ensures that operators and CRDs are installed before any resources that depend on them.
Step 4: Installing your first Promise
At this point, you have Kratix connected to Git and Flux monitoring your state store, but your platform does not yet offer any services. This is where Promises come in.
A Promise in Kratix is like an agreement between the platform team and developers. It serves as a template that outlines what developers can request, such as a Redis instance, and how the platform will fulfil that request.
The goal is to take away the complexity behind the scenes. Instead of worrying about Redis operators, CRDs, or cluster setup, developers only need to make a simple request, such as saying, “I need a large Redis instance.”
In this guide, we will use the Redis Promise from the Kratix Marketplace. It is simple to follow, yet practical enough to clearly demonstrate the value a Promise provides.
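A sketch of the install, assuming the promise manifest lives at the usual path in the marketplace repository (confirm the URL on the Kratix Marketplace page):

```shell
kubectl --context "$PLATFORM" apply -f \
  https://raw.githubusercontent.com/syntasso/kratix-marketplace/main/redis/promise.yaml
```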
Initially, the Promise will show as "Unavailable" while the promise-configure pipeline installs the Redis operator:
You can monitor the pipeline progress through events:
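For example:

```shell
# Pipeline pods appear on the platform cluster while the Promise configures
kubectl --context "$PLATFORM" get pods --watch

# Recent events, newest last
kubectl --context "$PLATFORM" get events --sort-by=.lastTimestamp
```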
You should see the following output:
During this step, Kratix generates the Redis operator manifests and saves them in the dependencies/ folder of your Git repository. This makes sure the Redis operator is in place on the worker cluster before any Redis instances are created.
Once the promise-configure pipeline finishes, the Promise is ready to use:
Let's examine what this Promise provides to developers:
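For example, by inspecting the CRD the Promise installed (the resource name redis is what the marketplace Promise registers; adjust if yours differs):

```shell
kubectl --context "$PLATFORM" get crds | grep redis
kubectl --context "$PLATFORM" explain redis.spec
```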
The output shows the API developers will use to request Redis instances:
The Redis Promise is refreshingly simple. When developers want a Redis instance, they create a resource request with just one field: size (either "small" or "large"). No complex configuration, no overwhelming options, just a clean abstraction that handles all the complexity behind the scenes.
When someone requests a Redis instance, the Promise runs a pipeline container that translates that simple request into all the Kubernetes manifests needed for a production Redis deployment. The platform team has encoded their best practices, security policies, and operational requirements into the Promise, ensuring that every Redis instance is deployed consistently.
Step 5: Testing the end-to-end GitOps workflow
Time to test the complete workflow. Let's request a Redis instance and observe it flowing through the pipeline.
Create the request file redis-request.yaml and submit the request:
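A sketch of the request; the apiVersion and kind are those registered by the marketplace Redis Promise, so confirm them with kubectl get crds if your Promise differs:

```yaml
# redis-request.yaml
apiVersion: marketplace.kratix.io/v1alpha1
kind: redis
metadata:
  name: my-redis
  namespace: default
spec:
  size: small
```

Then submit it to the platform cluster:

```shell
kubectl --context "$PLATFORM" apply -f redis-request.yaml
```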
Here's what happens across the clusters:
Platform cluster: Kratix validates your request and runs the instance-configure pipeline.
Platform cluster: The pipeline creates the Redis manifests and pushes them to Git.
Worker cluster: Flux detects the Git changes and deploys the Redis instance.
You can watch this process in real-time. Check your Git repository via the web console or use the command below:
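For example:

```shell
git clone "https://github.com/$GITHUB_USERNAME/kratix-state-store.git"
cd kratix-state-store
git log --oneline
```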
You'll see commits from Kratix that show the GitOps workflow in action. Notice the specific commit for your Redis request:
The commit 86a1ab0 contains the manifests generated from the Redis request submission.
The generated manifests show how Kratix organises the output:
Then, verify the deployment on the worker cluster:
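For example:

```shell
# The operator (a Promise dependency) and the instance it created
kubectl --context "$WORKER" get pods --all-namespaces | grep -i redis
kubectl --context "$WORKER" get statefulsets --all-namespaces
```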
You'll see the Redis operator running on the worker cluster and your Redis instance deployed:
This confirms your Redis instance is fully deployed and operational on the worker cluster, managed entirely through GitOps without manual intervention.
Troubleshooting common issues
GitOps workflows require solid debugging capabilities. Here's how to troubleshoot the most frequent problems:
1. Promise installation issues: This happens when a Promise, which is like a blueprint for a service, isn't set up correctly. This means the platform cannot install what is needed for developers to request new services.
Here are some troubleshooting steps:
Check Promise status:
Look for failed pipeline pods:
Check logs for specific failures:
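The three checks above might look like this (substitute the actual pipeline pod name from the previous command):

```shell
# Promise status: READY should eventually be True
kubectl --context "$PLATFORM" get promises

# Pipeline pods that did not complete
kubectl --context "$PLATFORM" get pods --field-selector=status.phase=Failed

# Logs from a specific pipeline pod
kubectl --context "$PLATFORM" logs <pipeline-pod-name>
```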
2. Git authentication problems: These issues occur when Kratix can't properly connect to your Git repository. It's usually because the credentials (username and password or token) used to access Git are incorrect or lack the necessary permissions.
Here are some troubleshooting steps:
Check GitStateStore status:
Look for authentication errors in Kratix logs:
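For example (the controller deployment name shown is the Kratix default; adjust if your install differs):

```shell
# GitStateStore should report Ready once authentication succeeds
kubectl --context "$PLATFORM" get gitstatestores

# Controller logs often show push or authentication failures
kubectl --context "$PLATFORM" -n kratix-platform-system \
  logs deployment/kratix-platform-controller-manager | grep -iE "auth|denied|fail"
```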
3. Flux reconciliation problems: This problem indicates that Flux, which is responsible for deploying changes from Git to your worker cluster, is having trouble. It means that even though Kratix is committing changes to Git, Flux isn't able to apply those changes to your worker cluster.
Here are some troubleshooting steps:
Check Flux source status:
Look for reconciliation errors:
Force reconciliation for testing:
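For example:

```shell
# Is Flux fetching the repo?
flux --context "$WORKER" get sources git

# Kustomization status and any errors
flux --context "$WORKER" get kustomizations

# Trigger an immediate reconciliation
flux --context "$WORKER" reconcile source git kratix-state-store
```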
Note: You may occasionally see the resources Kustomization show as 'not ready' due to dependency timing checks with fast reconciliation intervals. This is normal Flux behaviour and doesn't affect functionality.
Your Git repository becomes a powerful tool for debugging. You can see precisely what Kratix generated and when, making it much easier to identify and resolve issues.
Scaling beyond simple services
Once you have Redis running with the basic workflow, you can begin exploring more advanced use cases. Kratix also supports compound Promises, which let you spin up a complete application stack from a single request.
For example, you could define a Promise called development-environment that sets up:
A PostgreSQL database
A Redis cache
A monitoring stack
An application runtime environment
Developers could request this complete environment with:
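A hypothetical request against such a Promise might look like this; the API group, kind, and fields are entirely illustrative, since they would be defined by the compound Promise you write:

```yaml
apiVersion: platform.example.org/v1alpha1
kind: development-environment
metadata:
  name: my-team-dev
  namespace: default
spec:
  size: standard
```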
Kratix would orchestrate the creation of multiple sub-Promises, each handling a piece of the overall environment. Flux would deploy all the resources in the correct order across your clusters, and developers would get a complete, consistent development environment without needing to understand the underlying complexity.
Production considerations
Before rolling this out to your teams, consider several production readiness aspects:
Repository organisation: Structure your state store to support multiple environments and clear separation:
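One possible layout, keeping Kratix's dependencies/resources split per destination (cluster names are illustrative):

```
destinations/
├── production-cluster/
│   ├── dependencies/
│   └── resources/
├── staging-cluster/
│   ├── dependencies/
│   └── resources/
└── dev-cluster/
    ├── dependencies/
    └── resources/
```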
Security considerations:
Use dedicated service accounts with minimal permissions
Implement branch protection rules for your state store
Audit Promise pipeline containers before installation
Consider rotating GitHub tokens regularly
Monitoring and observability:
Set up alerts for:
Failed Promise workflows
Flux reconciliation errors
Unusual commit activity in your state store
Resource usage spikes from Promise pipelines
Transforming platform operations at scale
What you've built here extends far beyond automated Redis deployment. This is a foundation for true platform engineering at scale. Your developers get self-service infrastructure that's as easy to request as deploying an application. Your platform team gets standardised, auditable processes that reduce toil and eliminate configuration drift.
The combination of Kratix and Flux creates a powerful multiplier effect. Kratix handles the abstraction and orchestration that platform teams need, while Flux provides the reliable, GitOps-native deployment engine that teams already trust.
Start with simple services like Redis and PostgreSQL. Once your teams experience how seamless the workflow becomes, you'll see requests for more complex promises. That's when you can start building compound Promises that deliver complete development environments or application stacks with a single request.
Ready to get started? Install that Redis Promise and watch your first GitOps-delivered service come online. Then start thinking about what other manual processes you can eliminate through Promises.
For more advanced patterns and enterprise features, check out the Kratix documentation and explore Syntasso Kratix Enterprise for production-ready platform solutions.

