Building Platforms Using kro for Composition
- Abby Bangser
Amazon’s announcement of the new EKS Capabilities offering brings managed GitOps, cloud resource operators, and kro into a single, cohesive experience. In particular, the inclusion of Kube Resource Orchestrator (kro) is an exciting investment in a young, cross-cloud initiative that promises simpler, Kubernetes-native resource grouping. It is clear that Amazon sees the SIG Cloud Provider-backed initiative as a core part of the future of platform engineering.
This is a win for platform engineers. The composition of Kubernetes resources is becoming increasingly important as declarative Infrastructure as Code (IaC) tooling expands the number of objects we manage. Examples include the CNCF graduated project Crossplane and cloud-specific alternatives such as AWS Controllers for Kubernetes (ACK), which is packaged with EKS Capabilities.
With composition available as a managed service, platform teams can focus on their mission to build what is unique to their business but common to their teams. They achieve this by combining composition with encapsulation of all associated processes and decoupled delivery across any target environment.
The rise of Kubernetes-native composition
The core value of kro lies in the idea of a ResourceGraphDefinition. Each definition abstracts many Kubernetes objects behind a single API. This API specifies what users may configure when requesting an instance, which resources are created per request, how those sub-resources depend on each other, and what status should be exposed back to the users and dependent resources. kro then acts as a controller that responds to these definitions by creating a new user-facing CRD and managing requests against it through an optimised resource DAG. The simplified abstraction removes the need to juggle Helm, Kustomize, and hand-written operators while creating safe and standardised patterns.
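A sketch of what this looks like in practice, using kro's v1alpha1 API (the WebApp schema, its defaults, and the resource names here are illustrative, not taken from the source):

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp
spec:
  # The single API exposed to users: what they may configure,
  # and what status is reported back to them.
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      image: string | default="nginx:latest"
      replicas: integer | default=2
    status:
      # Surfaced from a sub-resource once it reports readiness.
      availableReplicas: ${deployment.status.availableReplicas}
  # The resources created per request. kro infers the dependency
  # DAG from the expressions that reference other resources.
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: ${schema.spec.image}
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}
        spec:
          selector:
            app: ${schema.spec.name}
          ports:
            - port: 80
```

Applying a definition like this prompts kro to generate a WebApp CRD; each WebApp instance then creates and reconciles the Deployment and Service, ordered by the references between them.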
The collaboration among, and investment from, the cloud vendors contributing to kro is a bright sign for our industry. However, challenges remain for end users adopting these frameworks. It can often feel like they are trapped in the “How to draw an owl” meme: kro helps teams sketch the ovals for the head and body, but drawing the rest of the platform owl still requires a big leap from the platform engineers doing the work.

Where kro fits in platform design
Effective platforms demonstrate results across three time-to-delivery outcomes:
- Time for a user to get a new service they depend on to deliver value
- Time to patch all instances of an existing service or capability
- Time to introduce a new business-compliant capability
Across the industry, we see platforms not only improving these metrics but fundamentally shifting beliefs about what is possible. Users are getting the tools they need to take new ideas to production in minutes, not months. A handful of engineers are managing continuous compliance and regular patching. Specialists bring their requirements directly to users without a central team bottleneck.
Universally successful platforms that deliver on these outcomes are designed around three principles:
Composition over simple abstraction
Composition enables teams to build high-value abstractions from low-level components through common abstraction APIs. kro’s ResourceGraphDefinitions offer a strong addition to existing approaches, such as Crossplane Compositions, Helm charts, and Kratix Promises.
Encapsulation of configuration, policy, and process
Enterprise platforms must provide more than resources. They need clear ways to capture all the weird and wonderful (business-critical) requirements and processes they have built over the years. Yes, this can mean declarative code, but it also means imperative API calls, operational workflows that incorporate manual steps, legacy integrations with offline systems, and, of course, interactions with non-Kubernetes resources. Safe composition depends on the ability to apply a single testable change that covers all affected systems.
Decoupled delivery across many environments
Organisations of sufficient scale and complexity need to support complex topologies, including multi-cluster Kubernetes and non-Kubernetes-based infrastructure. Platforms need to enable timely upgrades across their entire topology to reduce CVE risk while managing diverse and specialised compute, including modern options like GPUs and Functions-as-a-Service (FaaS), as well as legacy options such as mainframes or Red Hat Virtualization.
Achieving overall scalability, auditability, and resilience requires prioritising each in the proper context. Centralised planning gives control. Decentralised delivery allows scale. A platform should enable the definition of rules and enforcement in a central orchestrator, then rely on distributed deployment engines to deliver the capability in the correct places and form. This avoids the limits of tightly coupled orchestration and reduces the operational burden of scale.
kro is strong in the first principle. It offers a clear, Kubernetes-native composition that lets teams package complex deployments, hide unnecessary details, and encode organisational defaults. Features such as CEL templating show investment in helping engineers manage dependencies across Kubernetes objects when creating higher-level abstractions.
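As an illustrative sketch of that dependency management (the S3 Bucket fields are modelled on the ACK S3 controller; the resource ids and names are hypothetical), a CEL reference from one template to another resource’s status both wires the data through and establishes ordering:

```yaml
resources:
  - id: bucket
    template:
      apiVersion: s3.services.k8s.aws/v1alpha1  # ACK S3 controller CRD
      kind: Bucket
      metadata:
        name: ${schema.spec.name}-assets
  - id: config
    template:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: ${schema.spec.name}-config
      data:
        # Referencing the bucket's status creates an edge in the DAG:
        # the ConfigMap is only created once the bucket reports its ARN.
        bucketArn: ${bucket.status.ackResourceMetadata.arn}
```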
Where platforms need more than kro
It is important to acknowledge that kro does not aim to address the second or third principles. This is not a criticism. It reflects a focused scope, following the Unix philosophy of doing one thing well while integrating cleanly with the wider ecosystem.
kro is a powerful mechanism for packaging resource definitions and orchestrating them within a single cluster. It does not try to manage resources across clusters, handle workflows such as approvals, or integrate with systems such as ServiceNow, mainframes, or proprietary APIs that require imperative actions. The power comes from its Kubernetes-native design, which makes it easy to integrate with tools such as Karmada for scheduling, Kyverno for policy as code (PaC), and IaC controllers such as Crossplane.
The harder challenge is how to meet all three principles in a sustainable way. How can you make platform changes that are both quick and safe? The simplest answer is to enable encapsulated and testable packages that allow changes across infrastructure, configuration, policy, and process from a single implementation.
This is the piece of the puzzle that platform orchestration frameworks like Kratix contribute. Kratix provides a Kubernetes-native framework for delivering managed services that reflect organisational standards, with support for long-running workflows, integration with enterprise systems, and managed delivery to clusters, airgapped hardware, or mainframes. kro provides composition rather than orchestration, which allows these tools to complement each other.
Looking ahead at a growing ecosystem
The multi-vendor collaboration and Kubernetes SIG home demonstrate real momentum for kro. Each cloud provider recognises the value of a portable, Kubernetes-native model for grouping and orchestrating resources, and the importance of reducing manual dependency management for platform teams.
The next stage for organisations is understanding how kro fits into their broader architecture. kro is an important tool for composition. Ultimately, platform value comes from tying that composition to capabilities that encapsulate configuration, policy, process workflows, and decoupled deployment across diverse environments.
Emerging standards will help organisations meet the core tests of platform value: safe self-service, consistent compliance, simple fleet upgrades, and a contribution model that scales. With standards come tools that enable platform engineers to continue to reuse capabilities, collaborate more effectively, and deliver predictable behaviour across clusters and clouds.


