The first article in this series covered the theory behind the emergence of internal platforms, platform teams, and paving golden paths.
Now you will have the chance to pave your own golden path using Kubernetes clusters as the infrastructure. We have chosen Kubernetes because it has become a commodity, available from the big clouds (Amazon, Azure, Google) and from on-prem/software players (VMware, Red Hat, Rancher).
To pave your golden path you will build a platform across Kubernetes clusters using Kratix as a framework. You will then populate your platform with the set of Kratix Promises required to pave your golden path. Each Promise is a definition of something as-a-Service: for example, databases-as-a-Service, webservers-as-a-Service, or identity-as-a-Service. Promises can call other Promises to pave golden paths.
The first task, as a member of a platform team, is to collaborate with the application teams you're serving to understand what their highest priority needs are. For the example here, imagine the application teams are producing container images for their applications, backed by relational data. They’re happy to leave all settings and parameters to the platform team (you); what they want is application serving, with database access, on-demand.
You select Knative for application serving and Postgres for relational data. You could deploy a Promise to Kratix for each of these (one for Knative and one for Postgres) and leave it to the application teams to get them working together. But as a customer-centric platform team, you want to lower their cognitive load by giving them a single Promise for the complete solution: Knative and Postgres working together.
Start by deploying Kratix so you have a framework for adding Promises, as outlined below.
Next, run the following commands to clone the Kratix GitHub repository and start two clusters: one for Kratix’s Promise API (the platform cluster) and one to run the workloads for your application teams (the worker cluster). You can add more worker clusters later if you need to.
git clone https://github.com/syntasso/kratix.git
cd kratix
./scripts/quick-start.sh
./scripts/prepare-platform-cluster-as-worker.sh
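Before moving on, you may want to confirm both clusters are up and reachable. This assumes the quick-start script created two KinD clusters named platform and worker, which is what provides the kind-platform and kind-worker kubectl contexts used throughout this article.
kind get clusters    # should list: platform, worker
kubectl --context kind-platform get nodes
kubectl --context kind-worker get nodes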
The Kratix API is now available on the platform cluster.
kubectl --context=kind-platform get crds | grep promise
promises.platform.kratix.io 2022-08-17T17:09:43Z
Now you need to install your Promises to populate your platform with valuable services for your customers. As mentioned, you could first install a Knative Promise and then a Postgres Promise, and let your application teams build out their own environments from those two resources. Instead, we've placed those two Promises inside a single “paved path” Promise, giving one-request access to Knative+Postgres environments on-demand. Golden! Let’s install the paved-path Promise in your Kratix deployment:
kubectl --context kind-platform apply \
--filename samples/paved-path-demo/paved-path-demo-promise.yaml
You have now taught your Kratix-based platform how to provide paved paths (Knative with Postgres) as-a-service.
kubectl --context=kind-platform get crds | grep paved
paved-path-demos.example.promise.syntasso.io 2022-08-17T17:11:54Z
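As an extra sanity check, you can also list the Promises installed on the platform cluster; the paved-path-demo Promise should appear (the exact name shown comes from the sample Promise’s metadata).
kubectl --context kind-platform get promises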
Once your Promise deployment has converged, you should see the Postgres Operator, defined within the Postgres Promise, starting on the worker cluster.
kubectl --context kind-worker get pods --watch
NAMESPACE   NAME                                 READY   STATUS    RESTARTS   AGE
default     postgres-operator-7dccdbff7c-5qc6m   1/1     Running   0          45s
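If you’re curious about everything the Promise has put in place on the worker, you can take a broader look; new namespaces and pods appear as the Promise’s worker cluster resources are applied (exactly which ones depends on the Promise’s definition).
kubectl --context kind-worker get namespaces
kubectl --context kind-worker get pods --all-namespaces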
You are now ready to deliver your paved path to your customers.
Let’s change hats from the platform team to a member of an application team, and put the platform to use. Without the platform, you would have to build your own Knative and Postgres environments, and deploying, managing, and securing complex distributed services would just slow your team down. You want to get on with building great applications! Luckily, the platform provides exactly the environments your team needs: approved, governed, and ready to work in your organisation, paving your golden path to production. Best of all, you don’t need to open a ticket and wait months for an environment; you can make one API call and an environment will be created.
kubectl --context kind-platform apply \
--filename samples/paved-path-demo/paved-path-demo-resource-request.yaml
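You can also confirm that the platform accepted your request; it should show up as a paved-path-demo resource on the platform cluster (using the fully qualified resource name from the CRD you listed earlier).
kubectl --context kind-platform get paved-path-demos.example.promise.syntasso.io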
You can now watch the pods on your worker cluster, and see the Postgres and Knative environment being created.
kubectl --context kind-worker get pods -A --watch
Let’s change hats back to the platform team. Your application teams are now able to create the environments they need, on-demand, as-a-service. What’s their next pain point where you can help out? They’re wasting their time deploying and managing CI/CD servers, specifically Jenkins. They need to be able to create Jenkins servers on-demand, when they need them. Let’s add Jenkins-as-a-service to your platform.
kubectl --context kind-platform apply \
--filename samples/jenkins/jenkins-promise.yaml
You have now taught your Kratix-based platform how to provide Jenkins as-a-service.
kubectl --context=kind-platform get crds | grep jenkins
jenkins.example.promise.syntasso.io 2022-08-17T17:11:54Z
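Your platform’s API surface now covers both services. You can list the resource types it exposes in the example.promise.syntasso.io group (the group name comes from the sample Promises used here).
kubectl --context kind-platform api-resources --api-group=example.promise.syntasso.io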
Switching roles back to the application team, you're delighted to hear that your organisation’s platform now offers CI/CD-as-a-service (Jenkins). Your platform team learns fast! Let’s spin up a Jenkins server and use it to deploy your application.
kubectl --context kind-platform apply \
--filename samples/jenkins/jenkins-resource-request.yaml
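As with the paved-path request, you can confirm the Jenkins request landed on the platform cluster (again using the fully qualified resource name from the CRD above).
kubectl --context kind-platform get jenkins.example.promise.syntasso.io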
You can watch your Jenkins being scheduled and deployed to the worker cluster.
kubectl --context kind-worker get pods -A --watch
NAMESPACE   NAME              READY   STATUS    RESTARTS   AGE
default     jenkins-example   1/1     Running   0          2s
Note: this may take a short while to start.
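If you’d rather not keep watching, you can block until the Jenkins pod reports Ready; the pod name matches the one shown above, and the timeout is just a generous guess.
kubectl --context kind-worker wait pod/jenkins-example \
--for=condition=Ready --timeout=300s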
As this is a small sample local environment without credential management or gateway/mesh networking, let’s connect directly to Jenkins and log in with default credentials.
kubectl --context kind-worker port-forward jenkins-example 8080:8080
Navigate to http://localhost:8080/ and log in with the credentials you retrieve with the commands below.
(In a new console)
Username:
kubectl --context kind-worker \
get secret jenkins-operator-credentials-example \
-o 'jsonpath={.data.user}' | base64 -d
Password:
kubectl --context kind-worker \
get secret jenkins-operator-credentials-example \
-o 'jsonpath={.data.password}' | base64 -d
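If it’s more convenient, you can capture both values in shell variables instead of copying them by hand; this is just the two commands above wrapped in command substitution.
# Same secret reads as above, captured for convenience
JENKINS_USER=$(kubectl --context kind-worker get secret jenkins-operator-credentials-example -o 'jsonpath={.data.user}' | base64 -d)
JENKINS_PASSWORD=$(kubectl --context kind-worker get secret jenkins-operator-credentials-example -o 'jsonpath={.data.password}' | base64 -d)
echo "username: ${JENKINS_USER}  password: ${JENKINS_PASSWORD}"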
Once you’ve logged in, create a pipeline using the Jenkinsfile available here.
If you’re unfamiliar with the Jenkins GUI, here’s a video showing the steps involved in creating the pipeline.
Build the pipeline; when it runs successfully, the “todo” application will be deployed.
kubectl --context kind-worker get services.serving.knative.dev
NAME   URL                               LATESTCREATED   LATESTREADY   READY   REASON
todo   http://todo.default.example.com   todo-00001      todo-00001    True
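If the service isn’t reporting READY True yet, you can wait for it rather than polling; ksvc is the standard short name for Knative services.
kubectl --context kind-worker wait ksvc/todo \
--for=condition=Ready --timeout=300s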
Let’s test your application out.
kubectl \
--context kind-worker \
--namespace kourier-system \
port-forward svc/kourier 8081:80
(In a new console)
curl -H "Host: todo.default.example.com" localhost:8081
<!DOCTYPE html>
… (rest of page)
The platform team has paved a golden path for the application teams, and an application team has used the path to deploy and run their “to-do” application. Success!
You can continue to iterate from this point by collaborating and learning with the application teams, adding and refining Promises to make their lives easier. Perhaps you could create a Promise that delivers three Knative+Postgres environments (dev/stage/prod) and a Jenkins server to coordinate them, with a pipeline pre-installed. Perhaps you could add extra collaborating services as Promises, or deliver public cloud services via a Crossplane Promise. The possibilities are endless, and within each Promise’s pipeline you can encode your organisation’s security, governance, and compliance requirements whilst keeping the API each Promise offers to application teams simple and coherent. To get a deeper understanding of how Promises work, take a look at how to write a Promise.
Once you're done with the Kratix environment, you can clean up your installation by removing the KinD clusters:
kind delete clusters platform worker
Summary
We started with why: why organisations should provide internal platforms, and why, in many cases, those platforms should provide paved golden paths to production (see Paving Golden Paths on Multi-Cluster Kubernetes: Part 1 (The Theory)).
For the worked example, you first played the role of a platform team member, and deployed Kratix on Kubernetes to provide a framework for your platform. You then composed the platform by adding Promises to meet the needs of your organisation.
Switching role to an application team member, you consumed the paved path Promise and used a Promise for a CI/CD server to deploy your application.
Application teams are able to productively, securely, and efficiently deliver their software to production using golden paths paved by the platform team.
Want to learn more?
You can book time with our team or try out our workshop:
Platform Review: a free 1-hour session with experts to review your platform.
Engineering Collaboration: a free 1-hour engineering session to build your first Kratix Promise.
Build Your Platform: a self-paced workshop that will enhance your platform engineering skills with Kratix.