How to Create a Kubeasy Challenge with Dev Mode
A complete guide for challenge creators: from idea to Pull Request using the kubeasy dev mode commands (v2.5.3+).
Paul Brissaud

Got an idea for a Kubeasy challenge? A production incident you've been through, an RBAC config that always trips people up, a Job that fails silently? Here's how to turn it into a playable challenge in minutes — no account needed, no API, nothing to deploy online.
The dev mode is a dedicated subsystem of the Kubeasy CLI built for challenge creators. It lets you scaffold, test, and iterate locally against your Kind cluster, using the same tools you'd use in real production. No login, no OCI registry, no backend required.
Available since kubeasy-cli v2.5.3, the dev mode turns challenge contribution into a smooth workflow: create → lint → apply → validate → PR.
Prerequisites
Before starting, you need:
- kubeasy CLI v2.5.3 or later installed
- Docker running (required by Kind)
- The local cluster initialized at least once:
kubeasy setup
- The challenges repo forked on GitHub:
git clone https://github.com/<your-username>/challenges.git
cd challenges
git checkout -b challenge/memory-pressure

kubeasy setup creates a Kind (Kubernetes IN Docker) cluster named kubeasy and installs the required infrastructure — Kyverno (policy engine) and a local volume provisioner.
Scaffolding — kubeasy dev create
Start by generating the challenge structure. dev create works in two modes: interactive (TTY prompts) or non-interactive (flags).
In interactive mode, the CLI walks you through name, type, theme, difficulty, and estimated time. The slug is auto-generated from the name. In non-interactive mode:
kubeasy dev create \
--name "Memory Pressure" \
--type fix \
--theme resources-scaling \
--difficulty easy \
--estimated-time 20 \
--with-manifests

The --with-manifests flag generates starter deployment.yaml and service.yaml files. There are 3 challenge types — fix, build, and migrate — whose behavior is described below.
The generated structure looks like this:
memory-pressure/
├── challenge.yaml ← metadata + validations
├── manifests/ ← initial cluster state
│ ├── deployment.yaml
│ └── service.yaml (if --with-manifests)
└── policies/ ← Kyverno policies (bypass protection)

Anatomy of a good challenge.yaml
This is the central file. It holds metadata, the description, and the validations (called objectives).
title: "Memory Pressure"
type: "fix"
theme: "resources-scaling"
difficulty: "easy"
estimatedTime: 20
description: |
A data processing application deployed in production
has been restarting in a loop since this morning. The team
hasn't touched the code.
initialSituation: |
A pod is deployed in the namespace. It starts, runs for a
few seconds, then gets killed. It enters CrashLoopBackOff
and keeps restarting.
objective: |
Make the pod stable. Understand why Kubernetes is killing it.
objectives:
- key: pod-running
title: "Application Running"
description: "The pod must be in Ready state"
order: 1
type: condition
spec:
target:
kind: Pod
labelSelector:
app: memory-pressure
checks:
- type: Ready
status: "True"
- key: no-crashes
title: "Stable Operation"
description: "No crash or eviction events"
order: 2
type: event
spec:
target:
kind: Pod
labelSelector:
app: memory-pressure
forbiddenReasons:
- "OOMKilled"
- "Evicted"
sinceSeconds: 300

description — Describes the symptoms, never the cause. The user must investigate.
initialSituation — What the user sees when they arrive: cluster state, deployed resources. No hints about the problem.
objective — The goal to achieve, not the method.
objectives — Validations that verify the solution. They must test the outcome, not the implementation.
⚠️ Hard rule: never reveal the cause in objective titles or descriptions.
The behavior also varies by type. For fix, the initial state is broken — manifests have an intentional bug, the user diagnoses and fixes. For build, the environment is empty or minimal — the user creates missing resources. For migrate, the initial state is working (v1 config) and the user must evolve it to v2 without breaking anything.
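By contrast, for a build challenge the objectives verify resources the player is asked to create. Here is a hypothetical sketch reusing the objective schema shown above — the key, labels, and the Available condition are illustrative, not taken from a real challenge:

```yaml
objectives:
  - key: api-deployed
    title: "API Deployed"
    description: "The Deployment must be available"
    order: 1
    type: condition
    spec:
      target:
        kind: Deployment
        labelSelector:
          app: my-api
      checks:
        - type: Available
          status: "True"
```

The same rule applies: the check targets the outcome (the Deployment becomes available), not how the player wrote the manifest.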
Writing the manifests
The files in manifests/ define the initial cluster state when a user starts the challenge. For a fix challenge, the manifest should reflect a realistic "going wrong" state:
# manifests/deployment.yaml — intentional bug: memory limits too low
apiVersion: apps/v1
kind: Deployment
metadata:
name: memory-pressure
spec:
replicas: 1
selector:
matchLabels:
app: memory-pressure
template:
metadata:
labels:
app: memory-pressure
spec:
containers:
- name: app
image: kubeasy/memory-hog:v1
resources:
limits:
memory: "10Mi" # too low — guaranteed OOMKilled

Keep it to one problem at a time, keep the state realistic (like an actual prod incident), and mark the intentional bug with an internal comment (removed before the PR).
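When testing your challenge end to end, you also need the expected fix at hand. For this example it is simply a realistic memory limit — a sketch, with values chosen purely for illustration:

```yaml
# The player's expected fix: raise the memory limit
# (64Mi/128Mi are illustrative values, not prescribed by the challenge)
resources:
  requests:
    memory: "64Mi"
  limits:
    memory: "128Mi"
```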
Custom Docker Images
Some challenges require a broken application that doesn't exist as a public image — a process that eats memory, an API that returns wrong data, a server with a misconfigured TLS cert. For these cases, you can ship a custom Docker image directly in your challenge.
Just add an image/ directory with a Dockerfile at the root of your challenge. When kubeasy dev apply detects it, it automatically runs docker build, exports the image as a tar archive, and loads it directly into every node of the Kind cluster — no registry, no docker push.
memory-pressure/
├── challenge.yaml
├── manifests/
├── policies/
└── image/
├── Dockerfile
└── app.py

The image tag is always <slug>:latest. Reference it in your manifest with imagePullPolicy: Never — without it, Kubernetes will try to pull from a registry and fail with ImagePullBackOff.
containers:
- name: app
image: memory-pressure:latest
imagePullPolicy: Never

Example — a memory hog that reliably OOMKills with limits set too low:
# image/Dockerfile
FROM python:3.11-slim
COPY app.py /app.py
CMD ["python", "/app.py"]

# image/app.py
import time
data = []
while True:
data.append(" " * 10_000_000) # ~10MB per iteration
time.sleep(0.1)

The image is rebuilt and reloaded on every kubeasy dev apply, including in watch mode.
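A quick back-of-envelope check — plain arithmetic on the numbers above, ignoring interpreter overhead — confirms the bug fires almost immediately:

```python
# Rough estimate of how quickly app.py exceeds the 10Mi limit
# from the manifest (10Mi = 10 * 1024 * 1024 bytes).
LIMIT_BYTES = 10 * 1024 * 1024   # container memory limit
CHUNK_BYTES = 10_000_000         # bytes appended per loop iteration
SLEEP_SECONDS = 0.1              # pause between iterations

# First iteration whose cumulative allocation crosses the limit
iterations_to_oom = LIMIT_BYTES // CHUNK_BYTES + 1
time_to_oom = iterations_to_oom * SLEEP_SECONDS

print(iterations_to_oom)         # 2
print(round(time_to_oom, 1))     # 0.2
```

So the pod is OOMKilled within a fraction of a second of starting, which is exactly the reliable CrashLoopBackOff a fix challenge wants.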
Validate the structure — kubeasy dev lint
Before deploying anything, validate the challenge.yaml with the built-in linter. No cluster required.
kubeasy dev lint memory-pressure

The linter checks required fields, valid values (type, theme, difficulty), and objective structure. Always run it before deploying — it's fast and avoids unnecessary round-trips with the cluster.
Deploy and test
kubeasy dev apply memory-pressure --clean

--clean deletes existing resources before redeploying — useful when iterating with modified manifests. Use dev status and dev logs to confirm the challenge is in the expected broken state:
kubeasy dev status memory-pressure # shows pods + recent events
kubeasy dev logs memory-pressure --follow

Then run the validations:
kubeasy dev validate memory-pressure

For rapid iteration, open two terminals — terminal 1 auto-redeploys on file changes, terminal 2 re-validates every 5 seconds:
kubeasy dev apply memory-pressure --watch # terminal 1
kubeasy dev validate memory-pressure --watch # terminal 2

Or do everything in one command:
kubeasy dev test memory-pressure --clean --watch

Dev commands quick reference
- kubeasy dev create — scaffold a new challenge (interactive prompts or flags)
- kubeasy dev lint — validate the challenge.yaml structure (no cluster required)
- kubeasy dev apply — deploy the manifests (and build/load any image/) into the Kind cluster; supports --clean and --watch
- kubeasy dev status — show pods and recent events
- kubeasy dev logs — stream the challenge pods' logs; supports --follow
- kubeasy dev validate — run the objectives; supports --watch
- kubeasy dev test — apply + validate in a single command
Challenge design best practices
Kubeasy challenges are built on 4 principles:
- Realism over pedagogy — The challenge should feel like a real production incident, not a classroom exercise.
- Preserve the mystery — The description shows symptoms. Never the cause. The user investigates with kubectl, logs, and events.
- Autonomy first — The user solves with standard Kubernetes tools. No artificial constraints on the approach.
- Failure is learning — The environment is safe. Break things, start over. Validations give feedback without revealing the solution.
There are 5 validation types available; the example above uses two of them: condition (resource status checks) and event (forbidden event reasons).
Kyverno policies in policies/ prevent obvious bypasses. But don't over-constrain — users should be free to modify resource limits, add env vars, change probes, scale deployments.
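As an illustration of such a guardrail, here is a hypothetical Kyverno policy — the policy name, message, and image pattern are invented for this example — that stops players from simply swapping out the broken image instead of fixing the limits:

```yaml
# policies/keep-image.yaml — hypothetical bypass guardrail
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: memory-pressure-keep-image
spec:
  validationFailureAction: Enforce
  rules:
    - name: keep-original-image
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Replacing the application image is not the intended fix."
        pattern:
          spec:
            containers:
              # wildcard tag: any version of the original image is allowed
              - image: "kubeasy/memory-hog:*"
```

Note the policy constrains only the image, in line with the advice above: limits, env vars, probes, and replica counts stay freely editable.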
Pre-PR checklist:
- kubeasy dev lint passes with no errors
- kubeasy dev test --clean passes after applying the fix

Submitting your contribution
Once the challenge is tested and validated:
git add memory-pressure/
git commit -m "feat: add memory-pressure challenge"
git push origin challenge/memory-pressure

Open a Pull Request on github.com/kubeasy-dev/challenges. After merge, the CI/CD pipeline takes over: the challenge is built as an OCI artifact, published to ghcr.io/kubeasy-dev/challenges/memory-pressure:latest, and becomes available via kubeasy challenge start memory-pressure for all users.
Your challenge could be the next one played by thousands of developers learning Kubernetes.
Wrapping up
The full workflow fits in a single line: create → lint → apply → validate → PR. From a blank directory to a challenge playable by anyone in the world, everything happens locally, with tools you already know.
Dev mode was built so that contributing to Kubeasy feels as natural as writing code — no friction, no external dependencies, no waiting on a pipeline to find out your YAML is broken. Just you, your cluster, and an idea worth sharing.
If you want to discuss a challenge idea before diving in, the Kubeasy Slack is the right place. And if you're ready to go, the challenges repo is open — PRs welcome.
