Kubernetes has become the default infrastructure layer for many teams, but configuring it through YAML often creates as many problems as it solves. As applications grow, YAML files become harder to read, maintain, and debug, and a single formatting mistake can block an entire deployment, which is a risky property for critical infrastructure. Every hour a team spends untangling configuration issues is an hour not spent improving the application itself.
This guide covers practical alternatives to writing raw YAML for Kubernetes resources. You will see how Kustomize, Helm, and JSON-based configurations make resource management easier, reduce human error, and improve maintainability. By the end, you will know how to move away from YAML without sacrificing the reliability of your Kubernetes deployments.
Let’s start with some of the key challenges developers face when working with YAML in Kubernetes.
Why would you want to use Kubernetes without YAML?
Complexity
Kubernetes manifests start simple but grow complex as applications scale. Basic deployments require minimal YAML configurations, but managing multiple services, networking, security policies, and resource controls across clusters introduces significant challenges. Slight variations in environment settings further complicate maintaining best practices.
Verbosity
YAML files often repeat labels, annotations, and settings across multiple configurations, increasing redundancy and maintenance effort. Updating common values like image versions across numerous files creates a risk for inconsistencies and errors.
YAML formatting
YAML's strict indentation rules make it prone to syntax errors, and worse, incorrect spacing can introduce subtle misconfigurations without triggering any failure at all. Reviewing large sets of interdependent YAML files makes these issues even harder to spot.
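For example, consider this hypothetical pod spec fragment. Outdenting nodeSelector by one level is still syntactically valid YAML, but the field now attaches to the wrong object, and depending on the validation settings in use it may be silently dropped rather than rejected:

```yaml
# Intended: schedule the pod onto SSD-backed nodes
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd

# One indentation level off: still valid YAML, but nodeSelector is now
# an unrecognized field of template rather than part of the pod spec
spec:
  template:
    nodeSelector:
      disktype: ssd
    spec: {}
```

Errors like this pass every syntax check, which is why they tend to surface only at deploy time or, worse, in production behavior.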
Alternatives to YAML for Kubernetes Configuration
YAML is the default format for Kubernetes configurations, but as applications grow, managing YAML files becomes increasingly difficult. Developers often spend more time fixing indentation issues, handling repeated values, and managing large manifests than actually deploying and scaling workloads. To solve these challenges, several tools provide alternatives that simplify Kubernetes configurations by introducing abstraction, templating, and built-in version control.
The following table highlights different use cases for Kubernetes configuration management and the tools best suited for each scenario:

These tools help reduce complexity in Kubernetes configurations:
- Kubectl is ideal for simple, direct deployments without additional setup.
- Kustomize allows modifying configurations without altering the base YAML, making it useful for environment-specific changes.
- Helm introduces templating and reusable charts, making it easier to manage complex applications and microservices.
By choosing the right tool, teams can move away from managing raw YAML files and adopt more structured, efficient ways to handle Kubernetes configurations.
Each tool serves a distinct purpose, depending on the level of abstraction and flexibility needed. Below, we break down how Kustomize and Helm simplify Kubernetes configurations and when to use them.
Kubectl
For small Kubernetes setups, kubectl lets you manage resources directly through commands instead of configuration files. You can create deployments, configure container resources, and expose services with a few commands, then scale them instantly. This makes kubectl well suited to quick tests, troubleshooting, and temporary adjustments that don't warrant editing YAML files: you can modify resource limits, update rollout strategies, and change environment variables directly.
This approach becomes hard to sustain across many deployments. Imperative changes are difficult to audit, which makes it hard to keep environments consistent, and because kubectl offers no version control, every update is manual work. Direct commands are great for fast changes, but declarative files remain the more stable way to manage configuration over time.
Kustomize
Kustomize is a Kubernetes-native tool that customizes resource files without touching the base YAML manifests. It introduces the concept of a "kustomization" that describes how existing Kubernetes resources should be adjusted. Through overlays, Kustomize lets you tailor resources to distinct environments without maintaining separate copies of the YAML. This simplifies configuration management, cuts duplication, and keeps environments consistent. Because Kustomize is built into kubectl, you can use it directly with the kubectl command.
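For instance, a common layout (the directory and resource names here are illustrative) keeps one base and a small overlay per environment; the overlay's kustomization.yaml references the base and declares only the differences:

```yaml
# overlays/prod/kustomization.yaml (illustrative)
resources:
  - ../../base        # contains deployment.yaml and its kustomization.yaml
replicas:
  - name: kustomize-app
    count: 5          # prod runs more replicas than the base defines
```

Running kubectl apply -k overlays/prod renders the base with the overlay's changes applied, while the base files themselves stay untouched.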
Helm
Helm simplifies Kubernetes deployments using charts, which package related resources into reusable templates. A Helm chart consists of YAML templates for Kubernetes resources such as Deployments, Services, and ConfigMaps. Configuration lives in a values.yaml file, and Helm renders the final manifests dynamically from the templates and those values. Chart repositories let teams share and distribute packaged Kubernetes applications, and Helm's release management features, such as upgrades, rollbacks, and history, help teams operate complex applications.
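As a minimal sketch of the templating model (the chart layout and names below are illustrative, not taken from any published chart), a template references values and values.yaml supplies the defaults:

```yaml
# templates/deployment.yaml (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}

# values.yaml
replicaCount: 3
```

At install time, any value can be overridden without editing the chart, for example helm install my-app ./chart --set replicaCount=5.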
How Can You Use JSON for Kubernetes Configurations?
Kubernetes primarily uses YAML to define resources, but JSON is also supported. Choosing between YAML and JSON depends on how structured, readable, and automation-friendly you want your configurations to be.
How Kubernetes Handles JSON-Based Configurations
To understand how JSON fits into Kubernetes, let’s look at a typical workflow. The diagram below shows how a JSON-based configuration is processed when deploying applications in Kubernetes.
- A developer writes a JSON file defining the required Kubernetes resources.
- The JSON file is applied using kubectl apply, which sends an API request to the Kubernetes API server.
- The Kubernetes Scheduler places pods on worker nodes.
- The pods then run the application as per the defined configuration.

While YAML is the default, JSON can be useful in certain cases, especially when dealing with automation, structured data, or environments where strict syntax validation is required.
Kubernetes without YAML: The JSON alternative
YAML and JSON both define Kubernetes resources, but they serve different needs. YAML is preferred for its readability, while JSON is stricter and integrates better with automation tools. Here’s a quick comparison:

JSON is useful when strict formatting is required or when configurations are generated dynamically via scripts. However, YAML’s readability makes it the standard for Kubernetes.
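Because JSON maps directly onto the data structures of most programming languages, manifests can be generated from code instead of written by hand. The sketch below is a minimal illustration of that idea, not tied to any specific tooling; the function name make_deployment and the app name json-app are arbitrary. It builds a Deployment as a plain Python dictionary and serializes it with the standard library:

```python
import json

def make_deployment(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Kubernetes Deployment as a plain dictionary."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 80}],
                        }
                    ]
                },
            },
        },
    }

# Serialize to JSON; the result can be written to a file and applied
# with `kubectl apply -f deployment.json`.
manifest = make_deployment("json-app", "nginx", replicas=2)
print(json.dumps(manifest, indent=2))
```

Generating manifests this way keeps labels and names consistent automatically, since they come from one variable rather than being repeated by hand across files.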
Example: Kubernetes Deployment in JSON
A basic Kubernetes Deployment written in JSON:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "json-dev"
  },
  "spec": {
    "replicas": 2,
    "selector": {
      "matchLabels": {
        "app": "json-app"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "json-app"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "json-container",
            "image": "nginx",
            "ports": [
              {
                "containerPort": 80
              }
            ]
          }
        ]
      }
    }
  }
}
Now, let's apply the deployment using the following command:
kubectl apply -f deployment.json
Once applied, we can verify the deployment status with:
kubectl get deployments
This will return an output similar to the following:
NAME READY UP-TO-DATE AVAILABLE AGE
json-dev 2/2 2 2 5s
This confirms that the deployment is successfully created, with two replicas running and available in the cluster.
Converting YAML to JSON
If you have an existing YAML file and need to convert it into JSON, you can use yq, a command-line YAML processor. Unlike some built-in tools, yq is an external utility designed specifically for parsing and converting YAML. It does not come pre-installed with most systems, so you need to install it separately.
To install yq, you can use package managers like Homebrew on macOS and Linux or Chocolatey on Windows.
For macOS and Linux, run brew install yq; for Windows, run choco install yq.
After installing, you can convert an existing YAML file to JSON using yq:
yq eval -o=json deployment.yaml > deployment.json
This creates a deployment.json file in your current working directory, which you can use later to create the deployment.
How Do You Get Started with Kustomize?
Kustomize helps you modify and update Kubernetes configurations through separate customization files rather than by editing the primary YAML documents. You can apply different types of modifications to your Kubernetes resources while keeping the base configuration intact.
Here, we will be using Kustomize to simplify application configuration management by applying reusable modifications without altering the base YAML files. Specifically, we will deploy an Nginx-based application using a Deployment resource.
To begin, install Kustomize. On Windows, you can use the Chocolatey package manager.
choco install kustomize # Windows (PowerShell)
If you are using macOS or Linux, you can use the Homebrew package manager.
brew install kustomize
After installing, create a deployment.yaml file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kustomize-app
  template:
    metadata:
      labels:
        app: kustomize-app
    spec:
      containers:
        - name: kustomize-container
          image: nginx
Now, we create a kustomization.yaml, which defines the resources to be managed by Kustomize. Create the file as follows:
resources:
- deployment.yaml
After defining the kustomization.yaml file, we apply the configuration to the Kubernetes cluster using:
kubectl apply -k .
After execution, Kubernetes confirms the deployment.
deployment.apps/kustomize-app created
Now, to verify the deployment, we run:
kubectl get deployments
This returns:
NAME READY UP-TO-DATE AVAILABLE AGE
kustomize-app 2/2 2 2 9s
At this point, Kustomize is managing the deployment, and any updates can be applied without modifying the base file.
Updating Image Versions
Instead of manually editing deployment.yaml, use a patch:
Create patch.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-app
spec:
  template:
    spec:
      containers:
        - name: kustomize-container
          image: nginx:latest # Updated image
Add the patch to kustomization.yaml (note that newer Kustomize releases deprecate patchesStrategicMerge in favor of the patches field, though the older form is still widely accepted):
resources:
- deployment.yaml
patchesStrategicMerge:
- patch.yaml
Apply changes using kubectl apply -k .
How Do You Use Helm to Manage Kubernetes Resources?
Helm simplifies Kubernetes deployments by using pre-configured charts, reducing the need for raw YAML configurations.
Installing Helm
To get started, install Helm using the following command:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Verify the installation:
helm version
This returns:
version.BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf57010b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
Adding and Updating the Helm Repository
Next, add the Bitnami Helm repository and update the latest charts:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
After running this, Helm confirms the update:
Hang tight while we grab the latest from your chart repositories...
Successfully got an update from the "ingress-nginx" chart repository
Successfully got an update from the "bitnami" chart repository
Update Complete. 🎉Happy Helming!🎉
Deploying an Application with Helm
Now, install an Nginx deployment using Helm:
helm install dev-nginx bitnami/nginx
To check if the deployment was successful, list the deployed charts:
helm list
This returns:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dev-nginx default 1 2025-02-24 00:00:24.3980996 +0530 IST deployed nginx-19.0.0 1.27.4
Modifying and Rolling Back a Deployment
To demonstrate Helm’s flexibility, we will modify the deployment by scaling it from three replicas to five:
helm upgrade dev-nginx bitnami/nginx --set replicaCount=5
We can confirm the change by listing the running pods:
kubectl get pods
The output should now list five running pods.
If five replicas are not needed, we can roll back to the previous version with:
helm rollback dev-nginx 1
After a successful rollback, we can check Helm’s history to verify the change:
helm history dev-nginx
This logs all changes:
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Sun Feb 23 23:12:18 superseded nginx-19.0.0 1.27.4 Install complete
2 Sun Feb 23 23:13:20 superseded nginx-19.0.0 1.27.4 Upgrade complete
3 Sun Feb 23 23:20:51 superseded nginx-19.0.0 1.27.4 Rollback to 1
From the history, we can see that revision 2 upgraded the deployment from three replicas to five, and revision 3 rolled it back to the previous state.
To confirm that the rollback successfully reverted the replica count to three, we check the running pods again:
kubectl get pods
NAME READY STATUS RESTARTS AGE
dev-nginx-d9b679778-247vh 1/1 Running 0 11h
dev-nginx-d9b679778-26596 1/1 Running 0 11h
dev-nginx-d9b679778-1jgh6 1/1 Running 0 11h
This confirms that the deployment is back to three replicas.
Managing Kubernetes Deployments Without Manifests
For small-scale deployments, Kubernetes provides direct commands through kubectl. This approach is quick and efficient for simple applications but becomes difficult to manage as deployments scale.
This section walks through creating a deployment, modifying replicas, updating configurations, and handling service exposure using direct commands.
Creating a Deployment
To start, deploy an NGINX application with three replicas using:
kubectl create deployment nginx-deployment --image=nginx --replicas=3
Kubernetes confirms the deployment creation:
deployment.apps/nginx-deployment created
At this point, three identical pods running the NGINX container are provisioned.
While managing deployments via direct commands is convenient, it comes with a few drawbacks:
- No Persistent Configuration – The deployment is not stored in a reusable YAML file.
- Difficult to Track Changes – Modifications require additional commands instead of version-controlled files.
- Limited Customization – Resource constraints, environment variables, and other settings must be manually adjusted.
Scaling the Deployment
Once deployed, you may need to scale the number of replicas up or down based on workload demands.
To increase replicas to five, run:
kubectl scale deployment nginx-deployment --replicas=5
After scaling, verify the running pods with:
kubectl get pods
The output shows five running pods:
NAME READY STATUS RESTARTS AGE
nginx-deployment-29czk 1/1 Running 0 8m14s
nginx-deployment-94xsd 1/1 Running 0 8m14s
nginx-deployment-c2kdd 1/1 Running 0 6m57s
nginx-deployment-1f9hw 1/1 Running 0 6m57s
nginx-deployment-5hwzx 1/1 Running 0 6m57s
If fewer replicas are needed, scale the deployment down to two:
kubectl scale deployment nginx-deployment --replicas=2
Check the updated replica count:
kubectl get pods
This confirms that only two pods are running:
NAME READY STATUS RESTARTS AGE
nginx-deployment-29czk 1/1 Running 0 8m14s
nginx-deployment-94xsd 1/1 Running 0 8m14s
Scaling Considerations in Hybrid Environments
In environments where Kubernetes runs alongside virtual machines, resource scaling becomes even more critical. Ensuring efficient workload distribution between Kubernetes nodes and VMs requires continuous monitoring.
This is where the metrics server plays an important role. It collects CPU and memory usage data from pods and nodes, enabling Kubernetes to make informed scaling decisions dynamically.
Using direct commands is suitable for quick setups but becomes unmanageable for large-scale applications. Integrating tools like the metrics server helps optimize scaling, making workload management more efficient.
Best Practices for Managing Kubernetes Without YAML
As Kubernetes evolves, the focus is shifting toward making configurations more maintainable, scalable, and testable. Here are some best practices to follow when using alternative configuration tools like Kustomize, Helm, and JSON:
Use Automated Tools
Tools like Kustomize and Helm help you avoid manually managing raw Kubernetes manifests, whether written in YAML or JSON. These tools simplify configuration management, reduce duplication, and enable dynamic updates across environments.
Monitor and Validate
Before applying changes, use kubectl diff to check what’s being modified and catch mistakes early. Once deployed, tools like Prometheus and Grafana help monitor resource usage, errors, and performance issues. Regular audits ensure configurations stay consistent and up to date. Adjustments should be made based on real-world usage and feedback to keep the setup reliable and easy to manage.
Keeping JSON Configurations Organized
When using JSON for Kubernetes configurations, maintaining structure and clarity is essential. Instead of defining everything in a single file, it's best to keep Deployments, Services, and ConfigMaps separate. This makes updates easier: if you need to change a replica count or update an image, you only modify the relevant file. Organizing JSON this way improves readability and isolates changes, reducing the risk of unintended errors.
Using tools like JSON Schema helps enforce consistency by validating fields and data types before configurations are applied. This prevents common mistakes and ensures that all configurations follow a standard format. Additionally, automation tools can generate or update JSON files dynamically, making it easier to manage large-scale deployments while keeping configurations structured and maintainable.
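As a lightweight alternative to a full JSON Schema validator, a few structural checks can be done with the standard library alone. The sketch below is an illustration under stated assumptions: the required fields checked here are a minimal, hypothetical subset, not the complete Kubernetes API schema.

```python
def validate_manifest(manifest):
    """Return a list of problems found in a Kubernetes manifest dict.

    Checks only a small subset of the real schema: top-level required
    fields, a non-empty metadata.name, and the type of spec.replicas.
    """
    errors = []
    for field in ("apiVersion", "kind", "metadata"):
        if field not in manifest:
            errors.append("missing required field: " + field)
    name = manifest.get("metadata", {}).get("name")
    if not isinstance(name, str) or not name:
        errors.append("metadata.name must be a non-empty string")
    replicas = manifest.get("spec", {}).get("replicas")
    if replicas is not None and not isinstance(replicas, int):
        errors.append("spec.replicas must be an integer")
    return errors

good = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "json-dev"},
    "spec": {"replicas": 2},
}
bad = {"kind": "Deployment", "metadata": {"name": ""}, "spec": {"replicas": "2"}}

print(validate_manifest(good))  # []
print(validate_manifest(bad))   # three problems reported
```

Running checks like these in CI, before kubectl apply ever sees the file, catches the most common structural mistakes while keeping the tooling dependency-free.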
Even with these practices, managing Kubernetes workloads means dealing with complex configurations - defining deployments, services, scaling policies, and networking rules, usually in YAML. While Kubernetes provides flexibility, it forces developers to spend more time maintaining infrastructure than building applications.
- YAML is fragile – A single indentation error can break deployments.
- Scaling is manual – Autoscaling requires custom configurations, metric servers, and policies.
- No easy rollback – If an update breaks a service, rolling back is tedious without Helm or GitOps.
- Resource allocation is a challenge – Setting CPU and memory limits often leads to over-provisioning or under-utilization.
As teams scale, these issues only grow. Instead of focusing on their core applications, engineers get stuck debugging YAML and managing infrastructure.
Kapstan: Kubernetes Without the Headache
Kapstan removes this complexity by offering a pre-configured Kubernetes environment that automates infrastructure management without YAML. Developers can deploy services, configure resources, and manage scaling through an intuitive UI, eliminating the need for manual Kubernetes manifests. Services can be deployed with a single click, and rollbacks are just as simple - no need to troubleshoot YAML errors or manually revert configurations. Instead of writing autoscaling policies, Kapstan dynamically adjusts CPU and memory based on real-time workload demand, preventing over-provisioning and ensuring efficient resource usage.
Managing deployments becomes easier with Kapstan’s centralized dashboard, where teams can view all running services, check status updates, and troubleshoot issues—without touching YAML. The Services tab provides an overview of all workloads, making it easy to track provisioning, deployment failures, and running services.

When debugging issues, Kapstan provides real-time service utilization metrics, including CPU and memory consumption, alongside deployment history. Developers can immediately see if a service is consuming excessive resources or facing restart issues, without needing to check Kubernetes logs manually.

Resource allocation is also straightforward. Instead of editing YAML files to set CPU and memory limits, developers can configure resources directly from Kapstan’s UI. This ensures workloads receive the resources they need without unnecessary over-provisioning.

Kapstan simplifies Kubernetes management by removing YAML complexity, automating scaling, and providing a UI-driven approach to deployment and resource configuration. Instead of spending hours managing infrastructure, teams can focus on delivering applications faster, with better efficiency and fewer deployment headaches.
For teams looking to scale Kubernetes without dealing with YAML, Kapstan provides the best balance of automation, flexibility, and control. Learn more at Kapstan’s documentation.
Conclusion
While YAML remains the default in Kubernetes, it’s not the only way to define configurations. Tools like Kustomize, Helm, and JSON-based alternatives provide better maintainability and reduce errors. Choosing the right tool depends on the complexity of the system, your environment, and your team's workflow.
FAQs
Can I use JSON instead of YAML in Kubernetes?
Yes, Kubernetes supports JSON, though YAML is more commonly used.
Is there a way to manage Kubernetes without writing configurations?
Yes, platforms like Kapstan provide managed Kubernetes environments where configurations are abstracted away.
Where are Kubernetes YAML files stored?
Kubernetes doesn’t store YAML files by default. You save them locally before applying them with kubectl apply -f <file>. Inside the cluster, the deployment details are stored in etcd, not as YAML.
Can YAML files have spaces?
Yes, YAML uses indentation with spaces (not tabs). Incorrect spacing will break the file, so always use consistent indentation.
What is kind in Kubernetes YAML?
kind specifies the type of Kubernetes resource being defined, like Deployment, Service, or Pod. It tells Kubernetes what to create or manage.