Reduce Kubernetes Cluster Costs on AWS
When you spin up a Kubernetes cluster for your own personal blog, you are on a tight budget. A Kubernetes cluster can blow a hole in your pocket if you don't pay attention to the AWS services it uses under the hood.
For this blog, the Kubernetes cluster was provisioned using kops. There are settings you can change only during creation, and there are also things you can change post creation. For the former, you had better plan early, before you create the cluster.
Mainly, there are 2 things you need to pay attention to for cost reduction:
- EC2 instances.
- EBS volumes.
EC2 Instances
The minimum number of instances you need to run a proper cluster is 3: 1 master and 2 nodes. By default, each EC2 instance is an on-demand instance, so to reduce cost you need to switch to AWS Spot Instances. For this, you have to explicitly set a maximum price for your master and node instances. This setting can be done either during creation or post creation.
New Cluster
If you want to set it during cluster creation, you will need to run a few commands.
First you will need to generate the cluster yaml file.
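With kops you can render the cluster spec to a file instead of creating anything by combining `--dry-run` with `-o yaml`. A minimal sketch, using the `k8s.blog.taufek.dev` cluster name from this post; the state store bucket, availability zone, and counts are my own placeholders, not the original values:

```shell
# Render the cluster spec to a yaml file without creating resources.
# State store bucket and zone below are assumptions; use your own.
kops create cluster \
  --name=k8s.blog.taufek.dev \
  --state=s3://my-kops-state-store \
  --zones=ap-southeast-1a \
  --master-count=1 \
  --node-count=2 \
  --dry-run \
  -o yaml > k8s.blog.taufek.dev.yml
```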
Then edit the k8s.blog.taufek.dev.yml file by adding maxPrice: "#.##" for both the master and nodes settings.
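In the kops API, maxPrice is a field on each InstanceGroup spec. A sketch of what the nodes group might look like after the edit; the machine type and price here are placeholders, not values from this post:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: t3.small   # placeholder
  maxPrice: "0.01"        # max spot bid in USD/hour; the "#.##" above
  minSize: 2
  maxSize: 2
  role: Node
```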
Then run the below command to create the cluster.
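Creation from a spec file is typically a two-step kops flow: register the spec, then apply it. A sketch, with the state store name assumed:

```shell
# Register the edited spec with the kops state store.
kops create -f k8s.blog.taufek.dev.yml --state=s3://my-kops-state-store

# Apply to AWS; --yes actually provisions the resources.
kops update cluster --name=k8s.blog.taufek.dev \
  --state=s3://my-kops-state-store --yes
```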
Note: Before running the above command to create the cluster, you might want to read the EBS Volumes section below, because part of that setting is not easily changed after the cluster is created.
Existing Cluster
If you already have a running cluster, you can edit the cluster settings. First, run the below command to open up the Cluster object.
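As far as I know, maxPrice actually lives on the InstanceGroup objects rather than on the Cluster object itself, so the edit commands would look something like this (the group names are assumptions based on a typical single-zone layout):

```shell
# Opens each instance group's yaml in your $EDITOR.
kops edit ig master-ap-southeast-1a --name=k8s.blog.taufek.dev
kops edit ig nodes --name=k8s.blog.taufek.dev
```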
It will open up a cluster yaml file similar to the one from the creation step above. Add the maxPrice fields and save the file.
Then run the below command to push the change to your cluster.
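A sketch of the push; running without `--yes` first previews the pending changes:

```shell
# Preview what will change, then apply it.
kops update cluster --name=k8s.blog.taufek.dev
kops update cluster --name=k8s.blog.taufek.dev --yes

# Pricing changes only affect new instances, so roll the old ones.
kops rolling-update cluster --name=k8s.blog.taufek.dev --yes
```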
Run the below command to know when your cluster is ready.
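kops ships a validate subcommand for exactly this; something like:

```shell
# Reports whether the masters and nodes are up and healthy.
kops validate cluster --name=k8s.blog.taufek.dev
```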
EBS Volumes
kops provisions 1 volume for each instance in your cluster and 2 volumes for the etcd cluster. In my 1-master, 2-node cluster, that comes to 5 volumes. By default, the volume sizes are crazily big, at least too big for running my blog site: the master volume is 64 GB, each node volume is 128 GB, and etcd gets 2 x 20 GB volumes.
The master and node volume sizes can be changed during creation and post creation. Unfortunately, kops does not expose the etcd volume size as a command-line option during creation.
New Cluster
First, you will need to generate the cluster yaml file.
For the master and node volumes, you can specify the size via the command line above, but for the etcd cluster, you will need to modify the cluster yaml file manually.
Then edit the k8s.blog.taufek.dev.yml file.
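The etcd volume size sits under etcdClusters in the Cluster spec; each etcd member accepts a volumeSize in GB. A sketch of the relevant fragment, assuming a single master instance group:

```yaml
etcdClusters:
- name: main
  etcdMembers:
  - name: a
    instanceGroup: master-ap-southeast-1a
    volumeSize: 8   # GB; the default is 20
- name: events
  etcdMembers:
  - name: a
    instanceGroup: master-ap-southeast-1a
    volumeSize: 8   # GB; the default is 20
```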
Then run the below command to create the cluster.
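As in the EC2 section, a sketch of registering and applying the spec (state store assumed):

```shell
kops create -f k8s.blog.taufek.dev.yml --state=s3://my-kops-state-store
kops update cluster --name=k8s.blog.taufek.dev --yes
```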
Existing Cluster
Run the below command to open up the Cluster object.
Within the Cluster object, set the below to configure the master and node volume sizes:
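As far as I can tell, the root volume size is actually a per-InstanceGroup field (spec.rootVolumeSize) rather than a Cluster-level one, so the fragment to set would look roughly like:

```yaml
# In each InstanceGroup spec (kops edit ig <name>):
spec:
  rootVolumeSize: 8   # GB; the master defaults to 64, nodes to 128
```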
Then run the below command to push the change to your cluster.
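A sketch of the push, including the rolling update that replaces the instances (and with them, their root volumes):

```shell
kops update cluster --name=k8s.blog.taufek.dev --yes
kops rolling-update cluster --name=k8s.blog.taufek.dev --yes
```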
For etcd volumes, you can't use kops to modify the volume size, but there is a workaround mentioned in this article. I personally used this workaround to reduce my etcd cluster volumes.
Conclusions
Before I carried out the above cost-optimization steps, this was the cluster setup:
- 1 master instance (on-demand)
- 2 node instances (on-demand)
- 1 master EBS volume, 64 GB
- 2 node EBS volumes, 128 GB each
- 2 etcd EBS volumes, 20 GB each
Now the cluster setup looks like this:
- 1 master instance (spot, up to 90% price reduction)
- 2 node instances (spot, up to 90% price reduction)
- 1 master EBS volume, 8 GB
- 2 node EBS volumes, 8 GB each
- 2 etcd EBS volumes, 8 GB each
At the time of writing this post, my cluster is just 2 days old. I will update this space with the monthly costs once I pay the AWS bill at the end of this month.