Managing AWS Resources in Kubernetes with ACK
Kubernetes apps often require supporting resources like databases, message queues, and object stores. AWS provides managed services we can use for these, but provisioning them and integrating them with Kubernetes is usually complex and time-consuming. Traditionally I used Terraform for this, but that approach was disconnected from the application and required careful orchestration and ordering (run Terraform first, then deploy the app, then run Terraform again, and so on).
ACK (AWS Controllers for Kubernetes) is an open-source project from AWS, launched a few years ago, that solves exactly this problem. Besides ACK, there are a few other solutions:
- Crossplane: An open-source Kubernetes project that allows you to manage cloud infrastructure and services declaratively. Unlike ACK, Crossplane provides a more cloud-agnostic approach and supports multiple cloud providers.
- Kubernetes Service Catalog: A Kubernetes extension that enables integration with cloud services using the Open Service Broker API. However, it requires the cloud provider to implement a broker.
- Terraform Kubernetes Operator: This approach uses Terraform to manage cloud resources through Kubernetes. It provides more flexibility but requires Terraform state management.
These are all good solutions, but let’s see where ACK shines:
- Tightly integrated with AWS: ACK is developed and maintained by AWS, ensuring compatibility and updates aligned with AWS services.
- Declarative Kubernetes approach: Resources are managed using Kubernetes manifests, aligning with Kubernetes-native workflows.
- IAM permissions management: Handles permissions automatically via IAM, making it secure and manageable.
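On EKS, this typically works through IAM Roles for Service Accounts (IRSA): the controller's Kubernetes service account is annotated with the IAM role it should assume, so no long-lived credentials are stored in the cluster. A minimal sketch, where the role ARN and account ID are placeholders and the service account name depends on the controller you install:

```yaml
# Hypothetical IRSA wiring for an ACK controller. The role ARN below is a
# placeholder; the service account name varies per controller chart.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ack-s3-controller
  namespace: ack-system
  annotations:
    # IAM role that grants the controller permission to manage S3 buckets
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/ack-s3-controller-role
```

The IAM role itself needs a trust policy for your cluster's OIDC provider and a policy granting access to the target service.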
Next, we are going to look at how to install and use ACK in an existing Kubernetes cluster. We will use the Helm-based installation, so make sure you have Helm available if you don't already.
Install ACK Controllers
ACK provides controllers for multiple AWS services. We can either install them one by one (only the ones we intend to use) or install all available controllers. Here are the basic steps:
Add the ACK Helm repository:
helm repo add ack https://aws-controllers-k8s.github.io/helm-charts
helm repo update
Install all available ACK controllers:
helm install ack-all ack/ack-all --namespace ack-system --create-namespace
This will deploy controllers for all supported AWS services within the ack-system namespace.
If you want to install only specific controllers, you can replace ack-all with the name of the specific controller. For example, to install both the S3 and SQS controllers:
helm install ack-s3 ack/s3-controller --namespace ack-system --create-namespace
helm install ack-sqs ack/sqs-controller --namespace ack-system --create-namespace
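Each controller also needs to know which AWS region to create resources in. A hedged sketch, assuming the chart exposes an aws.region value (the region here is just an example):

```shell
# Install the S3 controller pinned to a region; us-east-1 is a placeholder,
# replace it with the region where your resources should live.
helm install ack-s3 ack/s3-controller \
  --namespace ack-system --create-namespace \
  --set aws.region=us-east-1
```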
ACK currently supports controllers for many AWS services:
- Amazon S3
- Amazon SQS
- Amazon SNS
- Amazon RDS
- Amazon DynamoDB
- Amazon ElastiCache
- etc.
To see the full list of supported services, see: https://aws-controllers-k8s.github.io/community/docs/community/services/
Next, let’s verify the installation:
kubectl get pods -n ack-system
and ensure all controllers you installed are up and running.
Now let's use ACK to create some resources and see how easy it is:
Deploy an SQS Queue
Create a YAML manifest for the SQS queue:
apiVersion: sqs.services.k8s.aws/v1alpha1
kind: Queue
metadata:
  name: test-queue
spec:
  queueName: test-queue
Apply the configuration:
kubectl apply -f test-queue.yaml
Verify the queue creation:
kubectl get queues.sqs.services.k8s.aws
You should see test-queue in the output.
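Your application will usually need the queue URL. ACK provides a FieldExport resource that copies a field from a resource's status into a ConfigMap or Secret your app can mount. A sketch under the assumption that a ConfigMap named queue-config already exists in the same namespace (ConfigMap and key names are illustrative):

```yaml
# Hypothetical FieldExport that publishes the queue URL from the Queue's
# status into an existing ConfigMap named queue-config.
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: export-queue-url
spec:
  to:
    name: queue-config
    kind: configmap
  from:
    # Path into the source resource's status
    path: ".status.queueURL"
    resource:
      group: sqs.services.k8s.aws
      kind: Queue
      name: test-queue
```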
Deploy an S3 Bucket
Create a YAML manifest for the S3 bucket (S3 bucket names must be globally unique, so replace test-bucket-unique-name with a name of your own):
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: test-bucket-unique-name
spec:
  name: test-bucket-unique-name
Apply the configuration:
kubectl apply -f test-bucket.yaml
Verify the bucket creation:
kubectl get buckets.s3.services.k8s.aws
You should see test-bucket-unique-name in the output.
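The controller also records details about the created AWS resource in the object's status. For example, the ARN should be available under the common ACK status field ackResourceMetadata; a quick check (assuming the bucket from the example above):

```shell
# Print the ARN that the controller recorded for the bucket
kubectl get buckets.s3.services.k8s.aws test-bucket-unique-name \
  -o jsonpath='{.status.ackResourceMetadata.arn}'
```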
And that's it; it really is that simple.
Note: in practice, these definitions would live alongside your application's Helm chart (in the same repository) and be deployed by the same pipeline that deploys the application (using Argo CD or similar).