k8tre/k8tre-aws


K8TRE AWS base infrastructure


Deploy AWS infrastructure using Terraform to support K8TRE.

Prerequisites

  • Administrator access to an AWS account.
  • Ideally you should have access to a domain name so you can set up a wildcard hostname, e.g. *.k8tre.example.org.
  • For production we strongly recommend an AWS Organisation with security policies and guardrails, or equivalent.

First time

You must first create an S3 bucket to store the Terraform state file. Activate your AWS credentials in your shell environment, edit the resource.aws_s3_bucket.bucket bucket name in bootstrap/backend.tf, then:

cd bootstrap
terraform init
terraform apply
cd ..
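The bucket resource you edit in bootstrap/backend.tf should look something like the following sketch (the bucket name here is a placeholder; S3 bucket names must be globally unique, and the real resource may carry additional arguments):

```hcl
resource "aws_s3_bucket" "bucket" {
  # Placeholder: replace with a globally unique bucket name of your own
  bucket = "my-org-k8tre-terraform-state"
}
```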

Deploy Amazon Elastic Kubernetes Service (EKS)

By default this will deploy two EKS clusters:

  • k8tre-dev-argocd is where ArgoCD will run
  • k8tre-dev is where K8TRE will be deployed

IAM roles and pod identities are set up to allow ArgoCD, running in the k8tre-dev-argocd cluster, to have admin access to the k8tre-dev cluster.

Configuration

Edit provider.tf. You must modify the terraform.backend.s3 bucket to match the one in bootstrap/backend.tf.
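As a sketch, the backend block in provider.tf looks something like this (the bucket name is a placeholder; keep whatever key and region the file already uses):

```hcl
terraform {
  backend "s3" {
    # Must match the bucket created by bootstrap/backend.tf
    bucket = "my-org-k8tre-terraform-state"
    key    = "terraform.tfstate" # placeholder: keep the existing key
    region = "eu-west-2"
  }
}
```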

You can install K8TRE AWS with no changes, but you will most likely want to set some variables. Either modify variables.tf, or copy overrides.tfvars-example to overrides.tfvars and edit.

Particularly important variables include:

  • dns_domain: The domain for K8TRE, e.g. k8tre.example.org
  • request_certificate: K8TRE requires an HTTPS certificate stored in AWS ACM. Set this to acm to request a proper certificate instead of using a self-signed one.
  • number_availability_zones: By default the deployed clusters run in a single availability zone, because ReadWriteOnce persistent volumes are backed by EBS volumes, which are tied to a single AZ. Increasing this provides more resilience to AWS outages, at the expense of needing sufficient nodes in every AZ: once an EBS volume has been provisioned for a pod, that pod can only ever run in that volume's AZ.
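For example, a minimal overrides.tfvars setting the variables above might look like this (the domain is a placeholder):

```hcl
dns_domain                = "k8tre.example.org" # placeholder domain
request_certificate       = "acm"               # request a real ACM certificate
number_availability_zones = 1
```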

Run Terraform

Activate your AWS credentials in your shell environment. Terraform must be applied in several stages. This is because Terraform needs to resolve some resources before running, but some of these resources don't initially exist.

Initialise Terraform providers and modules:

terraform init

Deploy the EKS cluster control plane, a Route 53 Private Zone, EFS, and HTTPS certificate:

terraform apply -var-file=overrides.tfvars -var deployment_stage=0

If you set request_certificate = "acm" then create the DNS validation records shown in the output.
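ACM DNS validation records are CNAMEs; the output will contain entries along these lines (the names and targets below are entirely illustrative placeholders):

```
_3a1b2c.k8tre.example.org.  CNAME  _d4e5f6.acm-validations.aws.
```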

Deploy EKS compute nodes, and Cilium:

terraform apply -var-file=overrides.tfvars -var deployment_stage=1

Deploy ArgoCD and some other prerequisites:

terraform apply -var-file=overrides.tfvars -var deployment_stage=2

Deploy K8TRE:

terraform apply -var-file=overrides.tfvars -var deployment_stage=3

If any commands fail or time out, try rerunning them.

K8TRE secrets

K8TRE requires several secrets in AWS SSM, such as credentials for applications. You can use the create-ci-secrets.py script in the K8TRE repository to create them:

uv run create-ci-secrets.py --backend aws-ssm --region eu-west-2

Kubernetes access

terraform apply should display the command to create a kubeconfig file for the k8tre-dev and k8tre-dev-argocd clusters.
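If you need to construct the kubeconfig yourself, it can typically be created with the AWS CLI (the region shown is the default from variables.tf; adjust if you changed it):

```shell
aws eks update-kubeconfig --region eu-west-2 --name k8tre-dev
aws eks update-kubeconfig --region eu-west-2 --name k8tre-dev-argocd
```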

ArgoCD access

For convenience you can run ./argocd-portforward.sh to start a port-forward to the ArgoCD web interface. Open http://localhost:8080 in your browser and log in with username admin and the password displayed by the script.

If any Applications are not healthy check them, and if necessary try forcing a sync, or forcing broken resources to be recreated.

K8TRE access

K8TRE sets up a private Route 53 DNS zone and configures the K8TRE VPC to use it. Either:

  • Create an EC2 desktop instance or workspace attached to the VPC and connect to https://portal.k8tre.example.org
  • Create an external application load balancer, see #32

K8TRE deployment overview


This deployment requires you to have administrative access to an AWS account, but assumes your AWS organisation and your DNS infrastructure are managed by a separate entity from the one deploying K8TRE.

It does not attempt to configure anything outside this single AWS account, nor does it configure any public DNS. We recommend you use an ACM-managed public certificate. This deployment can request a certificate for you, but you must set up the DNS validation records yourself. Once this is done you can proceed with deploying K8TRE, and the internal Application Load Balancer created by K8TRE should automatically use the certificate.

EKS is deployed in a private subnet, with NAT gateway to a public subnet. By default the cluster has a single EKS node group in a single subnet (single availability zone) to reduce costs, and to avoid multi-AZ storage. EKS Pod Identities allow specified Kubernetes service accounts to access AWS APIs.

A prefix list ${var.cluster_name}-service-access-cidrs is provided for convenience. It is not used in any Terraform resource, but can be referenced in other resources, such as Application Load Balancers deployed in EKS.
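For example, a hypothetical security group rule (all resource names below are illustrative, not part of this repository) could allow HTTPS from the prefix list:

```hcl
# Look up the prefix list by its name, ${var.cluster_name}-service-access-cidrs
data "aws_ec2_managed_prefix_list" "service_access" {
  name = "k8tre-dev-service-access-cidrs"
}

# Allow HTTPS from the prefix list into an (illustrative) ALB security group
resource "aws_security_group_rule" "allow_https" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  prefix_list_ids   = [data.aws_ec2_managed_prefix_list.service_access.id]
  security_group_id = aws_security_group.alb.id # your ALB's security group
}
```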

AWS Organisation

This repository only manages the K8TRE infrastructure for a single AWS account.

We strongly recommend you set up a multi-account AWS Organisation, for example using AWS Control Tower or Landing Zone Accelerator on AWS. This organisation should include monitoring and security tooling, using either AWS services or a third-party alternative.

For example, see the Example K8TRE AWS organisation diagram.

Developer notes

To debug ArgoCD inter-cluster auth:

kubectl -nargocd exec -it deploy/argocd-server -- bash

argocd-k8s-auth aws --cluster-name k8tre-dev --role-arn arn:aws:iam::${ACCOUNT_ID}:role/k8tre-dev-eks-access

Linting

When making changes to this repository run:

terraform validate
prek run -a

prek (or pre-commit) will run some autoformatters, and TFLint.

Autogenerated documentation

Modules

| Name | Source | Version |
|------|--------|---------|
| certificate | ./certificate | n/a |
| dnsresolver | ./dnsresolver | n/a |
| efs | ./efs | n/a |
| k8tre-argocd-eks | ./k8tre-eks | n/a |
| k8tre-eks | ./k8tre-eks | n/a |
| vpc | terraform-aws-modules/vpc/aws | 6.6.0 |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| additional_admin_principals | Additional EKS admin principals | map(string) | {} | no |
| allowed_cidrs | CIDRs allowed to access K8TRE ('myip' is dynamically replaced by your current IP) | list(string) | ["myip"] | no |
| argocd_version | ArgoCD Helm chart version | string | "9.4.15" | no |
| create_public_zone | Create public DNS zone | bool | false | no |
| deployment_stage | Multi-stage deployment step. This is necessary because Terraform needs to resolve some resources before running, but those resources may not exist yet. For the first deployment you must step through these starting at '-var deployment_stage=0', then '-var deployment_stage=1'. Future deployments can use the highest number (default). | number | 3 | no |
| dns_domain | DNS domain | string | "k8tre.internal" | no |
| efs_token | EFS name creation token; if null, defaults to var.name | string | null | no |
| enable_github_oidc | Create GitHub OIDC role | bool | false | no |
| install_k8tre | Install K8TRE root app-of-apps | bool | true | no |
| k8tre_cluster_label_overrides | Additional labels merged with k8tre_cluster_labels and applied to K8TRE cluster | map(string) | {} | no |
| k8tre_cluster_labels | ArgoCD labels applied to K8TRE cluster | map(string) | {"environment": "dev", "external-dns": "aws", "secret-store": "aws", "skip-metallb": "true", "vendor": "aws"} | no |
| k8tre_github_ref | K8TRE git ref (commit/branch/tag) | string | "main" | no |
| k8tre_github_repo | K8TRE GitHub organisation and repository to install | string | "k8tre/k8tre" | no |
| name | Name used for most resources | string | "k8tre-dev" | no |
| number_availability_zones | Number of availability zones to use for EKS. EBS volumes are tied to a single AZ, so if you have multiple AZs you must ensure you always have sufficient nodes in all AZs to run all pods that use EBS. | number | 1 | no |
| private_subnets | Private subnet CIDRs to create. These IPs are used by EKS pods so make it large! | list(string) | ["10.0.64.0/18", "10.0.128.0/18"] | no |
| public_subnets | Public subnet CIDRs to create | list(string) | ["10.0.1.0/24", "10.0.2.0/24"] | no |
| region | AWS region | string | "eu-west-2" | no |
| request_certificate | Request an ACM certificate (requires manual DNS validation), create a self-signed certificate, or none (fully manage the certificate yourself) | string | "selfsigned" | no |
| vpc_cidr | VPC CIDR to create | string | "10.0.0.0/16" | no |

Outputs

| Name | Description |
|------|-------------|
| dns_validation_records | DNS validation records to be created for ACM certificate |
| efs_token | EFS name creation token |
| k8tre_argocd_cluster_name | K8TRE dev cluster name |
| k8tre_cluster_name | K8TRE dev cluster name |
| k8tre_eks_access_role | K8TRE EKS deployment role ARN |
| kubeconfig_command_k8tre-argocd-dev | Create kubeconfig for k8tre-argocd-dev |
| kubeconfig_command_k8tre-dev | Create kubeconfig for k8tre-dev |
| name | Name used for most resources |
| service_access_prefix_list | ID of the prefix list that can access services running on K8s |
| vpc_cidr | VPC CIDR |
