GoCD on Kubernetes using Terraform: Setting up GoCD
Have you ever been on a team where you had to set up a GoCD server and agents (without Kubernetes)? It can be tiring: the challenges with agents include continuously monitoring their load, agents running idle, clean segregation of environments, and so on. Overall, the effort and the cost are enormous.
Can you imagine setting up GoCD as a one-time effort, and in under an hour at that? Thanks to Kubernetes and the supporting tools, the job is much easier. Let us see how to achieve it.
You will need a basic understanding of Terraform, Kubernetes, Helm, and Google Cloud to follow along.
We will use Terraform to deploy on Google Kubernetes Engine (GKE). The steps should be similar for other cloud providers.
We have split the complete setup into three blog posts. In this one, we shall:
- Create a Kubernetes cluster
- Set up GoCD in GKE
- Expose GoCD to the public
In the next blog post we will run through configuring SSL using Let's Encrypt, and the final post will cover the remaining parts of the setup.
The complete Terraform setup for GoCD can be viewed on GitHub. We shall walk through each part of the script in this series.
Note: Versions of tools/libraries are subject to change. We have used the latest versions as of the date of writing.
Create Kubernetes Cluster
- Create a new project in Google Cloud if you don't have one.
console.cloud.google.com -> New Project
- Create a service account
console.cloud.google.com/iam-admin/servicea.. -> Create Service Account -> Select Role as Editor for now (later you can limit access) -> Create Key -> Download the JSON key file and keep it in your current working directory.
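The same can also be done from the CLI with gcloud; a minimal sketch, where the account name gocd-terraform and project ID my-gocd-project are hypothetical examples:
# Create the service account (names here are hypothetical)
gcloud iam service-accounts create gocd-terraform --project my-gocd-project
# Grant the Editor role for now (limit this later)
gcloud projects add-iam-policy-binding my-gocd-project \
  --member "serviceAccount:gocd-terraform@my-gocd-project.iam.gserviceaccount.com" \
  --role "roles/editor"
# Download a JSON key into the current working directory
gcloud iam service-accounts keys create ./gocd-terraform.json \
  --iam-account gocd-terraform@my-gocd-project.iam.gserviceaccount.com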
- Locally, create a file with the .tf extension. For this example we name the file gocd.tf. This is the only Terraform file we will be using.
The first part of the Terraform script contains the provider configuration.
provider "google" {
credentials = "${file("./<service-account-cred>.json")}"
project = "<project-id>"
region = "us-east4-a"
}
Replace <service-account-cred> with the name of the key file downloaded earlier, and <project-id> with the project ID. Note that the provider takes a region (us-east4); us-east4-a is a zone, which we will use as the cluster location below.
Note: Project name and Project ID can be different. Here you need to give the Project ID.
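If you are not sure of the project ID, gcloud can list it:
gcloud projects list --format="table(projectId,name)"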
Next, we need to provide a VPC network. If we don't, it defaults to the default VPC network; it is recommended to create our own VPC network to avoid conflicts.
resource "google_compute_network" "vpc_network" {
name = "gocd-vpc-network"
}
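This creates an auto-mode network, which gets one subnetwork per region automatically. If you prefer full control over the address space, a custom-mode network with an explicit subnetwork is an option; a sketch, where the subnetwork name and CIDR range are hypothetical choices:
resource "google_compute_network" "vpc_network" {
  name                    = "gocd-vpc-network"
  auto_create_subnetworks = false
}

# Hypothetical subnetwork; pick a name and range that suit you
resource "google_compute_subnetwork" "gocd_subnet" {
  name          = "gocd-subnet"
  ip_cidr_range = "10.10.0.0/16"
  region        = "us-east4"
  network       = google_compute_network.vpc_network.name
}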
Now we need to create the Kubernetes cluster. It is good practice to avoid using the default node pool, so we remove it and add a custom node pool whose nodes can auto-scale between min_node_count and max_node_count. Note that passing a zone (us-east4-a) as the location creates a zonal cluster; passing a region would create a regional one.
resource "google_container_cluster" "ci" {
name = "gocd-cluster"
network = google_compute_network.vpc_network.name
location = "us-east4-a"
initial_node_count = 1
remove_default_node_pool = true
depends_on = [
"google_compute_network.vpc_network"]
}
resource "google_container_node_pool" "ci_nodes" {
name = "gocd-node-pool"
location = "us-east4-a"
cluster = google_container_cluster.ci.name
node_config {
machine_type = "n1-standard-2"
}
autoscaling {
min_node_count = 3
max_node_count = 5
}
depends_on = [
"google_container_cluster.ci"]
}
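Optionally, GKE can also keep these nodes healthy for you; the google provider supports a management block inside the node pool resource, sketched here:
  # Optional: auto-repair and auto-upgrade the nodes in this pool
  management {
    auto_repair  = true
    auto_upgrade = true
  }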
Now it is time to deploy our Terraform script. When running it for the first time, you need Terraform installed on your local machine. Initialize the working directory:
terraform init
View the plan
terraform plan
Apply
terraform apply
It will take some time to create the cluster. You can view the created cluster at console.cloud.google.com/kubernetes/list.
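You can also check from the CLI:
gcloud container clusters list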
Setup GoCD
Our Kubernetes cluster is ready. Now we will use the GoCD Helm chart to set up GoCD. For Helm and Kubernetes to interact with the cluster, we need to configure their providers.
Specify the latest version of the Terraform Helm provider.
data "google_client_config" "current" {}
provider "helm" {
version = "v1.1.1"
kubernetes {
load_config_file = false
host = "${google_container_cluster.ci.endpoint}"
token = "${data.google_client_config.current.access_token}"
client_certificate = "${base64decode(google_container_cluster.ci.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.ci.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.ci.master_auth.0.cluster_ca_certificate)}"
}
}
provider "kubernetes" {
load_config_file = false
host = "${google_container_cluster.ci.endpoint}"
token = "${data.google_client_config.current.access_token}"
client_certificate = "${base64decode(google_container_cluster.ci.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.ci.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.ci.master_auth.0.cluster_ca_certificate)}"
}
Create a gocd namespace to install the Helm chart into.
resource "kubernetes_namespace" "gocd_namespace" {
metadata {
name = "gocd"
}
depends_on = [google_container_node_pool.ci_nodes]
}
Helm 3 no longer needs Tiller, so it is much simpler to set up.
resource "helm_release" "gocd" {
name = "gocd"
chart = "stable/gocd"
namespace = kubernetes_namespace.gocd_namespace.metadata.0.name
depends_on = [kubernetes_namespace.gocd_namespace]
}
Download the plugins for the helm and kubernetes providers:
terraform init
Apply the changes in the Terraform file:
terraform apply
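If you have the Helm 3 CLI installed locally, you can also list the release once you have authenticated with the cluster (covered in the next section):
helm ls -n gocd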
Verification
You should have gcloud and kubectl installed locally to verify.
- Use gcloud to authenticate with the cluster: console.cloud.google.com/kubernetes/list -> Connect will give you the command to connect to the cluster. Execute it.
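The generated command should look something like this:
gcloud container clusters get-credentials gocd-cluster --zone us-east4-a --project <project-id>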
- You can see the state of GoCD server by running:
kubectl get pods -n gocd
- GoCD is still not exposed to the public. You can port-forward to have a glance at the UI:
kubectl port-forward svc/gocd-server 8153:8153 -n gocd
After running the above command, you can view the GoCD UI at localhost:8153.
Expose GoCD to public
Create a static public IP using the command below if you don't have one.
gcloud compute addresses create gocd-public-ip --region us-east4
The below command will list the public IP that has been created.
gcloud compute addresses list --filter=region:us-east4
Configure the created public IP in the NGINX ingress controller.
resource "helm_release" "nginx_ingress" {
name = "nginx-ingress"
chart = "stable/nginx-ingress"
set {
name = "controller.service.loadBalancerIP"
value = "<gocd-public-ip>"
}
depends_on = [google_container_node_pool.ci_nodes]
}
Replace <gocd-public-ip> with the public IP created above.
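Alternatively, instead of hardcoding the address, Terraform can look it up with the google_compute_address data source; a sketch, assuming the address was reserved under the name gocd-public-ip as above:
data "google_compute_address" "gocd_public_ip" {
  name   = "gocd-public-ip"
  region = "us-east4"
}

# The set block can then reference it:
#   value = data.google_compute_address.gocd_public_ip.address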
Configure the gocd Helm release values to enable ingress via NGINX.
resource "helm_release" "gocd" {
name = "gocd"
chart = "stable/gocd"
namespace = kubernetes_namespace.gocd_namespace.metadata.0.name
depends_on = [kubernetes_namespace.gocd_namespace]
values = [
<<EOF
server:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
EOF
]
}
Apply the changes:
terraform apply
Now you should be able to access GoCD using the public IP.
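A quick check from the terminal (replace the placeholder with your IP):
curl -I http://<gocd-public-ip>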
Continue reading: Configuring SSL using Let's Encrypt.
Credits to Selvakumar Natesan for directions.