Application Logs using Grafana Loki on GKE

Introduction

This is another requirement I received from the same customer after finishing the Cortex setup that I wrote about in the previous article here. The customer wants a single place to store application logs (container logs) from many Google Compute Engine (GCE) instances. Beyond just storing the logs, he also wants to query and aggregate them, using the GCE instance name as the key index for the queries. The components involved are:

  • Google Kubernetes Engine (GKE) — Kubernetes cluster on Google Cloud Platform that we will deploy the workloads.
  • Google Cloud Storage (GCS) — The actual storage that we will configure Loki to use as its backing store.
  • Loki — Data source to store logs.
  • Grafana — The web user interface for log querying.
  • Promtail — The agent that ships log contents to a private Grafana Loki. In this case, the logs come from containers running on the GCE instances.

Source Code

The source code related to this article is kept publicly in the GitHub repository here — https://github.com/its-knowledge-sharing/setup-loki-gke. I recommend cloning the code and reading it alongside this article.

Creating the GKE cluster

I’m assuming that we are familiar with the Google Cloud Platform (GCP) and already have a GCP account.
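The commands throughout this article reference environment variables (${PROJECT}, ${BUCKET_NAME}, and so on) that the repo’s scripts load from a .env file. The exact contents live in the repo; a minimal sketch with placeholder values might look like this:

```shell
# Minimal .env sketch -- every value below is a placeholder; use your own.
cat << 'EOF' > .env
PROJECT=my-gcp-project       # GCP project ID
BUCKET_NAME=my-loki-logs     # GCS bucket that Loki will write to
SA_NAME=loki-sa              # IAM service account name for Loki
NS=loki                      # Kubernetes namespace for the Loki workloads
SECRET=gcp-sa                # Kubernetes secret holding the SA key
KEY_FILE=loki-sa-key.json    # local file for the downloaded SA key
EOF

# Load the variables into the current shell
source .env
echo "Deploying to project: ${PROJECT}"
```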

gcloud container clusters create gke-loki --zone us-central1-a --project ${PROJECT}
gcloud container clusters get-credentials gke-loki --zone us-central1-a --project ${PROJECT}

Creating the GCS — Cloud storage for Loki

GCS is the object storage service of GCP, comparable to S3 on Amazon Web Services (AWS). We use GCS to store logs from Promtail instead of a traditional file system because maintenance is easier (no disk-space expansion to manage) and, for the same capacity, GCS is also cheaper.

#!/bin/bash
source .env
gsutil mb -b on -l us-central1 -p ${PROJECT} gs://${BUCKET_NAME}/
gsutil ls gs://${BUCKET_NAME}/

Creating service account for Loki to access GCS

To allow Loki to read and write data in the GCS bucket, we need to create an IAM service account and configure Loki to use it at runtime.

gcloud iam service-accounts create ${SA_NAME} --project ${PROJECT} --display-name="Service account for Loki"
gcloud projects add-iam-policy-binding ${PROJECT} \
--member="serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
--project ${PROJECT} \
--role="roles/storage.objectAdmin"

Deploying Loki to GKE

The easiest way to deploy an application (Loki in this case) into Kubernetes is with Helm. Loki provides its own Helm chart here: https://grafana.github.io/helm-charts/. All we need to do is create a Helm values file with our customizations and run a few Helm commands.
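I haven’t reproduced the full values files here (they are in the repo), but the heart of loki/loki.yaml is pointing Loki’s storage at the GCS bucket, and loki/loki-volume.yaml presumably mounts the gcp-sa secret so that GOOGLE_APPLICATION_CREDENTIALS points at the key file. A rough sketch of the storage part, using Loki’s standard storage_config schema (the exact wiring of customParams.gcsBucket in the repo may differ):

```yaml
# Illustrative only -- key names follow Loki's storage_config schema,
# not necessarily the exact layout of the repo's values files.
storage_config:
  gcs:
    bucket_name: my-loki-logs   # placeholder; the repo injects customParams.gcsBucket here
# The pods must also see the mounted service-account key, e.g.:
#   GOOGLE_APPLICATION_CREDENTIALS=/etc/secrets/gcp-sa-file
```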

# Create service account secret
gcloud iam service-accounts keys create ${KEY_FILE} --iam-account=${SA}
kubectl delete secret ${SECRET} -n ${NS}
kubectl create secret generic ${SECRET} --from-file=gcp-sa-file=${KEY_FILE} -n ${NS}
$ kubectl get secret -n loki gcp-sa
NAME     TYPE    DATA   AGE
gcp-sa   Opaque  1      16m
helm repo add loki-helm https://grafana.github.io/helm-charts/
helm template loki loki-helm/loki-distributed \
-f loki/loki.yaml \
-f loki/loki-volume.yaml \
--set customParams.gcsBucket=${BUCKET_NAME} \
--version 0.45.1 \
--namespace ${NS} > tmp-loki.yaml
kubectl apply -n ${NS} -f tmp-loki.yaml
kubectl get pods -n loki
kubectl get svc -n loki
kubectl apply -n ${NS} -f loki/loki-ing.yaml
kubectl get ing -n loki

Deploying Grafana to GKE

Loki is just the data source that stores the logs; the easiest way to view them is with Grafana. As with Loki, we deploy Grafana using Helm.
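The grafana/grafana.yaml values file in the repo presumably pre-provisions Loki as a data source; a hedged sketch using the Grafana chart’s standard datasource provisioning format (the URL assumes the loki-distributed chart’s default gateway service in the loki namespace):

```yaml
# Sketch only -- the actual values file in the repo may differ.
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki-loki-distributed-gateway.loki.svc.cluster.local
        isDefault: true
```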

helm repo add grafana-helm https://grafana.github.io/helm-charts
helm template grafana grafana-helm/grafana \
-f grafana/grafana.yaml \
--skip-tests \
--namespace ${NS} > tmp-grafana-loki.yaml
kubectl apply -n ${NS} -f tmp-grafana-loki.yaml
kubectl apply -n ${NS} -f grafana/grafana-ing.yaml
kubectl get ing -n grafana-loki
kubectl get secret grafana-loki \
-n grafana-loki \
-o jsonpath="{.data.admin-password}" | base64 --decode

Create GCE instance

Now it is time to create the GCE instance and deploy Node Exporter and Promtail on it. We deploy Node Exporter purely so that its container logs give Promtail something to ship to Loki.

gcloud compute instances create promtail-001 \
--image=projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20220308 \
--image-project=ubuntu-os-cloud \
--machine-type=projects/its-artifact-commons/zones/us-central1-a/machineTypes/e2-medium \
--zone=us-central1-a
#!/bin/bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker
# Install docker-compose following the guide: https://docs.docker.com/compose/install/
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
#!/bin/bash
DATA_DIR=$(pwd)
LOKI_DOMAIN=<change-this>
INSTANCE=$(hostname)
ENV_FILE=.env
cat << EOF > ${ENV_FILE}
DATA_DIR=${DATA_DIR}
INSTANCE=${INSTANCE}
CONTAINERS_LOG_DIR=/var/lib/docker/containers
LOKI_DOMAIN=${LOKI_DOMAIN}
REGION=us-central1
ZONE=us-central1-a
GROUP=demo
EOF
sudo docker-compose up -d --remove-orphans
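The docker-compose.yml and Promtail configuration are in the repo; the part that matters for our requirement is Promtail’s scrape config, which attaches the GCE instance name (and the other values from .env) as labels on every log stream. A hedged sketch — the label names are my assumptions, and the ${VAR} expansion assumes Promtail is started with -config.expand-env=true:

```yaml
# Illustrative Promtail config -- not the repo's exact file.
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://${LOKI_DOMAIN}/loki/api/v1/push
scrape_configs:
  - job_name: containers
    static_configs:
      - targets: [localhost]
        labels:
          instance: ${INSTANCE}
          region: ${REGION}
          zone: ${ZONE}
          group: ${GROUP}
          __path__: /var/lib/docker/containers/*/*-json.log
```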

Verifying the result

We should now see the logs in Grafana’s “Explore” menu. Try a simple LogQL query to confirm the logs are arriving.
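The exact query was embedded in the original post; assuming Promtail attaches the GCE instance name as an instance label, a query keyed on the instance name (our requirement) would look like:

```logql
{instance="promtail-001"}
```

To narrow it down, a line filter can be appended, e.g. {instance="promtail-001"} |= "error".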

