Deploy a ‘Django’ application with a CI/CD pipeline on GCP services (Code Repo + Cloud Build + Artifact Registry + GKE)

Ashutosh Dash
6 min read · Dec 15, 2021


GCP CICD pipeline

CI/CD means Continuous Integration and Continuous Deployment; we all know about it. There are multiple ways to create a pipeline, and I have experience building them with GCP services as well as with open-source tools (like Jenkins, Git, etc.). But this particular story is about my experience deploying a Python/Django-based microservice application on GKE with a CI/CD pipeline built from GCP services (Code Repo, Cloud Build, Container Registry, GKE).

I am not covering the details and benefits of the GCP services here; you can look those up. Let’s jump into the process.

GCP Code Repository:

my GCP Code repo screenshot

Creating a code repository and pushing your application code to a repository in GCP Cloud Source Repositories is almost the same as with any other repo host (GitHub, Bitbucket, etc.). It’s just that we get a GUI view of the repo history here, and all the commands to clone the repo and so on are provided by the GUI itself. You can explore it in more detail. We can say it is a competitor to the other git repo hosts.
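The same flow can be sketched from the command line with the gcloud CLI; the repository name below is just a placeholder:

```shell
# Create a new Cloud Source Repositories repo (name is illustrative)
gcloud source repos create my-django-repo

# Clone it locally; gcloud sets up the git credential helper for you
gcloud source repos clone my-django-repo
cd my-django-repo

# From here it behaves like any other git remote
git add .
git commit -m "Initial Django app"
git push origin master
```

These commands assume you are authenticated (`gcloud auth login`) and have a default project set.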

Django Application:

Basic Django project structure.

A Django application project must have the structure below, with an entry-point file (required to run the application / entry point to the app) and env-requirement.txt (where all the Python dependency libraries are listed). To deploy the application I use the ‘Gunicorn’ server, which is a Python Web Server Gateway Interface (WSGI) HTTP server, and I am not using a server like NGINX for reverse proxying; I will explain why later. That’s all of my application’s dependencies.
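To make the Gunicorn/WSGI relationship concrete, here is a minimal sketch of the interface Gunicorn serves. This is not the Django-generated wsgi.py, just an illustrative stand-in with the same callable shape; the file and function names are assumptions:

```python
# wsgi.py (illustrative): the WSGI callable Gunicorn looks for.
# Django's generated wsgi.py exposes an `application` object like this.
def application(environ, start_response):
    # environ: dict of CGI-style request variables supplied by the server
    # start_response: callback taking (status, response_headers)
    body = b"Hello from the WSGI entry point"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response("200 OK", headers)
    # A WSGI app returns an iterable of byte strings
    return [body]
```

Gunicorn would then serve this with something like `gunicorn wsgi:application --bind 0.0.0.0:8000`.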

Creating a Dockerfile:

A Dockerfile is a text file that contains instructions on how the Docker image will be built. A Dockerfile contains the directives below.

My typical Dockerfile for a Django app looks like this.
  • FROM: directive sets the base image from which the Docker container will be built.
  • WORKDIR: directive sets the working directory in the image created.
  • RUN: directive executes commands in the container.
  • COPY: directive copies files from the file system into the container.
  • CMD: directive sets the executable commands within the container.
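Putting those directives together, a typical Dockerfile for a Gunicorn-served Django app might look like the sketch below; the base image version, the module path `project.wsgi`, and the port are assumptions about your project layout:

```Dockerfile
# Base image the container is built from (version is illustrative)
FROM python:3.8.3-alpine

# Working directory inside the image
WORKDIR /app

# Install dependencies first so this layer caches between builds
COPY env-requirement.txt .
RUN pip install --no-cache-dir -r env-requirement.txt

# Copy the rest of the application code
COPY . .

# Serve the app with Gunicorn on port 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi:application"]
```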

Building the Docker image: $ docker build --tag django_todo:latest .

There are multiple predefined images already available on the internet for Docker containers, which you can use here with the following command. But I mostly use an image that I created for a previous application, or one created by my teammates.

docker run --name ashutosh-c1 Django_image_name

--tag sets the tag for the image. For example, if we build from the base image python:3.8.3-alpine, alpine is part of that image’s tag. In our Docker image, latest is the tag we set.

Command to see the created Docker containers: “docker ps -a”, where ps means process status and -a means all. (To list the built images themselves, use “docker images”.)

Or, “docker ps -aq”, for getting just the container IDs.

Create GKE cluster:

Kubernetes pods can communicate with other pods regardless of which host they run on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports.

  1. To create a GKE cluster for the first time, click Enable, then create a cluster as per the screenshot below. Very simple.
Enable the Kubernetes Engine API on first use.
Select a standard configuration.
Once the cluster is up and running, it will show a green tick mark.
Workloads -> select existing Docker image

2. Go to Workloads and select the existing Docker image that we created before.

just select the image path and name
add an application name for which the cluster is created.
  3. In the YAML configuration you will see that two Kubernetes API resources will be deployed: one is a Deployment, and the other is a Horizontal Pod Autoscaler.
Go to Details to check the status of the cluster.

Now, to map the port where the application has been running, select 8000 for the Django application and click Expose. Port 8000 is now exposed as the entry point to your Django application in the cluster.
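The same exposure can be done from the command line instead of the console; the deployment name django-app is a placeholder for whatever name you gave the workload:

```shell
# Expose the Deployment behind a LoadBalancer Service on port 8000
kubectl expose deployment django-app \
    --type=LoadBalancer --port=8000 --target-port=8000

# Watch for the external IP to be assigned
kubectl get services
```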

adding a port number.
Cluster configuration once everything is done.

Once the cluster is up and running and all the configuration is done, it will look like the above, and now you can access the application at the load balancer’s IP address plus the port number, e.g. http://&lt;load-balancer-ip&gt;:8000.

Pushing the docker image to the registry:

GCP provides a service called Artifact Registry, which can be used to store Docker images, Node packages, Python packages, etc. Initially it was just Container Registry, for storing container images only.
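As a sketch of the push flow, the commands below create an Artifact Registry repo and push the image we built earlier; the repo name, region, and PROJECT_ID are placeholders:

```shell
# Create a Docker-format repository in Artifact Registry
gcloud artifacts repositories create my-docker-repo \
    --repository-format=docker --location=us-central1

# Let the local docker client authenticate to that registry host
gcloud auth configure-docker us-central1-docker.pkg.dev

# Re-tag the local image with the registry path, then push it
docker tag django_todo:latest \
    us-central1-docker.pkg.dev/PROJECT_ID/my-docker-repo/django_todo:latest
docker push \
    us-central1-docker.pkg.dev/PROJECT_ID/my-docker-repo/django_todo:latest
```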

Create a deploy pipeline for GKE with Cloud Build:

With Cloud Build, we can create CI/CD processes very easily. We just need to create a trigger and a YAML configuration file for Cloud Build in our code, which gets picked up when we commit to the GCP repo. Follow the processes below.

Create a YAML file and push it to the code repo with the configuration below. Now, this is a simple demo config file; you can add multiple “actions” here. Read the comments in the screenshots carefully.

basic config for building, pushing, and updating the container using kubectl set.
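The actual config appears only in the screenshot, so here is a hedged reconstruction of what such a cloudbuild.yaml typically looks like; the image name, deployment name, zone, and cluster name are placeholders:

```yaml
steps:
# Build the image, tagged with the short commit SHA
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/django-app:$SHORT_SHA', '.']

# Push the image to the registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/django-app:$SHORT_SHA']

# Update the running Deployment's container image with kubectl set
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/django-app',
         'django-app=gcr.io/$PROJECT_ID/django-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```

$PROJECT_ID and $SHORT_SHA are built-in Cloud Build substitutions, so each commit produces and deploys a uniquely tagged image.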

Now, to further automate this process (that is, to auto-deploy the image on every code commit), we have to add some more configuration.

CICD config.

Now, all we have to do is create a trigger in Cloud Build. Select Create Trigger, select the code source (which is the Cloud repo here), then select the CICD-services repo and continue.

steps for creating a trigger.

Next, you have to choose the path to the build .yml file from the code repo.

select the path to the configuration build file.
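The same trigger can also be created from the CLI instead of the console; the repo name and branch pattern here are placeholders:

```shell
# Trigger a build from cloudbuild.yaml on every push to master
gcloud beta builds triggers create cloud-source-repositories \
    --repo=my-django-repo \
    --branch-pattern='^master$' \
    --build-config=cloudbuild.yaml
```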

Now everything is ready. You just have to test whether a new code commit triggers the CI/CD process. If it does not, go into the build history, click on the CI/CD build, and look for the step where it is failing.

Success on the Cloud CI/CD pipeline.




This blog is more like keeping technical notes from my learnings for the future.