Updated in 2021
At CALLR, we have been using GitLab for quite a while. We are also using more and more Docker containers. In this post, I'll show you how we build Docker images with a simple .gitlab-ci.yml file.
Let’s not waste any time.
Here is a .gitlab-ci.yml file that you can drop directly, without any modification, into a project with a working Dockerfile.
It will build your Docker image and push it to the GitLab Container Registry.
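A sketch of such a file, assembled from the tips below; the job names and the dind service setup are assumptions -- adapt them to your runner:

```yaml
image: docker:20

services:
  - docker:20-dind

stages:
  - build
  - push

before_script:
  # log in to the GitLab Container Registry using the predefined CI variables
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  script:
    # fetch the latest image for layer caching (ignore failure on the very first build)
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

push latest:
  stage: push
  variables:
    GIT_STRATEGY: none  # we only move docker tags around, no need for the source
  only:
    - master
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest

push tag:
  stage: push
  variables:
    GIT_STRATEGY: none
  only:
    - tags  # keeps git tags and docker tags in sync
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
```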
Do not use "latest" or "stable" images in CI. Why? Because you want reproducibility. You want your pipeline to work in 6 months. Or 6 years. Latest images will break things. Always target a version. Of course, the same applies to the base image in your Dockerfile. Hence image: docker:20 here.
To speed up your Docker builds, pull the "latest" image ($CI_REGISTRY_IMAGE:latest) before building, and then build with --cache-from $CI_REGISTRY_IMAGE:latest. This makes sure Docker has the latest image and can leverage layer caching. Chances are you did not change all layers, so the build process will be very fast.
When building, use --pull to always attempt to pull a newer version of the image. Because you are targeting a specific version, this makes sure you have the latest (security) updates of that version.
In the push jobs, tell GitLab not to clone the source code with GIT_STRATEGY: none. Since we are just playing with docker pull/push, we do not need the source code. This will speed things up as well.
Finally, keep your Git tags in sync with your Docker tags. If you have not automated this, you have probably found yourself in the situation of wondering “which git tag is this image again?”. No more. Use GitLab “tags” pipelines.
In this tutorial, we'll look at how to configure GitLab CI to continuously deploy a Django and Docker application to Amazon Web Services (AWS) EC2.
Dependencies:
By the end of this tutorial, you will be able to:
Along with Django and Docker, the demo project that we'll be using includes Postgres, Nginx, and Gunicorn.
Curious about how this project was developed? Check out the Dockerizing Django with Postgres, Gunicorn, and Nginx blog post.
Start by cloning down the base project:
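For example, assuming a fork of the repo linked at the end of this post (swap in your own URL):

```sh
git clone https://github.com/<your-account>/django-gitlab-ec2.git
cd django-gitlab-ec2
```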
To test locally, build the images and spin up the containers:
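From the project root:

```sh
# build the images and run the containers in the background
docker-compose up -d --build
```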
Navigate to http://localhost:8000/. You should see:
Let's start by setting up an EC2 instance to deploy our application to, along with configuring RDS.
First, you'll need to sign up for an AWS account (if you don't already have one).
Setting up your first AWS account?
It's a good idea to create a non-root IAM user, with 'Administrator Access' and 'Billing' policies, and a Billing Alert via CloudWatch to alert you if your AWS usage costs exceed a certain amount. For more info, review Lock Away Your AWS Account Root User Access Keys and Creating a Billing Alarm, respectively.
Log in to the AWS Console, navigate to the EC2 Console and click 'Instances' on the left sidebar. Then, click the 'Launch Instance' button:
Next, stick with the basic Amazon Linux AMI and the t2.micro Instance Type:
Click 'Next: Configure Instance Details'. We'll stick with the default VPC to keep things simple for this tutorial, but feel free to update this.
Click the 'Next' button a few more times until you're on the 'Configure Security Group' step. Create a new Security Group (akin to a firewall) called django-security-group, making sure at least HTTP 80 and SSH 22 are open.
Click 'Review and Launch'.
On the next page click 'Launch'. On the modal, create a new Key Pair so you can connect to the instance over SSH. Save this .pem file somewhere safe.
On a Mac or a Linux box? It's recommended to save the .pem file to the '/Users/$USER/.ssh' directory. Be sure to set the proper permissions as well -- i.e., chmod 400 ~/.ssh/django.pem.
Click 'Launch Instances' to create the new instance. On the 'Launch Status' page, click 'View Instances'. Then, on the main instances page, grab the public IP of your newly created instance:
With the instance up and running, we can now install Docker on it.
SSH into the instance using your Key Pair like so:
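For example, assuming the key saved earlier (the IP is a placeholder):

```sh
ssh -i ~/.ssh/django.pem ec2-user@<instance-public-ip>
```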
Start by installing and starting the latest version of Docker and version 1.29.2 of Docker Compose:
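On Amazon Linux, something like the following; the download URL pins Docker Compose 1.29.2:

```sh
# install and start Docker
sudo yum update -y
sudo yum install -y docker
sudo service docker start

# install Docker Compose 1.29.2
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```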
Add the ec2-user to the docker group so you can execute Docker commands without having to use sudo:
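For example (log out and back in afterwards for the group change to take effect):

```sh
sudo usermod -a -G docker ec2-user
```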
Next, generate a new SSH key:
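For example:

```sh
ssh-keygen -t rsa
```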
Save the key to /home/ec2-user/.ssh/id_rsa and don't set a password. This will generate a public and private key -- id_rsa and id_rsa.pub, respectively. To set up passwordless SSH login, copy the public key over to the authorized_keys file and set the proper permissions:
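A sketch of those two steps:

```sh
# allow the new key to log in to this instance
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```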
Copy the contents of the private key:
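For example:

```sh
cat ~/.ssh/id_rsa
```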
Exit the remote SSH session. Set the key as an environment variable on your local machine:
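For example (the placeholder stands in for the key you just copied):

```sh
export PRIVATE_KEY='<contents of /home/ec2-user/.ssh/id_rsa>'
```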
Add the key to the ssh-agent:
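In bash (start an agent first with eval "$(ssh-agent -s)" if one isn't running):

```sh
ssh-add - <<< "${PRIVATE_KEY}"
```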
To test, run:
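Something like the following; a successful run prints ec2-user:

```sh
ssh -o StrictHostKeyChecking=no ec2-user@<instance-public-ip> whoami
```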
Then, create a new directory for the app:
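Again over SSH; the /home/ec2-user/app path is what the deploy script later in this post assumes:

```sh
ssh ec2-user@<instance-public-ip> mkdir /home/ec2-user/app
```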
Moving along, let's spin up a production Postgres database via AWS Relational Database Service (RDS).
Navigate to Amazon RDS, click 'Databases' on the sidebar, and then click the 'Create database' button.
For the 'Engine options', select the 'PostgreSQL' engine and the PostgreSQL 12.7-R1 version.
Use the 'Free Tier' template.
For more on the free tier, review the AWS Free Tier guide.
Under 'Settings', set the 'DB instance identifier' to djangodb, set the 'Master username' to webapp, and check the option to auto generate a password.
Scroll down to the 'Connectivity' section. Stick with the default 'VPC' and select the django-security-group Security Group. Turn off 'Public accessibility'.
Under 'Additional configuration', change the 'Initial database name' to django_prod and then create the new database.
Click the 'View credential details' button to view the generated password. Take note of it.
It will take a few minutes for the RDS instance to spin up. Once it's available, take note of the endpoint. For example:
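Based on the SQL_HOST value listed later in this post:

```text
djangodb.c7kxiqfnzo9e.us-west-1.rds.amazonaws.com
```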
The full URL will look something like this:
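A sketch, assembled from the webapp user, the generated password, and the django_prod database from above:

```text
postgres://webapp:3ZQtN4vxkZp2kAa0vinV@djangodb.c7kxiqfnzo9e.us-west-1.rds.amazonaws.com:5432/django_prod
```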
Keep in mind that you cannot access the database from outside the VPC. So, if you want to connect to it directly, you'll need to SSH into the EC2 instance and connect to the database from there. We'll look at how to do this shortly.
Sign up for a GitLab account (if necessary), and then create a new project (again, if necessary).
Next, add a GitLab CI/CD config file called .gitlab-ci.yml to the project root:
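A sketch of what the build stage might look like; dind and the docker-compose.ci.yml compose file name are assumptions to adapt to your project:

```yaml
image:
  name: docker/compose:1.29.2
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""  # disable TLS for the dind connection

build:
  stage: build
  before_script:
    # image names derived from the project's registry path and branch
    - export IMAGE=$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG
    - export WEB_IMAGE=$IMAGE:web
    - export NGINX_IMAGE=$IMAGE:nginx
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    # pull the previous images for layer caching (ignore failure on the first run)
    - docker pull $WEB_IMAGE || true
    - docker pull $NGINX_IMAGE || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $WEB_IMAGE
    - docker push $NGINX_IMAGE
```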
Here, we defined a single build stage where we set the IMAGE, WEB_IMAGE, and NGINX_IMAGE environment variables and then build and push the Docker images.

Add the setup_env.sh file to the project root:
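A minimal sketch; it simply echoes each CI/CD variable into a .env file that the containers can read (the DEBUG, SQL_ENGINE, and DATABASE lines are assumptions):

```sh
#!/bin/sh

echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env
echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env
```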
This file will create the required .env file, based on the environment variables found in your GitLab project's CI/CD settings (Settings > CI / CD > Variables). Add the variables based on the RDS connection information from above.
For example:
SECRET_KEY: 9zYGEFk2mn3mWB8Bmg9SAhPy6F4s7cCuT8qaYGVEnu7huGRKW9
SQL_DATABASE: django_prod
SQL_HOST: djangodb.c7kxiqfnzo9e.us-west-1.rds.amazonaws.com
SQL_PASSWORD: 3ZQtN4vxkZp2kAa0vinV
SQL_PORT: 5432
SQL_USER: webapp
Once done, commit and push your code up to GitLab to trigger a new build. Make sure it passes. You should see the images in the GitLab Container Registry:
Next, before adding deployment to the CI process, we need to update the inbound ports for the 'Security Group' so that port 5432 can be accessed from the EC2 instance. Why is this necessary? Turn to app/entrypoint.prod.sh:
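The netcat wait loop described next typically looks like this; the DATABASE flag is an assumption set via the .env file:

```sh
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    # probe the database port until it accepts TCP connections
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

exec "$@"
```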
Here, we're waiting for the Postgres instance to be healthy, by testing the connection with netcat, before starting Gunicorn. If port 5432 isn't open, the loop will continue forever.
So, navigate to the EC2 Console again and click 'Security Groups' on the left sidebar. Select the django-security-group Security Group and click 'Edit inbound rules':

Click 'Add rule'. Under type, select 'PostgreSQL' and under source select the django-security-group Security Group:
Now, any AWS services associated with that group can access the RDS instance through port 5432. Click 'Save rules'.
Next, add a deploy stage to .gitlab-ci.yml and create a global before_script that's used for both stages:
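A sketch of the additions; the alpine image version, the docker-compose.prod.yml file name, and the scp steps are assumptions to adapt:

```yaml
stages:
  - build
  - deploy

# shared by both stages
before_script:
  - export IMAGE=$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG
  - export WEB_IMAGE=$IMAGE:web
  - export NGINX_IMAGE=$IMAGE:nginx

deploy:
  stage: deploy
  image: alpine:3.14
  script:
    - apk add --no-cache openssh-client bash
    - chmod +x ./setup_env.sh ./deploy.sh
    - bash ./setup_env.sh
    # install the private key so the runner can reach the EC2 instance
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    # copy the deployment files over, then run the deploy script
    - scp -o StrictHostKeyChecking=no ./.env ec2-user@$EC2_PUBLIC_IP_ADDRESS:/home/ec2-user/app/.env
    - scp -o StrictHostKeyChecking=no ./docker-compose.prod.yml ec2-user@$EC2_PUBLIC_IP_ADDRESS:/home/ec2-user/app/docker-compose.prod.yml
    - bash ./deploy.sh
```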
So, in the deploy stage we add the private SSH key, copy the environment and compose files over to the server, and run the deploy script.
Add deploy.sh to the project root:
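A sketch; the heredoc body runs on the server and reads the image names and registry credentials from the .env file copied over by the deploy stage:

```sh
#!/bin/sh

ssh -o StrictHostKeyChecking=no ec2-user@$EC2_PUBLIC_IP_ADDRESS << 'ENDSSH'
  cd /home/ec2-user/app
  export $(cat .env | xargs)
  docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
  docker pull $WEB_IMAGE
  docker pull $NGINX_IMAGE
  docker-compose -f docker-compose.prod.yml up -d
ENDSSH
```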
So, after SSHing into the server, we navigate to the app directory, log in to the registry, pull the new images, and spin up the containers.
Add the EC2_PUBLIC_IP_ADDRESS and PRIVATE_KEY environment variables to GitLab.
Update the setup_env.sh file:
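It might append something like the following, so that the remote shell in deploy.sh can read the image names and registry credentials (these exact keys are assumptions):

```sh
echo WEB_IMAGE=$WEB_IMAGE >> .env
echo NGINX_IMAGE=$NGINX_IMAGE >> .env
echo CI_REGISTRY=$CI_REGISTRY >> .env
echo CI_REGISTRY_USER=$CI_REGISTRY_USER >> .env
echo CI_JOB_TOKEN=$CI_JOB_TOKEN >> .env
```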
Next, add the server's IP to the ALLOWED_HOSTS list in the Django settings.
Commit and push your code to trigger a new build. Once the build passes, navigate to the IP of your instance. You should see:
Need to access the database?
SSH into the box:
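For example:

```sh
ssh -i ~/.ssh/django.pem ec2-user@<instance-public-ip>
```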
Install Postgres:
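On Amazon Linux 2, something like the following (the extras topic name may differ on your AMI):

```sh
sudo amazon-linux-extras install -y postgresql12
```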
Then, run psql, like so:
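Using the RDS endpoint and credentials from earlier:

```sh
psql -h djangodb.c7kxiqfnzo9e.us-west-1.rds.amazonaws.com -U webapp -d django_prod
```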
Enter the password.
Exit the SSH session once done.
Finally, update the deploy stage so that it only runs when changes are made to the master branch:
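A sketch of the change -- just an only clause on the existing job:

```yaml
deploy:
  stage: deploy
  # ...same image and script as before...
  only:
    - master
```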
To test, create a new develop branch. Add an exclamation point after world in urls.py:
Commit and push your changes to GitLab. Ensure only the build stage runs. Once the build passes, open a merge request against the master branch and merge the changes. This will trigger a new pipeline with both stages -- build and deploy. Ensure the deploy works as expected:
This tutorial looked at how to configure GitLab CI to continuously deploy a Django and Docker application to AWS EC2.
At this point, you'll probably want to use a domain name rather than an IP address. To do so, you'll need to:
Looking for a challenge? To automate this entire process, so you don't need to manually provision a new instance and install Docker on it each time, set up Elastic Container Service. For more on this, review the Deploying a Flask and React Microservice to AWS ECS course and the Deploying Django to AWS ECS with Terraform tutorial.
You can find the final code in the django-gitlab-ec2 repo.