Infrastructure As Code: Automated Deployments With Ansible
Automate, automate, automate.
Now that our server is up and running, we want to install our app on it, using our Docker image and container.
We could do this manually, but a key insight of modern software engineering is that small, frequent deployments are a must.[1] Frequent deployments rely on automation, so we’ll use an infrastructure automation tool called Ansible.
Automation is also key to making sure our tests give us true confidence over our deployments. If we go to the trouble of building a staging server,[2] we want to make sure that it’s as similar as possible to the production environment. By automating the way we deploy, and using the same automation for staging and prod, we give ourselves much more confidence.
The buzzword for automating your deployments these days is "Infrastructure as Code" (IaC).
Why not ping me a note once your site is live on the web, and send me the URL? It always gives me a warm and fuzzy feeling… Email me at [email protected]. |
A First Cut of an Ansible Playbook for Deployment
Let’s start using Ansible a little more seriously. We’re not going to jump all the way to the end though! Baby steps, as always. Let’s see if we can get it to run a simple "hello world" Docker container on our server.
Let’s delete the old content which had the "ping", and replace it with something like this:
---
- hosts: all

  tasks:
    - name: Install docker (1)
      ansible.builtin.apt: (2)
        name: docker.io (3)
        state: latest
        update_cache: true
      become: true

    - name: Run test container
      community.docker.docker_container:
        name: testcontainer
        state: started
        image: busybox
        command: echo hello world
      become: true
1 | An Ansible playbook is a series of "tasks"; we now have more than one.
In that sense it’s still quite sequential and procedural,
but the individual tasks themselves are quite declarative.
Each one usually has a human-readable name attribute. |
2 | Each task uses an Ansible "module" to do its work.
This one uses the builtin.apt module which provides a wrapper
around the apt Debian & Ubuntu package management tool. |
3 | Each module then provides a bunch of parameters which control how it works.
Here we specify the name of the package we want to install ("docker.io"[3])
and tell it to update its cache first, which is required on a fresh server. |
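In other words, that first task is roughly the Ansible equivalent of logging into the server and running the apt commands by hand, something like this (shown only for comparison; you don’t need to run it yourself):

elspeth@server$ sudo apt update
elspeth@server$ sudo apt install docker.io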
Most Ansible modules have pretty good documentation;
check out the builtin.apt
one, for example.
I often skip straight to the
Examples section.
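Incidentally, before running a playbook against a real server, it can be worth a quick sanity check of the YAML; ansible-playbook has a --syntax-check flag for exactly that:

$ ansible-playbook --syntax-check infra/deploy-playbook.yaml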
Let’s re-run our deployment command, ansible-playbook,
with the same flags we used in the last chapter.
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv
ansible-playbook [core 2.16.3]
  config file = None
  [...]
No config file found; using defaults
BECOME password:
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: deploy-playbook.yaml **********************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] ********************************************************************

TASK [Gathering Facts] ********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:2
ok: [staging.ottg.co.uk]

PLAYBOOK: deploy-playbook.yaml **********************************************
1 plays in infra/deploy-playbook.yaml

TASK [Install docker] *********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:6
ok: [staging.ottg.co.uk] => {"cache_update_time": 1708981325, "cache_updated": true, "changed": false}

TASK [Install docker] *********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:6
changed: [staging.ottg.co.uk] => {"cache_update_time": [...] "cache_updated": true, "changed": true,
"stderr": "", "stderr_lines": [], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading [...]
information...\nThe following additional packages will be installed:\n wmdocker\nThe following NEW packages
will be installed:\n  docker wmdocker\n0

TASK [Run test container] *****************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:13
changed: [staging.ottg.co.uk] => {"changed": true, "container": {"AppArmorProfile": "docker-default",
"Args": ["hello", "world"], "Config": [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I don’t know about you, but whenever I make a terminal spew out a stream of output, I like to make little brrp brrp brrp noises, a bit like the computer Mother, in Alien. Ansible scripts are particularly satisfying in this regard.
You may need to use the --ask-become-pass argument to ansible-playbook
if you get an error "Missing sudo password".
|
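In that case, the full command would look something like this (Ansible will then prompt you for your sudo password before running the tasks):

$ ansible-playbook --user=elspeth --ask-become-pass -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv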
SSHing Into the Server and Viewing Container Logs
Ansible looks like it’s doing its job, but let’s practice our SSH skills, and do some good old-fashioned sysadmin. Let’s log into our server and see if we can see any actual evidence that our container has run.
We use docker ps -a
to view all containers, including old/stopped ones,
and we can use docker logs
to view the output from one of them:
$ ssh [email protected] Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-67-generic x86_64) [...] elspeth@server$ sudo docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3a2e600fbe77 busybox "echo hello world" 2 days ago Exited (0) 10 minutes ago testcontainer elspeth@server:$ sudo docker logs testcontainer hello world
Look out for that elspeth@server
in the command-line listings in this chapter.
It indicates commands that must be run on the server,
as opposed to commands you run on your own PC.
|
SSHing in to check things worked is a key server debugging skill! It’s something we want to practice on our staging server, because ideally we’ll want to avoid doing it on production machines.
Allowing Rootless Docker Access
Having to use sudo or become=True to run Docker commands is a bit of a pain.
If we add our user to the docker group,
we can run Docker commands without sudo.
    - name: Install docker
      [...]

    - name: Add our user to the docker group, so we don't need sudo/become
      ansible.builtin.user: (1)
        name: '{{ ansible_user }}' (2)
        groups: docker
      become: true

    - name: Reset ssh connection to allow the user/group change to take effect
      ansible.builtin.meta: reset_connection (3)

    - name: Run test container (4)
      [...]
1 | We use the builtin.user module to add our user to the docker group. |
2 | The {{ ... }} syntax allows us to interpolate some variables into
our config file, much like in a Django template.
ansible_user will be the user we’re using to connect to the server,
ie "elspeth" in my case. |
3 | As per the task name, we need this for the user/group change to take effect.
Strictly speaking this is only needed the first time we run the script;
if you’ve got some time, you can read up on how to
make tasks conditional
and configure it to only run if the builtin.user task has actually made a change
(there’s a rough sketch of what that might look like just after this list). |
4 | We can remove the become: true from this task and it should still work. |
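If you’re curious, here’s a minimal sketch of the register/when pattern that conditional tasks use. It just prints a message when the group membership has actually changed; wiring it up to the reset_connection task is left as the exercise suggested above:

- name: Add our user to the docker group, so we don't need sudo/become
  ansible.builtin.user:
    name: '{{ ansible_user }}'
    groups: docker
  become: true
  register: docker_group_change  # remember whether this task reported "changed"

- name: Mention it if the group membership actually changed
  ansible.builtin.debug:
    msg: Group membership changed, the ssh connection needs resetting.
  when: docker_group_change is changed  # only runs if the task above changed something

Anyway, back to our simpler version.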
Let’s run that:
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv

PLAYBOOK: deploy-playbook.yaml ************************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] ********************************************************************

TASK [Gathering Facts] ********************************************************
[...]
ok: [staging.ottg.co.uk]

TASK [Install docker] *********************************************************
[...]
ok: [staging.ottg.co.uk] => {"cache_update_time": 1738767216, "cache_updated": false, "changed": false}

TASK [Add our user to the docker group, so we don't need sudo/become] *********
[...]
changed: [staging.ottg.co.uk] => {"append": false, "changed": true, [...] "", "group": 1000, "groups": "docker", [...]

TASK [Reset ssh connection to allow the user/group change to take effect] *****
[...]
META: reset connection

TASK [Run test container] *****************************************************
[...]
changed: [staging.ottg.co.uk] => {"changed": true, "container": [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And check that it worked:
elspeth@server$ docker ps -a  # no sudo yay!
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS                     PORTS     NAMES
bd3114e43f55   busybox   "echo hello world"   12 minutes ago   Exited (0) 6 seconds ago             testcontainer

elspeth@server$ docker logs testcontainer
hello world
hello world
Sure enough, we no longer need sudo,
and we can see a new version of the container just ran.
You know, that’s worthy of a commit!
$ git add infra/deploy-playbook.yaml
$ git commit -m "Made a start on an ansible playbook for deployment"
Let’s move on to trying to get our actual Docker container running on the server. As we go through, you’ll see that we run into very similar issues to the ones we’ve already worked through in the last couple of chapters:
- Configuration
- Networking
- And the database.
Getting Our Image onto the Server
Typically, you can "push" and "pull" container images to a "container registry" — Docker offers a public one called DockerHub, and organisations will often run private ones, hosted by cloud providers like AWS.
So your process of getting an image onto a server is usually:

- Push the image from your machine to the registry.
- Pull the image from the registry onto the server. Usually this step is implicit, in that you just specify the image name in the format registry-url/image-name:tag, and then docker run takes care of pulling down the image for you.
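Just for reference, the registry-based version of those two steps might look roughly like this in Ansible. This is only a sketch, assuming a hypothetical registry at registry.example.com and an account called myaccount:

- name: Push image to registry  # runs on your own machine
  community.docker.docker_image:
    name: superlists
    repository: registry.example.com/myaccount/superlists:latest
    push: true
    source: local
  delegate_to: 127.0.0.1

- name: Run container from the registry image  # runs on the server
  community.docker.docker_container:
    name: superlists
    image: registry.example.com/myaccount/superlists:latest
    pull: true  # pull the image from the registry if it is not there already
    state: started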
But I don’t want to ask you to create a DockerHub account, or implicitly endorse any particular provider, so we’re going to "simulate" this process by doing it manually.
It turns out you can "export" a container image to an archive format, manually copy that to the server, and then re-import it. In Ansible config, it looks like this:
    - name: Install docker
      [...]

    - name: Add our user to the docker group, so we don't need sudo/become
      [...]

    - name: Reset ssh connection to allow the user/group change to take effect
      [...]

    - name: Export container image locally (1)
      community.docker.docker_image:
        name: superlists
        archive_path: /tmp/superlists-img.tar
        source: local
      delegate_to: 127.0.0.1

    - name: Upload image to server (2)
      ansible.builtin.copy:
        src: /tmp/superlists-img.tar
        dest: /tmp/superlists-img.tar

    - name: Import container image on server (3)
      community.docker.docker_image:
        name: superlists
        load_path: /tmp/superlists-img.tar
        source: load
        force_source: true (4)
        state: present

    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists (5)
        state: started
        recreate: true (6)
If you see an error saying "Error connecting: Error while fetching server API version",
it may be because the Python Docker SDK can’t find your docker daemon.
Try restarting Docker Desktop if you’re on Windows or a Mac.
If you’re not using the standard docker engine, with Colima for example,
you may need to set the DOCKER_HOST environment variable
(DOCKER_HOST=unix:///$HOME/.colima/default/docker.sock )
or use a symlink to point to the right place.
See the
Colima FAQ.
|
1 | We export the docker image to a .tar file by using the docker_image module
with the archive_path set to a temp file, and setting the delegate_to attribute
to say we’re running that command on our local machine rather than the server. |
2 | We then use the copy module to upload the tarfile to the server. |
3 | And we use docker_image again, but this time with load_path and source: load,
to import the image back on the server. |
4 | The force_source flag tells the server to attempt the import,
even if an image of that name already exists. |
5 | We change our "Run container" task to use the superlists image,
and we’ll use that as the container name too. |
6 | Similarly to source: load , the recreate argument tells Ansible
to recreate the container even if there’s already one running
whose name and image match "superlists". |
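If it helps to demystify things, those three tasks are roughly the Ansible equivalent of doing the following by hand (no need to actually run this yourself):

$ docker save -o /tmp/superlists-img.tar superlists
$ scp /tmp/superlists-img.tar [email protected]:/tmp/
elspeth@server$ docker load -i /tmp/superlists-img.tar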
Let’s run the new version of our playbook, and see if we can upload a docker image to our server and get it running:
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv
[...]

PLAYBOOK: deploy-playbook.yaml **********************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] ********************************************************************

TASK [Gathering Facts] ********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:2
ok: [staging.ottg.co.uk]

TASK [Install docker] *********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:5
ok: [staging.ottg.co.uk] => {"cache_update_time": 1708982855, "cache_updated": false, "changed": false}

TASK [Add our user to the docker group, so we don't need sudo/become] *********
task path: ...goat-book/infra/deploy-playbook.yaml:11
ok: [staging.ottg.co.uk] => {"append": false, "changed": false, [...]

TASK [Reset ssh connection to allow the user/group change to take effect] *****
task path: ...goat-book/infra/deploy-playbook.yaml:17
META: reset connection

TASK [Export container image locally] *****************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:20
changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Archived image superlists:latest to
/tmp/superlists-img.tar, overwriting archive with image
11ff3b83873f0fea93f8ed01bb4bf8b3a02afa15637ce45d71eca1fe98beab34 named superlists:latest"],
"changed": true, "image": {"Architecture": "amd64", [...]

TASK [Upload image to server] *************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:27
changed: [staging.ottg.co.uk] => {"changed": true, "checksum": "313602fc0c056c9255eec52e38283522745b612c",
"dest": "/tmp/superlists-img.tar", [...]

TASK [Import container image on server] ***************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:32
changed: [staging.ottg.co.uk] => {"actions": ["Loaded image superlists:latest from /tmp/superlists-img.tar"],
"changed": true, "image": {"Architecture": "amd64", "Author": "", "Comment": "buildkit.dockerfile.v0",
"Config": [...]

TASK [Run container] **********************************************************
task path: ...goat-book/superlists/infra/deploy-playbook.yaml:40
changed: [staging.ottg.co.uk] => {"changed": true, "container": {"AppArmorProfile": "docker-default",
"Args": ["--bind", ":8888", "superlists.wsgi:application"], "Config": {"AttachStderr": true,
"AttachStdin": false, "AttachStdout": true, "Cmd": ["gunicorn", "--bind", ":8888",
"superlists.wsgi:application"], "Domainname": "", "Entrypoint": null, "Env": [...]

staging.ottg.co.uk : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
For completeness, let’s also add a step to explicitly build the image locally.
This means we don’t have a dependency on having run docker build
locally.
    - name: Reset ssh connection to allow the user/group change to take effect
      [...]

    - name: Build container image locally
      community.docker.docker_image:
        name: superlists
        source: build
        state: present
        build:
          path: ..
          platform: linux/amd64 (1)
        force_source: true
      delegate_to: 127.0.0.1

    - name: Export container image locally
      [...]
1 | I needed this platform attribute to work around an issue
with compatibility between Apple’s new ARM-based chips and our server’s
x86/amd64 architecture.
You could also use this platform: to cross-build docker images
for a Raspberry Pi from a regular PC, or vice versa.
It does no harm in any case. |
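If you were building the image by hand, this would be the equivalent of passing a --platform flag to docker build, something like:

$ docker build --platform=linux/amd64 -t superlists .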
Now let’s see if it works!
$ ssh [email protected] Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-67-generic x86_64) [...] elspeth@server$ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3a2e600fbe77 busybox "echo hello world" 2 days ago Exited (0) 10 minutes ago testcontainer 129e36a42190 superlists "/bin/sh -c \'gunicor…" About a minute ago Exited (3) About a minute ago superlists
OK! We can see our "superlists" container is there now. The Status: Exited is a bit more worrying though.
Let’s take a look at the logs of our new container, to see if we can figure out what’s happened:
elspeth@server:$ docker logs superlists
[2024-02-26 22:19:15 +0000] [1] [INFO] Starting gunicorn 21.2.0
[2024-02-26 22:19:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8888 (1)
[2024-02-26 22:19:15 +0000] [1] [INFO] Using worker: sync
[...]
  File "/src/superlists/settings.py", line 22, in <module>
    SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
                 ~~~~^^^^^^^
  File "<frozen os>", line 685, in __getitem__
KeyError: 'DJANGO_SECRET_KEY'
[2024-02-26 22:19:15 +0000] [7] [INFO] Worker exiting (pid: 7)
[2024-02-26 22:19:15 +0000] [1] [ERROR] Worker (pid:7) exited with code 3
[2024-02-26 22:19:15 +0000] [1] [ERROR] Shutting down: Master
[2024-02-26 22:19:15 +0000] [1] [ERROR] Reason: Worker failed to boot.
Oh whoops, it can’t find the DJANGO_SECRET_KEY
environment variable.
We need to set those environment variables on the server too.
Using an env File to Store Our Environment Variables
When we run our container manually locally,
we can pass in environment variables with the -e
flag.
But we don’t want to hard-code secrets like SECRET_KEY into our Ansible files
and commit them to our repo!
Instead, we can use Ansible to automate the creation of a secret key, and then save it to a file on the server, where it will be relatively secure (better than saving it to version control and pushing it to GitHub in any case!)
We can use a so-called "env file" to store environment variables.
Env files are essentially a list of key-value pairs using shell syntax,
a bit like you’d use with export.
One extra subtlety is that we want to vary the actual contents of the env file, depending on where we’re deploying to. Each server should get its own unique secret key, and we want different config for staging and prod, for example.
So, just as we inject variables into our html templates in Django, we can use a templating language called "jinja2" to have variables in our env file. It’s a common tool in Ansible scripts, and the syntax is very similar to Django’s.
Here’s what our template for the env file will look like:
DJANGO_DEBUG_FALSE=1
DJANGO_SECRET_KEY={{ secret_key }}
DJANGO_ALLOWED_HOST={{ host }}
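Once that template has been rendered on the staging server, the resulting file should end up looking something like this (with your own randomly generated key; this one is made up):

DJANGO_DEBUG_FALSE=1
DJANGO_SECRET_KEY=abcDEFghiJKLmnoPQRstuVWXyzABCdef
DJANGO_ALLOWED_HOST=staging.ottg.co.uk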
And here’s how we use it in the provisioning script:
    - name: Import container image on server
      [...]

    - name: Ensure .env file exists
      ansible.builtin.template: (1)
        src: env.j2
        dest: ~/superlists.env
        force: false  # do not recreate file if it already exists. (2)
      vars: (3)
        host: "{{ inventory_hostname }}" (4)
        secret_key: "{{ lookup('password', '/dev/null length=32 chars=ascii_letters') }}" (5)

    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
        env_file: ~/superlists.env (6)
1 | We use ansible.builtin.template to specify the local template file to use (src),
and the destination (dest) on the server. |
2 | force: false means we will only write the file once.
So after the first time we generate our secret key, it won’t change. |
3 | The vars section defines the variables we’ll inject into our template. |
4 | Here we use a built-in Ansible variable called inventory_hostname.
This variable would actually be available in the template already,
but I’m renaming it for clarity. |
5 | This lookup('password') thing I copy-pasted from StackOverflow.
Come on, there’s no shame in that. |
6 | Here’s where Ansible tells Docker to use our env file when it runs our container. |
Using an env file to store secrets is definitely better than committing it to version control, but it’s maybe not the state of the art either. You’ll probably come across more advanced alternatives from various cloud providers, or Hashicorp’s Vault tool. |
Let’s run the latest version of our playbook and see how our tests get on:
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -v
[...]

PLAYBOOK: deploy-playbook.yaml **********************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] ********************************************************************

TASK [Gathering Facts] ********************************************************
ok: [staging.ottg.co.uk]

TASK [Install docker] *********************************************************
ok: [staging.ottg.co.uk] => {"cache_update_time": 1709136057, "cache_updated": false, "changed": false}

TASK [Build container image locally] ******************************************
changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Built image [...]

TASK [Export container image locally] *****************************************
changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Archived image [...]

TASK [Upload image to server] *************************************************
changed: [staging.ottg.co.uk] => {"changed": true, [...]

TASK [Import container image on server] ***************************************
changed: [staging.ottg.co.uk] => {"actions": ["Loaded image [...]

TASK [Ensure .env file exists] ************************************************
changed: [staging.ottg.co.uk] => {"changed": true, [...]

TASK [Run container] **********************************************************
changed: [staging.ottg.co.uk] => {"changed": true, "container": [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk : ok=8 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Looks good! What do our tests think?
More Debugging
We run our tests as usual and run into a new problem:
$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
[...]
selenium.common.exceptions.WebDriverException: Message: Reached error page:
about:neterror?e=connectionFailure&u=http%3A//staging.ottg.co.uk/[...]
That neterror
makes me think it’s another networking problem.
If your domain provider puts up a temporary holding page, you may get a 404 rather than a connection error at this point, and the traceback might have NoSuchElementException instead. |
Let’s try our standard debugging technique, of using curl
both locally and then from inside the container on the server.
First, on our own machine:
$ curl -iv staging.ottg.co.uk
[...]
curl: (7) Failed to connect to staging.ottg.co.uk port 80 after 25 ms: Couldn't connect to server
Similarly, depending on your domain/hosting provider, you may see "Host not found" here instead. |
Now let’s ssh in to our server and take a look at the docker logs:
elspeth@server$ docker logs superlists
[2024-02-28 22:14:43 +0000] [7] [INFO] Starting gunicorn 21.2.0
[2024-02-28 22:14:43 +0000] [7] [INFO] Listening at: http://0.0.0.0:8888 (7)
[2024-02-28 22:14:43 +0000] [7] [INFO] Using worker: sync
[2024-02-28 22:14:43 +0000] [8] [INFO] Booting worker with pid: 8
No errors there. Let’s try our curl:
elspeth@server$ curl -iv localhost
*   Trying 127.0.0.1:80...
* connect to 127.0.0.1 port 80 failed: Connection refused
*   Trying ::1:80...
* connect to ::1 port 80 failed: Connection refused
* Failed to connect to localhost port 80 after 0 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 80 after 0 ms: Connection refused
Hmm, curl fails on the server too.
But all this talk of port 80, both locally and on the server,
might be giving us a clue.
Let’s check docker ps:
elspeth@server:$ docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS         PORTS     NAMES
1dd87cbfa874   superlists   "/bin/sh -c 'gunicor…"   9 minutes ago   Up 9 minutes             superlists
This might be ringing a bell now—we forgot the ports.
We want to map port 8888 inside the container to port 80 (the default web/HTTP port) on the server:
    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
        env_file: ~/superlists.env
        ports: 80:8888
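Once we’ve re-run the playbook with that change, a quick curl from our own machine makes a nice sanity check. We should now get some kind of HTTP response back, rather than a connection failure:

$ curl -i staging.ottg.co.uk  # should get an HTTP response now, not "Failed to connect"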
And re-running the FTs gets us to:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="id_list_table"]; [...]
Mounting the Database on the Server and Running Migrations
Taking a look at the logs from the server, we can see that the database is not initialised:
$ ssh elspeth@server docker logs superlists
[...]
django.db.utils.OperationalError: no such table: lists_list
We need to mount the db.sqlite3
file from the filesystem outside the container,
just like we did in local dev, and we need to run migrations each time we deploy too.
Here’s how to do that in our playbook:
    - name: Ensure db.sqlite3 file exists outside container
      ansible.builtin.file:
        path: /home/elspeth/db.sqlite3
        state: touch (1)

    - name: Run container
      community.docker.docker_container:
        name: superlists
        image: superlists
        state: started
        recreate: true
        env_file: ~/superlists.env
        mounts: (2)
          - type: bind
            source: /home/elspeth/db.sqlite3
            target: /src/db.sqlite3
        ports: 80:8888

    - name: Run migration inside container
      community.docker.docker_container_exec: (3)
        container: superlists
        command: ./manage.py migrate
1 | We use file with state=touch to make sure a placeholder file exists
before we try to mount it in. |
2 | Here is the mounts config, which works a lot like the --mount flag to
docker run. |
3 | And we use the API for docker exec to run the migration command inside
the container. |
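Again, for the curious, that last task is more or less the automated equivalent of ssh’ing in and running the migration yourself:

elspeth@server$ docker exec superlists ./manage.py migrate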
Let’s give that playbook a run and…
$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -v
[...]

TASK [Run migration inside container] *****************************************
changed: [staging.ottg.co.uk] => {"changed": true, "rc": 0, "stderr": "", "stderr_lines": [],
"stdout": "Operations to perform:\n  Apply all migrations: auth, contenttypes, lists,
sessions\nRunning migrations:\n  Applying contenttypes.0001_initial... OK\n  Applying
contenttypes.0002_remove_content_type_name... OK\n  Applying auth.0001_initial... OK\n  Applying
auth.0002_alter_permission_name_max_length... OK\n  Applying [...]

PLAY RECAP ********************************************************************
staging.ottg.co.uk : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It workssss
Hooray
$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
Found 3 test(s).
[...]
...
 ---------------------------------------------------------------------
Ran 3 tests in 13.537s

OK
Deploying to Prod
So, let’s try using it for our live site!
$ ansible-playbook --user=elspeth -i www.ottg.co.uk, infra/deploy-playbook.yaml -vv
[...]
Done.
Disconnecting from [email protected]... done.
Brrp brrp brpp. Looking good? Go take a click around your live site!
Git Tag the Release
One final bit of admin. In order to preserve a historical marker, we’ll use Git tags to mark the state of the codebase that reflects what’s currently live on the server:
$ git tag LIVE
$ export TAG=$(date +DEPLOYED-%F/%H%M)  # this generates a timestamp
$ echo $TAG  # should show "DEPLOYED-" and then the timestamp
$ git tag $TAG
$ git push origin LIVE $TAG  # pushes the tags up to GitHub
Now it’s easy, at any time, to check what the difference is between our current codebase and what’s live on the servers. This will come in useful in a few chapters, when we look at database migrations. Have a look at the tag in the history:
$ git log --graph --oneline --decorate
[...]
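Later on, whenever you want to know exactly what’s changed since the last deploy, you can compare against the LIVE tag:

$ git diff LIVE  # code changes since the last deploy
$ git log LIVE..HEAD --oneline  # commits since the last deploy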
Once again, this use of git tags isn’t meant to be the One True Way. We just need some sort of way of keeping track of what was deployed when. |
Tell everyone!
You now have a live website! Tell all your friends! Tell your mum, if no one else is interested! Or, tell me! I’m always delighted to see a new reader’s site! [email protected]
In the next chapter, it’s back to coding again.
Further Reading
There’s no such thing as the One True Way in deployment; I’ve tried to set you off on a reasonably sane path, but there are plenty of things you could do differently, and lots, lots more to learn besides. Here are some resources I used for inspiration:
- The original 12-factor App manifesto from the Heroku team
- How to Write Deployment-Friendly Apps by Hynek Schlawack
- The deployment chapter of Two Scoops of Django by Dan Greenfeld and Audrey Roy