
Getting to a Production-Ready Deployment

Our deployment is working fine but it’s not production-ready. Let’s try to get it there, using the tests to guide us.

In a way we’re applying the Red-Green-Refactor cycle to our server deployment. Our hacky deployment got us to Green, and now we’re going to Refactor, working incrementally (just as we would while coding), trying to move from working state to working state, and using the FTs to detect any regressions.

What We Need to Do

What’s wrong with our hacky deployment? A few things: first, we need to host our app on the "normal" port 80 so that people can access it using a regular URL.

Perhaps more importantly, we shouldn’t use the Django dev server for production; it’s not designed for real-life workloads. Instead, we’ll use the popular combination of the Nginx web server and the Gunicorn Python/WSGI server.

Several settings in settings.py are currently unacceptable too: DEBUG=True is strongly recommended against for production, and we’ll want to fix ALLOWED_HOSTS and set a unique SECRET_KEY too.

Finally, we don’t want to have to SSH in to our server to actually start the site. Instead, we’ll write a Systemd config file so that it starts up automatically whenever the server (re)boots.

Let’s go through and see if we can fix each of these things one by one.

Switching to Nginx


We’ll need a real web server, and all the cool kids are using Nginx these days, so we will too. Having fought with Apache for many years, I can tell you it’s a blessed relief in terms of the readability of its config files, if nothing else!

Installing Nginx on my server was a matter of doing an apt install, and I could then see the default Nginx "Hello World" screen:

elspeth@server:$ sudo apt install nginx
elspeth@server:$ sudo systemctl start nginx

Now you should be able to go to the normal port 80 URL for your server and see the "Welcome to nginx" page, as in Nginx—​it works!.

If you don’t see it, it may be because your firewall does not open port 80 to the world. On AWS, for example, you may need to configure the "security group" for your server to open port 80.
The default 'Welcome to nginx!' page
Figure 1. Nginx—​it works!

The FTs Now Fail, But Show Nginx Is Running

We can also confirm that if we run the FT without specifying port 8000, we see them fail again—​one of them in particular should now mention Nginx:

$ python manage.py test functional_tests
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
element: [id="id_new_item"]
AssertionError: 'To-Do' not found in 'Welcome to nginx!'

Next we’ll configure the Nginx web server to talk to Django.

Simple Nginx Configuration

We create an Nginx config file to tell it to send requests for our staging site along to Django. A minimal config looks like this:

server: /etc/nginx/sites-available/
server {
    listen 80;

    location / {
        proxy_pass http://localhost:8000;
    }
}

This config says it will only listen for our staging domain, and will "proxy" all requests to the local port 8000 where it expects to find Django waiting to respond.

I saved this to a file named after our site’s domain, inside the /etc/nginx/sites-available folder.

Not sure how to edit a file on the server? There’s always vi, which I’ll keep encouraging you to learn a bit of, but perhaps today is already too full of new things. Try the relatively beginner-friendly nano instead. Note you’ll also need to use sudo because the file is in a system folder.

We then add it to the enabled sites for the server by creating a symlink to it:

# reset our env var (if necessary; use your own staging domain)
elspeth@server:$ export SITENAME=staging.example.com

elspeth@server:$ cd /etc/nginx/sites-enabled/
elspeth@server:$ sudo ln -s /etc/nginx/sites-available/$SITENAME $SITENAME

# check our symlink has worked:
elspeth@server:$ readlink -f $SITENAME

That’s the Debian/Ubuntu preferred way of saving Nginx configurations—​the real config file in sites-available, and a symlink in sites-enabled; the idea is that it makes it easier to switch sites on or off.

We also may as well remove the default "Welcome to nginx" config, to avoid any confusion:

elspeth@server:$ sudo rm /etc/nginx/sites-enabled/default

And now to test it. First we reload Nginx, then we restart our dev server:

elspeth@server:$ sudo systemctl reload nginx
elspeth@server:$ cd ~/sites/$SITENAME
elspeth@server:$ ./virtualenv/bin/python manage.py runserver 8000
If you ever find that Nginx isn’t behaving as expected, try the command sudo nginx -t, which does a config test and will warn you of any problems in your configuration files.

And now we can try our FTs without the port 8000:

$ ./manage.py test functional_tests --failfast

Ran 3 tests in 10.718s

OK
Hooray! Back to a working state.

I also had to edit /etc/nginx/nginx.conf and uncomment a line saying server_names_hash_bucket_size 64; to get my long domain name to work. You may not have this problem; Nginx will warn you when you do a reload if it has any trouble with its config files.
Tips on Debugging Nginx

Deployments are tricky! If ever things don’t go exactly as expected, here are a few tips and things to look out for, particularly around Nginx.

  • I’m sure you already have, but double-check that each file is exactly where it should be and has the right contents—​a single stray character can make all the difference.

  • Nginx error logs go into /var/log/nginx/error.log.

  • You can ask Nginx to "check" its config using the -t flag: nginx -t

  • Make sure your browser isn’t caching an out-of-date response. Use Ctrl-Refresh, or start a new private browser window.

  • This may be clutching at straws, but I’ve sometimes seen inexplicable behaviour on the server that’s only been resolved when I fully restarted it with a sudo reboot.

If you ever get completely stuck, there’s always the option of blowing away your server and starting again from scratch! It should go faster the second time…​

Switching to Gunicorn

Do you know why the Django mascot is a pony? The story is that Django comes with so many things you want: an ORM, all sorts of middleware, the admin site…​ "What else do you want, a pony?" Well, Gunicorn stands for "Green Unicorn", which I guess is what you’d want next if you already had a pony…​

elspeth@server:$ ./virtualenv/bin/pip install gunicorn

Gunicorn will need to know a path to a WSGI application, which is usually a function called application. Django provides one in superlists/wsgi.py:

elspeth@server:$ ./virtualenv/bin/gunicorn superlists.wsgi:application
2013-05-27 16:22:01 [10592] [INFO] Starting gunicorn
2013-05-27 16:22:01 [10592] [INFO] Listening at: http://127.0.0.1:8000 (10592)
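If you’ve never looked inside one, a WSGI application is just a callable with a particular signature. Here’s a minimal hand-rolled sketch, purely for illustration—Django builds a full-featured one for you in wsgi.py via get_wsgi_application():

```python
# A minimal WSGI application: a callable that takes the request
# environ dict and a start_response callback, and returns an
# iterable of bytes.  Gunicorn imports the module on the left of
# the colon ("superlists.wsgi") and calls the attribute on the
# right ("application") for every request.
def application(environ, start_response):
    body = f"Hello from {environ['PATH_INFO']}".encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

That’s the whole contract: Gunicorn handles the sockets and worker processes, and hands each request to this callable.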

But if we run the functional tests, once again you’ll see that they are warning us of a problem. The test for adding list items passes happily, but the test for layout + styling fails. Good job, tests!

$ python manage.py test functional_tests
AssertionError: 117.0 != 512 within 10 delta
FAILED (failures=1)

And indeed, if you take a look at the site, you’ll find the CSS is all broken, as in Broken CSS.

The reason that the CSS is broken is that although the Django dev server will serve static files magically for you, Gunicorn doesn’t. Now is the time to tell Nginx to do it instead.

The site is up, but CSS is broken
Figure 2. Broken CSS

One step forward, one step backward, but once again we’ve identified the problem nice and early. Moving on!

At this point if you see a "502 - Bad Gateway", it’s probably because you forgot to restart Gunicorn.

Getting Nginx to Serve Static Files

First we run collectstatic to copy all the static files to a folder where Nginx can find them:

elspeth@server:$ ./virtualenv/bin/python manage.py collectstatic --noinput
15 static files copied to '/home/elspeth/sites/'
elspeth@server:$ ls static/
base.css  bootstrap

Now we tell Nginx to start serving those static files for us, by adding a second location directive to the config:

server: /etc/nginx/sites-available/
server {
    listen 80;

    location /static {
        alias /home/elspeth/sites/;
    }

    location / {
        proxy_pass http://localhost:8000;
    }
}

Reload Nginx and restart Gunicorn…​

elspeth@server:$ sudo systemctl reload nginx
elspeth@server:$ ./virtualenv/bin/gunicorn superlists.wsgi:application

And if you take another manual look at your site, things should look much healthier. Let’s rerun our FTs:

$ python manage.py test functional_tests

Ran 3 tests in 10.718s

OK

Switching to Using Unix Sockets

When we want to serve both staging and live, we can’t have both servers trying to use port 8000. We could decide to allocate different ports, but that’s a bit arbitrary, and it would be dangerously easy to get it wrong and start the staging server on the live port, or vice versa.

A better solution is to use Unix domain sockets—​they’re like files on disk, but can be used by Nginx and Gunicorn to talk to each other. We’ll put our sockets in /tmp. Let’s change the proxy settings in Nginx:

server: /etc/nginx/sites-available/
server {
    listen 80;

    location /static {
        alias /home/elspeth/sites/;
    }

    location / {
        proxy_pass http://unix:/tmp/;
    }
}

Now we restart Gunicorn, but this time telling it to listen on a socket instead of on the default port:

elspeth@server:$ sudo systemctl reload nginx
elspeth@server:$ ./virtualenv/bin/gunicorn --bind \
    unix:/tmp/$SITENAME.socket superlists.wsgi:application
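If Unix domain sockets are unfamiliar, this little self-contained Python sketch shows the idea: the two ends of the conversation are addressed by a path on disk rather than a host and port. (The socket path and message here are made up for the demo.)

```python
import os
import socket
import tempfile
import threading

# A Unix domain socket is addressed by a filesystem path,
# not a host:port pair.
socket_path = os.path.join(tempfile.mkdtemp(), "demo.socket")
ready = threading.Event()

def serve_one_request():
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
        server.bind(socket_path)  # creates the socket "file" on disk
        server.listen(1)
        ready.set()  # tell the client it's now safe to connect
        conn, _ = server.accept()
        with conn:
            conn.sendall(b"hello over a unix socket")

thread = threading.Thread(target=serve_one_request)
thread.start()
ready.wait()

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(socket_path)
    chunks = []
    while data := client.recv(1024):  # read until the server closes
        chunks.append(data)
    reply = b"".join(chunks)
thread.join()
print(reply)
```

In our setup, Nginx plays the client role and Gunicorn plays the server role, with the socket file in /tmp acting as their private meeting point.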

And again, we rerun the functional tests to make sure things still pass:

$ python manage.py test functional_tests

Hooray, a change that went without a hitch for once![1] Moving on…​

Using Environment Variables to Adjust Settings for Production

We know there are several things in settings.py that we want to change for production:

  • ALLOWED_HOSTS is currently set to "*", which isn’t secure. We want it to be set to match only the site we’re supposed to be serving.

  • DEBUG mode is all very well for hacking about on your own server, but leaving those pages full of tracebacks available to the world isn’t secure.

  • SECRET_KEY is used by Django for some of its crypto—​things like cookies and CSRF protection. It’s good practice to make sure the secret key on the server is different from the one in your source code repo, because that code might be visible to strangers. We’ll want to generate a new, random one, but then keep it the same for the foreseeable future (find out more in the Django docs).

Development, staging and live sites always have some differences in their configuration. Environment variables are a good place to store those different settings. See "the 12-factor app".[2]

Here’s one way to make it work:

superlists/settings.py (ch08l004)
if 'DJANGO_DEBUG_FALSE' in os.environ:  (1)
    DEBUG = False
    SECRET_KEY = os.environ['DJANGO_SECRET_KEY']  (2)
    ALLOWED_HOSTS = [os.environ['SITENAME']]  (2)
else:
    DEBUG = True  (3)
    SECRET_KEY = 'insecure-key-for-dev'
1 We say we’ll use an environment variable called DJANGO_DEBUG_FALSE to switch debug mode off, and in effect require production settings (it doesn’t matter what we set it to, just that it’s there).
2 And now we say that, if debug mode is off, we require the SECRET_KEY and ALLOWED_HOSTS to be set by two more environment variables (one of which can be the $SITENAME variable we’ve been using at the command-line so far).
3 Otherwise we fall back to the insecure debug-mode settings that are useful for dev.

There are other ways you might set up the logic, making various variables optional, but I think this gives us a little bit of protection against accidentally forgetting to set one. The end result is that you don’t need to set any of them for dev, but production needs all three, and it will error if any are missing.

Better to fail hard than allow a typo in an environment variable name to leave you running with insecure settings.
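That fail-hard behaviour comes free with the square-bracket lookups on os.environ: unlike os.environ.get(), a direct lookup of a missing variable raises KeyError, so a misconfigured server dies loudly at startup instead of limping along insecurely. A tiny demonstration (the DJANGO_SECRET_KEY_DEMO variable name is made up for the demo):

```python
import os

# Make sure our demo variable really is unset.
os.environ.pop("DJANGO_SECRET_KEY_DEMO", None)

try:
    secret = os.environ["DJANGO_SECRET_KEY_DEMO"]  # fails hard if unset
    outcome = "started with secret " + secret
except KeyError:
    outcome = "refused to start: missing environment variable"

print(outcome)  # refused to start: missing environment variable
```

Because settings.py is imported when Gunicorn starts, the KeyError surfaces immediately in the server logs, long before any user request arrives.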

Let’s do our usual dance of committing locally, and pushing to GitHub:

$ git commit -am "use env vars for prod settings DEBUG, ALLOWED_HOSTS, SECRET_KEY"
$ git push

Then pull it down on the server, export a couple of environment variables, and restart Gunicorn:

elspeth@server:$ git pull
elspeth@server:$ export DJANGO_DEBUG_FALSE=y DJANGO_SECRET_KEY=abc123
# we'll set the secret to something more secure later!
elspeth@server:$ ./virtualenv/bin/gunicorn --bind \
    unix:/tmp/$SITENAME.socket superlists.wsgi:application

And use a test run to reassure ourselves that things still work…​

$ ./manage.py test functional_tests --failfast
AssertionError: 'To-Do' not found in ''

Oops. Let’s take a look manually (see An ugly 400 error).

An unfriendly page showing 400 Bad Request
Figure 3. An ugly 400 error

Essential Googling the Error Message

Something’s gone wrong. But once again, by running our FTs frequently, we’re able to identify the problem early, before we’ve changed too many things. In this case, the only thing we’ve changed is settings.py. We changed three settings; which one might be at fault?

Let’s use the tried and tested "Googling the error message" technique (see An indispensable publication).

Cover of a fake O’Reilly book called Googling the Error Message
Figure 4. An indispensable publication

The very first link in my search results for "Django 400 Bad Request" suggests that a 400 error is usually to do with ALLOWED_HOSTS. In the last chapter we had a nice Django debug page saying "DisallowedHost error", but now, because we have DEBUG=False, we just get the minimal, unfriendly 400 page.

But what’s wrong with ALLOWED_HOSTS? After double-checking it for typos, we might do a little more Googling with some relevant keywords: Django ALLOWED_HOSTS Nginx. Once again, the first result gives us the clue we need.
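To make the ALLOWED_HOSTS check less of a black box, here’s a toy sketch of the kind of matching Django does against the incoming Host header (a simplified reimplementation for illustration, not Django’s actual code):

```python
def host_matches(host, allowed_hosts):
    # Toy version of Django's ALLOWED_HOSTS check: "*" matches any
    # host, a leading "." matches a domain and all its subdomains,
    # and anything else must match exactly (case-insensitively).
    host = host.lower()
    for pattern in allowed_hosts:
        pattern = pattern.lower()
        if pattern == "*":
            return True
        if pattern.startswith("."):
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False
```

If the hostname Django sees on a request isn’t matched by any entry, it rejects the request with a 400—which is exactly the symptom we’re staring at.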

Fixing ALLOWED_HOSTS with Nginx: passing on the Host header

The problem turns out to be that, by default, Nginx strips the Host header out of the requests it forwards, making them "look like" they came from localhost after all. We can tell it to pass on the original Host header by adding the proxy_set_header directive:

server: /etc/nginx/sites-available/
server {
    listen 80;

    location /static {
        alias /home/elspeth/sites/;
    }

    location / {
        proxy_pass http://unix:/tmp/;
        proxy_set_header Host $host;
    }
}

Reload Nginx once more:

elspeth@server:$ sudo systemctl reload nginx

And then we try our FTs again:

$ python manage.py test functional_tests

Phew. Back to working again.

Using a .env File to Store Our Environment Variables

Another little refactor. Setting environment variables manually in various shells is a pain, and it’d be nice to have them all available in a single place. The Python world (and other people out there too) seems to be standardising around using the convention of a file called .env in the project root.

First we add .env to our .gitignore—this file is going to be used for secrets, and we don’t ever want them ending up on GitHub:

$ echo .env >> .gitignore
$ git commit -am "gitignore .env file"
$ git push

Next let’s save our environment on the server:

elspeth@server:$ pwd
elspeth@server:$ echo DJANGO_DEBUG_FALSE=y >> .env
elspeth@server:$ echo SITENAME=$SITENAME >> .env
The way I’ve used the environment variables in settings.py means that the .env file is not required on your own machine, only in staging/production.
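Incidentally, "loading" a .env file is nothing magical. Here’s a minimal Python sketch of what it amounts to—a toy parser under my own load_dotenv name; real libraries like python-dotenv handle comments, quoting, and edge cases properly:

```python
import os

def load_dotenv(path=".env"):
    # Read simple KEY=value lines into the process environment,
    # skipping blanks, comments, and anything without an "=".
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()
```

The shell incantation we’ll use in a moment (set -a; source .env; set +a) achieves the same effect: every variable in the file ends up exported in the environment.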

Generating a secure SECRET_KEY

While we’re at it we’ll also generate a more secure secret key using a little Python one-liner.

elspeth@server:$ echo DJANGO_SECRET_KEY=$(
python3.7 -c"import random; print(''.join(random.SystemRandom().
choices('abcdefghijklmnopqrstuvwxyz0123456789', k=50)))"
) >> .env
elspeth@server:$ cat .env
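If you prefer, the same thing can be done with the standard library’s secrets module, which is designed for exactly this kind of job. An equivalent alternative to the one-liner above (the make_secret_key helper is my own name for it):

```python
import secrets
import string

def make_secret_key(length=50):
    # Draw each character from a cryptographically secure random
    # source -- the same idea as random.SystemRandom() above.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_secret_key())
```

Either way, the point is to use a cryptographically secure source of randomness, not the ordinary random module’s default generator.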

Now let’s check our env file works, and restart gunicorn:

elspeth@server:$ echo $DJANGO_DEBUG_FALSE-none
elspeth@server:$ set -a; source .env; set +a
elspeth@server:$ echo $DJANGO_DEBUG_FALSE-none
elspeth@server:$ ./virtualenv/bin/gunicorn --bind \
    unix:/tmp/$SITENAME.socket superlists.wsgi:application

And we rerun our FTs to check that, as far as they can tell, everything still works:

$ python manage.py test functional_tests

Excellent! That went without a hitch :)

I’ve shown the use of a .env file and manual extraction of environment variables, but there are plugins that do this stuff for you that are definitely worth investigating. Look into django-environ, django-dotenv, and Pipenv.

Using Systemd to Make Sure Gunicorn Starts on Boot

Our final step is to make sure that the server starts up Gunicorn automatically on boot, and reloads it automatically if it crashes. On Ubuntu, the way to do this is using Systemd.

Here’s what a Systemd config file looks like:

server: /etc/systemd/system/
[Unit]
Description=Gunicorn server for

[Service]
Restart=on-failure  (1)
User=elspeth  (2)
WorkingDirectory=/home/elspeth/sites/  (3)
EnvironmentFile=/home/elspeth/sites/  (4)

ExecStart=/home/elspeth/sites/ \
    --bind unix:/tmp/ \
    superlists.wsgi:application  (5)

[Install]
WantedBy=multi-user.target  (6)

Systemd is joyously simple to configure (especially if you’ve ever had the dubious pleasure of writing an init.d script), and is fairly self-explanatory.

1 Restart=on-failure will restart the process automatically if it crashes.
2 User=elspeth makes the process run as the "elspeth" user.
3 WorkingDirectory sets the current working directory.
4 EnvironmentFile points Systemd towards our .env file and tells it to load environment variables from there.
5 ExecStart is the actual process to execute. I’m using the \ line continuation characters to split the full command over multiple lines, for readability, but it could all go on one line.
6 WantedBy in the [Install] section is what tells Systemd we want this service to start on boot.

Systemd scripts live in /etc/systemd/system, and their names must end in .service.

Now we tell Systemd to start Gunicorn with the systemctl command:

# this command is necessary to tell Systemd to load our new config file
elspeth@server:$ sudo systemctl daemon-reload
# this command tells Systemd to always load our service on boot
elspeth@server:$ sudo systemctl enable
# this command actually starts our service
elspeth@server:$ sudo systemctl start

(You should find the systemctl command responds to tab completion, including of the service name, by the way.)

Now we can rerun the FTs to see that everything still works. You can even test that the site comes back up if you reboot the server!

$ python manage.py test functional_tests
More Debugging Tips and Commands

A few more places to look and things to try, now that we’ve introduced Gunicorn and Systemd into the mix, should things not go according to plan:

  • You can check the Systemd logs using sudo journalctl -u

  • You can ask Systemd to check the validity of your service configuration: systemd-analyze verify /path/to/my.service.

  • Remember to restart both services whenever you make changes.

  • If you make changes to the Systemd config file, you need to run daemon-reload before systemctl restart to see the effect of your changes.

Saving Our Changes: Adding Gunicorn to Our requirements.txt

Back in the local copy of your repo, we should add Gunicorn to the list of packages we need in our virtualenvs:

$ pip install gunicorn
$ pip freeze | grep gunicorn >> requirements.txt
$ git commit -am "Add gunicorn to virtualenv requirements"
$ git push
On Windows, at the time of writing, Gunicorn would pip install quite happily, but it wouldn’t actually work if you tried to use it. Thankfully we only ever run it on the server, so that’s not a problem. And, Windows support is being discussed…​

Thinking About Automating

Let’s recap our provisioning and deployment procedures:

Provisioning:

  1. Assume we have a user account and home folder

  2. add-apt-repository ppa:deadsnakes/ppa && apt update

  3. apt install nginx git python3.7 python3.7-venv

  4. Add Nginx config for virtual host

  5. Add Systemd job for Gunicorn (including unique SECRET_KEY)

Deployment:

  1. Create directory in ~/sites

  2. Pull down source code

  3. Create a virtualenv in the virtualenv folder

  4. pip install -r requirements.txt

  5. migrate for database

  6. collectstatic for static files

  7. Restart Gunicorn job

  8. Run FTs to check everything works

Assuming we’re not ready to entirely automate our provisioning process, how should we save the results of our investigation so far? I would say that the Nginx and Systemd config files should probably be saved somewhere, in a way that makes it easy to reuse them later. Let’s save them in a new subfolder in our repo.

Saving Templates for Our Provisioning Config Files

First, we create the subfolder:

$ mkdir deploy_tools

Here’s a generic template for our Nginx config:

deploy_tools/nginx.template.conf
server {
    listen 80;
    server_name DOMAIN;

    location /static {
        alias /home/elspeth/sites/DOMAIN/static;
    }

    location / {
        proxy_pass http://unix:/tmp/DOMAIN.socket;
        proxy_set_header Host $host;
    }
}

And here’s one for the Gunicorn Systemd service:

deploy_tools/gunicorn-systemd.template.service
[Unit]
Description=Gunicorn server for DOMAIN

[Service]
Restart=on-failure
User=elspeth
WorkingDirectory=/home/elspeth/sites/DOMAIN
EnvironmentFile=/home/elspeth/sites/DOMAIN/.env

ExecStart=/home/elspeth/sites/DOMAIN/virtualenv/bin/gunicorn \
    --bind unix:/tmp/DOMAIN.socket \
    superlists.wsgi:application

[Install]
WantedBy=multi-user.target

Now it’s easy for us to use those two files to generate a new site, by doing a find and replace on DOMAIN.
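That find and replace is simple enough to script. Here’s a throwaway sketch (the render_template name and usage are mine; actually copying the rendered output to the server is still up to you):

```python
from pathlib import Path

def render_template(template_path, domain):
    # Fill in a provisioning template by replacing every occurrence
    # of the DOMAIN placeholder with a real domain name.
    return Path(template_path).read_text().replace("DOMAIN", domain)

# e.g. render_template("deploy_tools/nginx.template.conf", "staging.example.com")
```

It’s deliberately dumb—plain string replacement—which is all the templates need, and a first hint of the automation we’ll build in the next chapter.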

For the rest, just keeping a few notes is OK. Why not keep them in a file in the repo too?

Provisioning a new site

## Required packages:

* nginx
* Python 3.7
* virtualenv + pip
* Git

eg, on Ubuntu:

    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt update
    sudo apt install nginx git python3.7 python3.7-venv

## Nginx Virtual Host config

* see nginx.template.conf
* replace DOMAIN with, e.g.,

## Systemd service

* see gunicorn-systemd.template.service
* replace DOMAIN with, e.g.,

## Folder structure:

Assume we have a user account at /home/username

└── sites
    ├── DOMAIN1
    │    ├── .env
    │    ├── db.sqlite3
    │    ├── etc
    │    ├── static
    │    └── virtualenv
    └── DOMAIN2
         ├── .env
         ├── db.sqlite3
         ├── etc

We can do a commit for those:

$ git add deploy_tools
$ git status # see three new files
$ git commit -m "Notes and template config files for provisioning"

Our source tree will now look something like this:

├── deploy_tools
│   ├── gunicorn-systemd.template.service
│   ├── nginx.template.conf
│   └──
├── functional_tests
│   ├── [...]
├── lists
│   ├──
│   ├──
│   ├── [...]
│   ├── static
│   │   ├── base.css
│   │   └── bootstrap
│   │       ├── [...]
│   ├── templates
│   │   ├── base.html
│   │   ├── [...]
│   ├──
│   ├──
│   └──
├── requirements.txt
├── static
│   ├── [...]
├── superlists
│   ├── [...]
└── virtualenv
    ├── [...]

Saving Our Progress

Being able to run our FTs against a staging server can be very reassuring. But, in most cases, you don’t want to run your FTs against your "real" server. In order to "save our work", and reassure ourselves that the production server will work just as well as the staging server, we need to make our deployment process repeatable.

Automation is the answer, and it’s the topic of the next chapter.

Production-Readiness for Server Deployments

A few things to think about when trying to build a production-ready server environment:

Don’t use the Django dev server in production

Something like Gunicorn or uWSGI is a better tool for running Django; it will let you run multiple workers, for example.

Don’t use Django to serve your static files

There’s no point in using a Python process to do the simple job of serving static files. Nginx can do it, but so can other web servers like Apache or uWSGI.

Check your settings.py for dev-only settings

DEBUG=True, ALLOWED_HOSTS and SECRET_KEY are the ones we came across, but you will probably have others (we’ll see more when we start to send emails from the server).


A serious discussion of server security is beyond the scope of this book, and I’d warn against running your own servers without learning a good bit more about it. (One reason people choose to use a PaaS to host their code is that it means slightly fewer security issues to worry about.) If you’d like a place to start, here’s as good a place as any: My first 5 minutes on a server. I can definitely recommend the eye-opening experience of installing fail2ban and watching its logfiles to see just how quickly it picks up on random drive-by attempts to brute-force your SSH login. The internet is a wild place!

General Server Debugging Tips

The most important lesson to remember from this chapter is to work incrementally, make one change at a time, and run your tests frequently.

When things (inevitably) go wrong, resist the temptation to flail about and make other unrelated changes in the hope that things will start working again; instead, stop, go backward if necessary to get to a working state, and figure out what went wrong before moving forward again.

It’s just as easy to fall into the Refactoring-Cat trap on the server!

1. If you’re using Fedora/CentOS, you may run into an issue with private tmp directories (more info here).
2. Another common way of handling this is to have different versions of for dev and prod. That can work fine too, but it can get confusing to manage. Environment variables also have the advantage of working for non-Django stuff too…​