
Debugging and Testing Server Issues

Popping a few layers off the stack of things we’re working on: we have nice wait-for helpers; what were we using them for? Oh yes, waiting to be logged in. And why was that? Ah yes, we had just built a way of pre-authenticating a user. Let’s see how that works against Docker and our staging server.

The Proof Is in the Pudding: Using Docker to Catch Final Bugs

Remember the deployment checklist from [chapter_18_second_deploy]? Let’s see if it can’t come in handy today!

First, we rebuild and start our Docker container locally, on port 8888:

$ docker build -t superlists . && docker run \
    -p 8888:8888 \
    --mount type=bind,source="$PWD/container.db.sqlite3",target=/home/nonroot/db.sqlite3 \
    -e DJANGO_SECRET_KEY=sekrit \
    -e DJANGO_ALLOWED_HOST=localhost \
    -e DJANGO_DB_PATH=/home/nonroot/db.sqlite3 \
    -it superlists
[...]
 => => naming to docker.io/library/superlists [...]
[2025-01-27 22:37:02 +0000] [7] [INFO] Starting gunicorn 22.0.0
[2025-01-27 22:37:02 +0000] [7] [INFO] Listening at: http://0.0.0.0:8888 (7)
[2025-01-27 22:37:02 +0000] [7] [INFO] Using worker: sync
[2025-01-27 22:37:02 +0000] [8] [INFO] Booting worker with pid: 8
If you see an error saying bind source path does not exist, you’ve lost your container database somehow. Create a new one with touch container.db.sqlite3.

Now let’s make sure our container database is fully up to date, by running migrate inside the container:

$ docker exec $(docker ps --filter=ancestor=superlists -q) python manage.py migrate
Operations to perform:
  Apply all migrations: accounts, auth, contenttypes, lists, sessions
Running migrations:
[...]
That little $(docker ps --filter=ancestor=superlists -q) is a neat way to avoid manually looking up the container ID. An alternative would be to just set the container name explicitly in our docker run commands, using --name.
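
As a quick sketch of that alternative (we’ll stick with the lookup in this chapter’s listings), you’d add --name superlists to the docker run command above, and then docker exec can use the name directly, from another terminal:

$ docker exec superlists python manage.py migrate
One wrinkle: docker run --name will refuse to start if a container with that name already exists, so you’d need a docker rm superlists between runs.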

And now, let’s do an FT run:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests
[...]
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
element: #id_logout; [...]
[...]
AssertionError: 'Check your email' not found in 'Server Error (500)'
[...]
FAILED (failures=1, errors=1)

We can’t log in—​either with the real email system or with our pre-authenticated session. Looks like our nice new authentication system is crashing when we run it in Docker.

Let’s practice a bit of production debugging!

Inspecting the Docker Container Logs

When Django fails with a 500 or "unhandled exception" and DEBUG is off, it doesn’t print the tracebacks to your web browser. But it will send them to your logs instead.

Check Our Django LOGGING Settings

It’s worth double-checking at this point that your settings.py still contains the LOGGING settings that will actually send stuff to the console:

src/superlists/settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "root": {"handlers": ["console"], "level": "INFO"},
    },
}

Rebuild and restart the Docker container if necessary, and then either rerun the FT, or just try to log in manually.

If you switch to the terminal that’s running your Docker image, you should see the traceback printed out in there:

Internal Server Error: /accounts/send_login_email
Traceback (most recent call last):
[...]
  File "/src/accounts/views.py", line 16, in send_login_email
    send_mail(
    ~~~~~~~~~^
        "Your login link for Superlists",
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        [email],
        ^^^^^^^^
    )
    ^
[...]
    self.connection.sendmail(
    ~~~~~~~~~~~~~~~~~~~~~~~~^
        from_email, recipients, message.as_bytes(linesep="\r\n")
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.14/smtplib.py", line 876, in sendmail
    raise SMTPSenderRefused(code, resp, from_addr)
smtplib.SMTPSenderRefused: (530, b'5.7.0 Authentication Required. [...]

Sure enough, that looks like a pretty good clue as to what’s going on: we’re getting a "sender refused" error when trying to send our email. Good to know our local Docker setup can reproduce the error on the server!

Another Environment Variable in Docker

So, Gmail is refusing to let us send emails, is it? Now why might that be? "Authentication required", you say? Oh, whoops; we haven’t told the server what our password is!

As you might remember from earlier chapters, our settings.py expects to get the email server password from an environment variable named EMAIL_PASSWORD:

src/superlists/settings.py
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_PASSWORD")

Let’s add this new environment variable to our local Docker container run command. First, set your email password in your terminal if you need to:

$ echo $EMAIL_PASSWORD
# if that's empty, let's set it:
$ export EMAIL_PASSWORD="yoursekritpasswordhere"

Now let’s pass that environment variable through to our Docker container using one more -e flag—this one fishing the env var out of the shell we’re in:

$ docker build -t superlists . && docker run \
    -p 8888:8888 \
    --mount type=bind,source="$PWD/container.db.sqlite3",target=/home/nonroot/db.sqlite3 \
    -e DJANGO_SECRET_KEY=sekrit \
    -e DJANGO_ALLOWED_HOST=localhost \
    -e DJANGO_DB_PATH=/home/nonroot/db.sqlite3 \
    -e EMAIL_PASSWORD \
    -it superlists
If you use -e without the =something argument, it sets the env var inside Docker to the same value set in the current shell. It’s like saying -e EMAIL_PASSWORD=$EMAIL_PASSWORD.

And now we can rerun our FT again. We’ll narrow it down to just the test_login test, because that’s the main one that has a problem:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests.test_login
[...]
ERROR: test_login_using_magic_link
(functional_tests.test_login.LoginTest.test_login_using_magic_link)
 ---------------------------------------------------------------------
Traceback (most recent call last):
  File "...goat-book/src/functional_tests/test_login.py", line 32, in
test_login_using_magic_link
    email = mail.outbox.pop()
IndexError: pop from empty list

Well, not a pass, but the tests do get a little further. It looks like our server can now send emails. (If you check the Docker logs, you’ll see there are no more errors.) But our FT is saying it can’t see any emails appearing in mail.outbox.

mail.outbox Won’t Work Outside Django’s Test Environment

The reason is that mail.outbox is a local, in-memory variable in Django, so that’s only going to work when our tests and our server are running in the same process—like they do with unit tests or with LiveServerTestCase FTs.

When we run against another process, be it Docker or an actual server, we can’t access the same mail.outbox variable. If we want to actually inspect the emails that the server sends, we need another technique in our tests against Docker (or, later, against the staging server).
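
To see why mail.outbox works in-process at all, here’s a minimal, illustrative sketch (not part of our actual suite): under the test runner, Django swaps in its "locmem" email backend, and sending mail just appends to a Python list in the test process’s memory.

from django.core import mail
from django.test import TestCase


class OutboxDemoTest(TestCase):  # illustrative only
    def test_outbox_lives_in_this_process(self):
        # the test runner uses the locmem backend, so send_mail()
        # appends the message to an in-memory list...
        mail.send_mail("A subject", "A body", "from@example.com", ["to@example.com"])
        self.assertEqual(len(mail.outbox), 1)
        # ...which a Gunicorn worker inside Docker can never reach,
        # because it's a different process with its own memory.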

Deciding How to Test "Real" Email Sending

This is a point at which we have to explore some trade-offs. There are a few different ways we could test email sending:

  1. We could build a "real" end-to-end test, and have our tests log in to an email server using the POP3 protocol to retrieve the email from there. That’s what I did in the first and second editions of this book.

  2. We can use a service like Mailinator or Mailsac, which gives us an email account to send to, along with APIs for checking what mail has been delivered.

  3. We can use an alternative, fake email backend whereby Django will save the emails to a file on disk, for example, and we can inspect them there (there’s a brief sketch of this just after the list).

  4. We could give up on testing email on the server. If we have a minimal smoke test confirming that the server can send emails, then we don’t need to test that they are actually delivered.
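
To make option 3 a little more concrete: Django ships with a file-based email backend, so the fake-backend approach only needs a couple of settings. Here’s a sketch (we won’t be going this route, and the path is illustrative):

# Django's built-in file-based backend: each send_mail() call
# writes the message out as a file under EMAIL_FILE_PATH.
EMAIL_BACKEND = "django.core.mail.backends.filebased.EmailBackend"
EMAIL_FILE_PATH = "/tmp/superlists-test-emails"  # illustrative path
The fiddly part is then getting your FT to fish those files back out of the Docker container, or off the server.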

Table 1, "Testing strategy trade-offs", lays out some of the pros and cons of each approach.

Table 1. Testing strategy trade-offs

  • End-to-end with POP3
    Pros: Maximally realistic, tests the whole system.
    Cons: Slow, fiddly, unreliable.

  • Email testing service (e.g., Mailinator or Mailsac)
    Pros: As realistic as real POP3, with better APIs for testing.
    Cons: Slow, possibly expensive (and I don’t want to endorse any particular commercial provider).

  • File-based fake email backend
    Pros: Faster, more reliable, no network calls, tests end-to-end (albeit with fake components).
    Cons: Still fiddly, requires managing database and filesystem side effects.

  • Giving up on testing email on the server/Docker
    Pros: Fast, simple.
    Cons: Less confidence that things work "for real".

We’re exploring a common problem in testing integration with external systems: how far should we go? How realistic should we make our tests?

In this case, I’m going to suggest we go for the last option, which is not to test email sending on the server or in Docker. Email itself is a well-understood protocol (reader, it’s been around since before I was born, and that’s a while ago now), and Django has supported sending email for more than a decade. So, I think we can afford to say, in this case, that the costs of building testing tools for email outweigh the benefits.

I’m going to suggest we stick to using mail.outbox when we’re running local tests, and we configure our FTs to just check that Docker (or, later, the staging server) seems to be able to send email (in the sense of "not crashing"). We can skip the bit where we check the email contents in our FT. Remember, we also have unit tests for the email content!

I explore some of the difficulties involved in getting these kinds of tests to work in Online Appendix: Functional Tests for External Dependencies, so go check that out if this feels like a bit of a cop-out!

Here’s where we can put an early return in the FT:

src/functional_tests/test_login.py (ch23l009)
    # A message appears telling her an email has been sent
    self.wait_for(
        lambda: self.assertIn(
            "Check your email",
            self.browser.find_element(By.CSS_SELECTOR, "body").text,
        )
    )

    if self.test_server:
        # Testing real email sending from the server is not worth it.
        return

    # She checks her email and finds a message
    email = mail.outbox.pop()

This test will still fail if you don’t set EMAIL_PASSWORD to a valid value in Docker or on the server, meaning it would still have warned us of the bug we started the chapter with—so that’s good enough for now.

Here’s how we populate the FunctionalTest.test_server attribute:

src/functional_tests/base.py (ch23l010)
class FunctionalTest(StaticLiveServerTestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()
        self.test_server = os.environ.get("TEST_SERVER")  (1)
        if self.test_server:
            self.live_server_url = "http://" + self.test_server
1 We upgrade test_server to be an attribute on the test object, so we can access it in various places in our FTs (we’ll see several examples later). Sad to see our walrus go, though!

And you can confirm that the FT fails if you don’t set EMAIL_PASSWORD in Docker, and passes if you do:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests.test_login
[...]

OK

Now let’s see if we can get our FTs to pass against the server. First, we’ll need to figure out how to set that env var on the server.

An Alternative Method for Setting Secret Environment Variables on the Server

In [chapter_12_ansible], we dealt with setting the SECRET_KEY by generating a random value, and then saving it to a file on the server. We could use a similar technique here. But, just to give you an alternative, I’ll show how to pass the environment variable directly up to the container, without storing it in a file:

infra/deploy-playbook.yaml (ch23l012)
        env:
          DJANGO_DEBUG_FALSE: "1"
          DJANGO_SECRET_KEY: "{{ secret_key.content | b64decode }}"
          DJANGO_ALLOWED_HOST: "{{ inventory_hostname }}"
          DJANGO_DB_PATH: "/home/nonroot/db.sqlite3"
          EMAIL_PASSWORD: "{{ lookup('env', 'EMAIL_PASSWORD') }}"  (1)
1 lookup() with env as its argument is how you look up local environment variables—i.e., the ones set on the computer you’re running Ansible from.

This means you’ll need the EMAIL_PASSWORD environment variable to be set on your local machine every time you want to run Ansible.
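
If you’d rather not keep it exported in your shell, you can also set it inline for a single run (a sketch, using the same deploy command we’ll run later in the chapter):

$ EMAIL_PASSWORD="yoursekritpasswordhere" ansible-playbook \
    --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv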

Let’s consider some pros and cons of the two approaches:

  • Saving the secret to a file on the server means you don’t need to "remember" or store the secret anywhere on your own machine.

  • In contrast, always passing it up from the local environment does mean you can change the value of the secret at any time.

  • In terms of security, they are fairly equivalent—in either case, the environment variable is visible via docker inspect.

If we rerun our full FT suite against the server, we should see that the login test passes, and we’re down to just one failure, in test_logged_in_users_lists_are_saved_as_my_lists():

$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
[...]
ERROR: test_logged_in_users_lists_are_saved_as_my_lists
(functional_tests.test_my_lists.MyListsTest.test_logged_in_users_lists_are_saved_[...]
----------------------------------------------------------------------
Traceback (most recent call last):
  File "...goat-book/src/functional_tests/test_my_lists.py", line 36, in
test_logged_in_users_lists_are_saved_as_my_lists
    self.wait_to_be_logged_in(email)
    ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
[...]
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
element: #id_logout; [...]
[...]
 ---------------------------------------------------------------------

Ran 8 tests in 30.087s

FAILED (errors=1)

Let’s look into that next.

Debugging with SQL

Let’s switch back to testing locally against our Docker container:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests.test_my_lists
[...]
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
element: #id_logout; [...]
FAILED (errors=1)

It looks like the attempt to create pre-authenticated sessions doesn’t work, so we’re not being logged in. Let’s do a bit of debugging with SQL.

First, try logging in to your local "runserver" instance (where things definitely work) and take a look in the normal local database, src/db.sqlite3:

$ sqlite3 src/db.sqlite3
SQLite version 3.43.2 2023-10-10 13:08:14
Enter ".help" for usage hints.

sqlite> select * from accounts_token;  (1)
1|[email protected]|11d3e26d-32a3-4434-af71-5e0f62fefc52
2|[email protected]|25a570c8-736f-42e4-931b-ed5c410b5b51

sqlite> select * from django_session;  (2)
tv2m5byccfs05gfpkc1l8k4pep097y3c|.eJxVjEsKg0AMQO-StcwBurI9gTcYYgwzo[...]
1 We can do a SELECT * in our tokens table to see some of the tokens we’ve been creating for our users.
2 And we can take a look in the django_session table. You should find the first column matches the session ID you’ll see in your DevTools.

Now let’s take a look in the container’s database, container.db.sqlite3:

$ sqlite3 container.db.sqlite3
SQLite version 3.43.2 2023-10-10 13:08:14
Enter ".help" for usage hints.

sqlite> select * from accounts_token;  (1)

sqlite> select * from django_session;  (2)
1 The tokens table is empty. (If you do see [email protected] in here, it’s from a previous test run. Delete and re-create the database if you want to be sure.)
2 And the sessions table is definitely empty.

Now, let’s try manually. If you visit localhost:8888 and log in—getting the token from your email—you’ll see it works. You can also run functional_tests.test_login and you’ll see it pass.

If we look in the database again, we’ll see some more data:

$ sqlite3 container.db.sqlite3
SQLite version 3.43.2 2023-10-10 13:08:14
Enter ".help" for usage hints.

sqlite> select * from accounts_token;
3|[email protected]|115812a3-7d37-485c-9c15-337b12293f69
4|[email protected]|a901bee9-88aa-4965-9277-a13723a6bfe1

sqlite> select * from django_session;
09df51nmvpi137mpv5bwjoghh2a4y5lh|.eJxVjEsKg0AMQO-[...]

So, there’s nothing fundamentally wrong with the Docker environment. It seems like it’s specifically our test utility function create_pre_authenticated_session() that isn’t working.

At this point, a little niggle in your head might be growing louder, reminding us of a problem we anticipated in the last chapter: LiveServerTestCase only lets us talk to the in-memory database. That’s where our pre-authenticated sessions are ending up!

Managing Fixtures in Real Databases

We need a way to make changes to the database inside Docker or on the server. Essentially, we want to run some code outside the context of the tests (and the test database) and in the context of the server and its database.

A Django Management Command to Create Sessions

When trying to build a standalone script that works with Django (i.e., can talk to the database and so on), there are some fiddly issues you need to get right, like setting the DJANGO_SETTINGS_MODULE environment variable and setting sys.path correctly.

Instead of messing about with all that, Django lets you create your own "management commands" (commands you can run with python manage.py), which will do all that path-mangling for you. They live in a folder called management/commands inside your apps:

$ mkdir -p src/functional_tests/management/commands
$ touch src/functional_tests/management/__init__.py
$ touch src/functional_tests/management/commands/__init__.py

The boilerplate in a management command is a class that inherits from django.core.management.BaseCommand, and that defines a method called handle:

src/functional_tests/management/commands/create_session.py (ch23l014)
from django.conf import settings
from django.contrib.auth import BACKEND_SESSION_KEY, SESSION_KEY, get_user_model
from django.contrib.sessions.backends.db import SessionStore
from django.core.management.base import BaseCommand

User = get_user_model()


class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument("email")

    def handle(self, *args, **options):
        session_key = create_pre_authenticated_session(options["email"])
        self.stdout.write(session_key)


def create_pre_authenticated_session(email):
    user = User.objects.create(email=email)
    session = SessionStore()
    session[SESSION_KEY] = user.pk
    session[BACKEND_SESSION_KEY] = settings.AUTHENTICATION_BACKENDS[0]
    session.save()
    return session.session_key

We’ve taken the code for create_pre_authenticated_session() from test_my_lists.py. handle picks up the email address from the arguments we defined in add_arguments(), calls create_pre_authenticated_session(), and prints the resulting session key (the one we’ll want to add to our browser cookies) to stdout.

Try it out:

$ python src/manage.py create_session [email protected]
Unknown command: 'create_session'. Did you mean clearsessions?

One more step: we need to add functional_tests to our settings.py so that it’s recognised as a real app that might have management commands as well as tests:

src/superlists/settings.py (ch23l015)
+++ b/superlists/settings.py
@@ -42,6 +42,7 @@ INSTALLED_APPS = [
     "accounts",
     "lists",
+    "functional_tests",
 ]
Beware of the security implications here. We’re now adding some remotely executable code for bypassing authentication to our default configuration. Yes, someone exploiting this would need to have already gained access to the server, so it was game over anyway, but nonetheless, this is a sensitive area. If you were doing something like this in a real application, you might consider adding an if environment != prod, or similar.
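
Here’s roughly what such a guard could look like, as a sketch only (the ENVIRONMENT variable is hypothetical; we haven’t set one up anywhere in this book):

import os

INSTALLED_APPS = [
    # ... the other apps ...
    "accounts",
    "lists",
]

# Hypothetical guard: only register the FT app (and its session-creating
# management command) when we're not running in production.
if os.environ.get("ENVIRONMENT") != "prod":
    INSTALLED_APPS.append("functional_tests")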

Now it works:

$ python src/manage.py create_session [email protected]
qnslckvp2aga7tm6xuivyb0ob1akzzwl
If you see an error saying the auth_user table is missing, you may need to run manage.py migrate. In case that doesn’t work, delete the db.sqlite3 file and run migrate again to get a clean slate.

Getting the FT to Run the Management Command on the Server

Next, we need to adjust test_my_lists so that it runs the local function when we’re using the local in-memory test server from LiveServerTestCase. And, if we’re running against the Docker container or staging server, it should run the management command instead.

src/functional_tests/test_my_lists.py (ch23l016)
from django.conf import settings

from .base import FunctionalTest
from .container_commands import create_session_on_server  (1)
from .management.commands.create_session import create_pre_authenticated_session


class MyListsTest(FunctionalTest):
    def create_pre_authenticated_session(self, email):
        if self.test_server:  (2)
            session_key = create_session_on_server(self.test_server, email)
        else:
            session_key = create_pre_authenticated_session(email)

        ## to set a cookie we need to first visit the domain.
        ## 404 pages load the quickest!
        self.browser.get(self.live_server_url + "/404_no_such_url/")
        self.browser.add_cookie(
            dict(
                name=settings.SESSION_COOKIE_NAME,
                value=session_key,
                path="/",
            )
        )

    [...]
1 Programming by wishful thinking, let’s imagine we’ll have a module called container_commands with a function called create_session_on_server() in it.
2 Here’s the if where we decide which of our two session-creation functions to execute.

Running Commands Using Docker Exec and (Optionally) SSH

You may remember docker exec from [chapter_09_docker]; it lets us run commands inside a running Docker container. That’s fine when we’re running against the local Docker container, but when we’re running against the server, we need to SSH in first.

There’s a bit of plumbing here, but I’ve tried to break things down into small chunks:

src/functional_tests/container_commands.py (ch23l018)
import subprocess

USER = "elspeth"


def create_session_on_server(host, email):
    return _exec_in_container(
        host, ["/venv/bin/python", "/src/manage.py", "create_session", email]  (1)
    )


def _exec_in_container(host, commands):
    if "localhost" in host:  (2)
        return _exec_in_container_locally(commands)
    else:
        return _exec_in_container_on_server(host, commands)


def _exec_in_container_locally(commands):
    print(f"Running {commands} on inside local docker container")
    return _run_commands(["docker", "exec", _get_container_id()] + commands)  (3)


def _exec_in_container_on_server(host, commands):
    print(f"Running {commands!r} on {host} inside docker container")
    return _run_commands(
        ["ssh", f"{USER}@{host}", "docker", "exec", "superlists"] + commands  (4)
    )


def _get_container_id():
    return subprocess.check_output(  (5)
        ["docker", "ps", "-q", "--filter", "ancestor=superlists"]  (3)
    ).strip()


def _run_commands(commands):
    process = subprocess.run(  (5)
        commands,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        check=False,
    )
    result = process.stdout.decode()
    if process.returncode != 0:
        raise Exception(result)
    print(f"Result: {result!r}")
    return result.strip()
1 We invoke our management command using the virtualenv Python, passing the create_session command name and the email address we want to create a session for.
2 We dispatch to two slightly different ways of running a command inside a container, with the assumption that a host on "localhost" is a local Docker container, and the others are on the staging server.
3 To run a command on the local Docker container, we’re going to use docker exec, and we have a little extra hop first to get the correct container ID.
4 To run a command on the Docker container that’s on the staging server, we still use docker exec, but we do it inside an SSH session. In this case we don’t need the container ID, because the container is always named "superlists".
5 Finally, we use Python’s subprocess module to actually run a command. You can see a couple of different ways of running it here, which differ based on how we’re handling errors and output; the details don’t matter too much.

Recap: Creating Sessions Locally Versus Staging

Does that all make sense? Perhaps a little ASCII-art diagram will help:

Locally:
+-----------------------------------+       +-------------------------------------+
| MyListsTest                       |       | .management.commands.create_session |
| .create_pre_authenticated_session |  -->  |  .create_pre_authenticated_session  |
|            (locally)              |       |             (locally)               |
+-----------------------------------+       +-------------------------------------+
Against Docker locally:
+-----------------------------------+             +-------------------------------------+
| MyListsTest                       |             | .management.commands.create_session |
| .create_pre_authenticated_session |             |  .create_pre_authenticated_session  |
|            (locally)              |             |            (in Docker)              |
+-----------------------------------+             +-------------------------------------+
            |                                                        ^
            v                                                        |
+----------------------------+                                       |
| container_commands         |     +-------------+     +----------------------------+
| .create_session_on_server  | --> | docker exec | --> | ./manage.py create_session |
|        (locally)           |     +-------------+     |       (in Docker)          |
+----------------------------+                         +----------------------------+
Against Docker on the server:
+-----------------------------------+             +-------------------------------------+
| MyListsTest                       |             | .management.commands.create_session |
| .create_pre_authenticated_session |             |  .create_pre_authenticated_session  |
|            (locally)              |             |            (on server)              |
+-----------------------------------+             +-------------------------------------+
            |                                                           ^
            v                                                           |
+----------------------------+                                          |
| container_commands         |    +-----+    +--------+    +----------------------------+
| .create_session_on_server  | -> | ssh | -> | docker | -> | ./manage.py create_session |
|        (locally)           |    |     |    |  exec  |    |        (on server)         |
+----------------------------+    +-----+    +--------+    +----------------------------+

We do love a bit of ASCII art now and again!

An Alternative for Managing Test Database Content: Talking Directly to the Database

An alternative way of managing database content inside Docker, or on a server, would be to talk directly to the database.

Because we’re using SQLite, that involves writing to the file directly. This can be fiddly to get right, because when we’re running inside Django’s test runner, Django takes over the test database creation, so you end up having to write raw SQL and manage your connections to the database directly.

There are also some tricky interactions with the filesystem mounts and Docker, as well as the need to have the SECRET_KEY env var set to the same value as on the server.

If we were using a "classic" database server like PostgreSQL or MySQL, we’d be able to talk directly to the database over its port. That’s an approach I’ve used successfully in the past, but it’s still quite tricky, and it usually requires writing your own SQL.
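
Purely for illustration (none of this applies to our SQLite setup, and the connection details are invented), a test helper talking straight to a hypothetical PostgreSQL staging database might look something like this:

import psycopg  # hypothetical dependency; not part of this project


def delete_test_user(email):
    # connection details are illustrative, and would be yet more secrets to manage
    with psycopg.connect(
        host="staging.ottg.co.uk", dbname="superlists", user="tests", password="..."
    ) as conn:
        with conn.cursor() as cur:
            # raw SQL against the real schema: the "writing your own SQL" part
            cur.execute("DELETE FROM accounts_user WHERE email = %s", (email,))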

Testing the Management Command

In any case, let’s see if this whole rickety pipeline works. First, locally, to check that we didn’t break anything:

$ python src/manage.py test functional_tests.test_my_lists
[...]
OK

Next, against Docker—rebuild first:

$ docker build -t superlists . && docker run \
    -p 8888:8888 \
    --mount type=bind,source="$PWD/container.db.sqlite3",target=/home/nonroot/db.sqlite3 \
    -e DJANGO_SECRET_KEY=sekrit \
    -e DJANGO_ALLOWED_HOST=localhost \
    -e DJANGO_DB_PATH=/home/nonroot/db.sqlite3 \
    -e EMAIL_PASSWORD \
    -it superlists

And then we run the FT (that uses our fixture) against Docker:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests.test_my_lists

[...]
OK

Next, we run it against the server. First, we re-deploy to make sure our code on the server is up to date:

$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv

And now we run the test:

$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test \
 functional_tests.test_my_lists
Found 1 test(s).
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
Running '/venv/bin/python /src/manage.py create_session [email protected]' on
staging.ottg.co.uk inside docker container
Result: '7n032ogf179t2e7z3olv9ct7b3d4dmas\n'
.
 ---------------------------------------------------------------------
Ran 1 test in 4.515s

OK
Destroying test database for alias 'default'...

Looking good! We can rerun all the tests to make sure…​

$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
[...]
[[email protected]] run:
~/sites/staging.ottg.co.uk/.venv/bin/python
[...]
Ran 8 tests in 89.494s

OK

Hooray!

Test Database Cleanup

One more thing to be aware of: now that we’re running against a real database, we don’t get cleanup for free any more. If you run the tests twice against the same real database (the local Docker container, or the staging server), you’ll run into this error:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests.test_my_lists
[...]
django.db.utils.IntegrityError: UNIQUE constraint failed: accounts_user.email

It’s because the user we created the first time we ran the tests is still in the database. When we’re running against Django’s test database, Django cleans up for us. Let’s try and emulate that when we’re running against a real database:

src/functional_tests/container_commands.py (ch23l019)
def reset_database(host):
    return _exec_in_container(
        host, ["/venv/bin/python", "/src/manage.py", "flush", "--noinput"]
    )

And let’s add the call to reset_database() in our base test setUp() method:

src/functional_tests/base.py (ch23l020)
from .container_commands import reset_database
[...]

class FunctionalTest(StaticLiveServerTestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()
        self.test_server = os.environ.get("TEST_SERVER")
        if self.test_server:
            self.live_server_url = "http://" + self.test_server
            reset_database(self.test_server)

If you try to run your tests again, you’ll find they pass happily:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests.test_my_lists
[...]

OK

Probably a good time for a commit! :)

Warning: Be Careful Not to Run Test Code Against the Production Server!

We’re in dangerous territory now that we have code that can directly affect a database on the server. You want to be very, very careful that you don’t accidentally blow away your production database by running FTs against the wrong host.

You might consider putting some safeguards in place at this point. You almost definitely want to put staging and production on different servers, for example, and make it so that they use different key pairs for authentication, with different passphrases.
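
Another cheap safeguard is to make the dangerous helpers refuse to run against anything that looks like production. Here’s a hypothetical tweak to reset_database() in container_commands.py (the hostname check is illustrative, and deliberately paranoid):

PRODUCTION_HOSTS = {"www.my-production-site.example.com"}  # fill in your real production hostname(s)


def reset_database(host):
    # refuse to flush anything that looks like a production host
    if host in PRODUCTION_HOSTS or "prod" in host:
        raise RuntimeError(f"Refusing to flush the database on {host}: it looks like production!")
    return _exec_in_container(
        host, ["/venv/bin/python", "/src/manage.py", "flush", "--noinput"]
    )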

I also mentioned not including the FT management commands in INSTALLED_APPS for production environments.

This is similarly dangerous territory to running tests against clones of production data. I could tell you a little story about accidentally sending thousands of duplicate invoices to clients, for example. LFMF! And tread carefully.

Wrap-Up

Actually getting your new code up and running on a server always tends to flush out some last-minute bugs and unexpected issues. We had to do a bit of work to get through them, but we’ve ended up with several useful things as a result.

We now have a lovely generic wait decorator, which will be a nice Pythonic helper for our FTs from now on. We’ve got some more robust logging configuration. We have test fixtures that work both locally and on the server, and we’ve come out with a pragmatic approach for testing email integration.

But before we can deploy our actual production site, we’d better actually give the users what they wanted—​the next chapter describes how to give them the ability to save their lists on a "My lists" page.

Lessons Learned Catching Bugs in Staging
It’s nice to be able to repro things locally.

The effort we put into adapting our app to use Docker is paying off. We discovered an issue in staging, and were able to reproduce it locally. That gives us the ability to experiment and get feedback much quicker than trying to do experiments on the server itself.

Fixtures also have to work remotely.

LiveServerTestCase makes it easy to interact with the test database using the Django ORM for tests running locally. Interacting with the database inside Docker is not so straightforward. One solution is docker exec and Django management commands, as I’ve shown, but you should explore what works for you—​connecting directly to the database over SSH tunnels, for example.

Be very careful when resetting data on your servers.

A command that can remotely wipe the entire database on one of your servers is a dangerous weapon, and you want to be really, really sure it’s never accidentally going to hit your production data.

Logging is critical to debugging issues on the server.

At the very least, you’ll want to be able to see any error messages that are being generated by the server. For thornier bugs, you’ll also want to be able to do the occasional "debug print", and see it end up in a file somewhere.
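
As a final illustrative sketch (not code from our listings), the LOGGING config we checked earlier means a module-level logger is all you need for those debug prints; with the root logger at INFO, the messages show up in the container’s console output:

import logging

logger = logging.getLogger(__name__)  # handled by the root logger configured in settings.py


def send_login_email(request):
    email = request.POST["email"]
    logger.info("Sending login email to %s", email)  # appears in the Gunicorn/Docker console
    # ... rest of the view as before ...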
