• Python, Justfiles, Docker

    🐳 Using Just and Compose for interactive Django and Python debugging sessions

    When I wrote REST APIs, I spent weeks and months writing tests and debugging without looking at the front end. It’s all JSON, after all.

    For most of my projects, I will open two or three tabs. I’m running Docker Compose in tab one to see the logs as I work. I’ll use the following casey/just recipe to save some keystrokes and to standardize what running my project looks like:

    # tab 1
    $ just up 
    

    In my second tab, I’ll open a shell that is inside my main web or app container so that I can interact with the environment, run migrations, and run tests.

    We can nitpick the meaning of “console” here, but I tend to have another just recipe for “shell” which will open a Django shell using shell_plus or something more interactive:

    # tab 2
    $ just console
    

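    The full justfile below only includes my bash-based console recipe. A Django-flavored variant might look like this sketch (it assumes django-extensions is installed in the web container so shell_plus is available):

    # justfile (hypothetical "shell" recipe)
    @shell:
        docker compose run --rm web python -m manage shell_plus
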
    In my third tab, I’ll run a shell session for creating and switching git branches, stashing git changes, and running my linter, which I prefer to run by hand.

    # tab 3
    $ echo "I'm boring"
    

    Over the last year or two, the web has returned to doing more frontend work with Django and less with REST. Using ipdb inside my Django views to figure out what’s going on has been really helpful. Getting ipdb to “just work” takes a few extra steps in my normal workflow.

    # tab 1 (probably)
    
    # start everything
    $ just start
    
    # stop our web container
    $ just stop web
    
    # start our web container with "--service-ports"
    $ just start-web-with-debug
    

    The only real magic here is Docker Compose’s --service-ports flag, which publishes the service’s ports so we can connect to the ipdb session when we open one in our view code.
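
    The breakpoint itself is a one-liner in whichever view we are inspecting. Here is a minimal sketch (the post_list view and Post model are hypothetical):

    # views.py
    from django.shortcuts import render

    from .models import Post  # hypothetical model


    def post_list(request):
        posts = Post.objects.filter(published=True)

        # Execution pauses here; the ipdb prompt appears in the
        # terminal running `just start-web-with-debug`.
        import ipdb
        ipdb.set_trace()

        return render(request, "posts/list.html", {"posts": posts})

    Because docker compose run keeps stdin and a TTY attached, the prompt shows up right in that tab.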

    My main justfile for all of these recipes/workflows looks very similar to this:

    # justfile
    set dotenv-load := false
    
    @build *ARGS:
        docker compose build {{ ARGS }}
    
    # opens a console
    @console:
        docker compose run --rm --no-deps utility /bin/bash
    
    @down:
        docker compose down
    
    @start *ARGS:
        just up --detach {{ ARGS }}
    
    @start-web-with-debug:
        docker compose run --service-ports --rm web python -m manage runserver 0.0.0.0:8000
    
    @stop *ARGS:
        docker compose down {{ ARGS }}
    
    @up *ARGS:
        docker compose up {{ ARGS }}
    

    If you work on multiple projects, I encourage you to find patterns you can scale across them. Using Just, Make, shell scripts, or even Python lightens the cognitive load when switching between them.

    Sunday June 30, 2024
  • Docker, Postgres

    🐘 Docker Postgres Autoupgrades

    Upgrading Postgres in Docker environments can be daunting, but keeping your database up-to-date is essential for performance, security, and access to new features. While there are numerous guides on manually upgrading Postgres, the process can often be complex and error-prone. Fortunately, the pgautoupgrade Docker image simplifies this process, automating the upgrade dance for us.

    The Challenge of Upgrading Postgres

    For many developers, upgrading Postgres involves several manual steps: backing up data, migrating schemas, ensuring compatibility, and testing thoroughly. Mistakes during these steps can lead to downtime or data loss, making the upgrade process a nerve-wracking experience.

    The pgautoupgrade Docker image is designed to handle the upgrade process seamlessly. Using it in place of the base Postgres image allows you to automate the upgrade steps, reducing the risk of errors and saving valuable time.

    How to Use pgautoupgrade

    While you can use the pgautoupgrade image directly with Docker, I prefer to use it as my default development image with Docker Compose.
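
    If you do want to run it directly, the container behaves like the stock postgres image since it builds on the official images, so the usual environment variables and volume layout should apply. A sketch (the container name, password, and volume are placeholders):

    $ docker run --name db \
        -e POSTGRES_PASSWORD=password \
        -v postgres_data:/var/lib/postgresql/data \
        pgautoupgrade/pgautoupgrade:latest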

    I set up my compose.yml with pgautoupgrade using a config similar to this:

    # compose.yml
    services:
      db:
        image: "pgautoupgrade/pgautoupgrade:latest"
        volumes:
          - postgres_data:/var/lib/postgresql/data/
    # ...
    

    Instead of tracking the latest version of Postgres, pgautoupgrade can be pinned to a specific major version. This is nice if you want to match whichever version of Postgres you run in production, or if you have extensions that might not be ready to move.

    # compose.yml
    services:
      db:
        image: "pgautoupgrade/pgautoupgrade:16-alpine"
        volumes:
          - postgres_data:/var/lib/postgresql/data/
    # ...
    

    Overall, I’m happy with pgautoupgrade. Please note that using pgautoupgrade does not remove the need to make regular data backups.

    See my last article, 🐘 A Just recipe to backup and restore a Postgres database, to learn some tips on how to automate using pg_dump and pg_restore.

    Saturday June 29, 2024
  • Justfiles, Docker, Postgres

    🐘 A Just recipe to backup and restore a Postgres database

    I have used this casey/just recipe to help back up and restore my Postgres databases from my Docker containers.

    I work with a few machines, and it’s an excellent way to create a database dump from one machine and then restore it from another machine. I sometimes use it to test data migrations because restoring a database dump takes a few seconds.

    I have been migrating from Docker to OrbStack, and the only real pain point is moving data from one volume to another. I sometimes need to switch between the two, so I have recipes set up to back up and restore my database from one context to another.

    # justfile
    
    DATABASE_URL := env_var_or_default('DATABASE_URL', 'postgres://postgres@db/postgres')
    
    # dump database to file
    @pg_dump file='db.dump':
        docker compose run \
            --no-deps \
            --rm \
            db \
            pg_dump \
                --dbname "{{ DATABASE_URL }}" \
                --file /code/{{ file }} \
                --format=c \
                --verbose
    
    # restore database dump from file
    @pg_restore file='db.dump':
        docker compose run \
            --no-deps \
            --rm \
            db \
            pg_restore \
                --clean \
                --dbname "{{ DATABASE_URL }}" \
                --if-exists \
                --no-owner \
                --verbose \
                /code/{{ file }}
    

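    With these recipes in place, moving a database between machines or Docker contexts is a two-step dance. The file argument is optional and falls back to db.dump:

    # on the source machine or context
    $ just pg_dump file=backup.dump

    # on the destination machine or context
    $ just pg_restore file=backup.dump
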
    Shoutout to Josh Thomas for help on this recipe since we both iterated on this for several projects.

    Friday June 28, 2024
  • Justfiles, Docker

    🐳 Managing Docker Compose Profiles with Just: Switching Between Default and Celery Configurations

    For a recent client project, we wanted to toggle between various Docker Compose profiles to run the project with or without Celery.

    Using Compose’s profiles option, we can label services that we may not want to start by default. This might look something like this:

    services:
    
      beat:
        profiles:
          - celery
        ...
    
      celery:
        profiles:
          - celery
        ...
    
    
      web:
        ...
    
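    For reference, Compose can toggle profiles straight from the command line with the --profile flag (or the COMPOSE_PROFILES environment variable that the justfile below relies on):

    # start everything, including the celery-labeled services
    $ docker compose --profile celery up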

    We use a casey/just justfile for some of our common workflows, and I realized I could set a COMPOSE_PROFILES environment variable to switch between running a “default” profile and a “celery” profile.

    Using just’s env_var_or_default function, we can read an ENV variable and fall back to a default value for our project.

    # justfile
    
    export COMPOSE_PROFILES := env_var_or_default('COMPOSE_PROFILES', 'default')
    
    @up *ARGS:
        docker compose up {{ ARGS }}
    
    # ... the rest of your justfile...
    
    

    To start our service without Celery, I would run:

    $ just up
    

    To start our service with Celery, I would run:

    $ export COMPOSE_PROFILES=celery
    $ just up
    
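    You can also scope the variable to a single invocation without exporting it:

    $ COMPOSE_PROFILES=celery just up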

    Our COMPOSE_PROFILES environment variable gets passed into our just up recipe; if we don’t set one, it falls back to the default value of default, which skips running the Celery services.

    Tuesday June 25, 2024