• Python, Docker, UV, Today I Learned

    📓 My notes on publishing a Python package with UV and building a custom GitHub Action for files-to-claude-xml

    My new Python application files-to-claude-xml is now on PyPI, which means it is packaged and pip installable. My preferred way of running files-to-claude-xml is via UV’s tool run, which installs it if it isn’t already installed and then executes it.

    $ uv tool run files-to-claude-xml --version
    
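    If you prefer plain pip, a standard install also works since the package is published on PyPI:

    $ pip install files-to-claude-xml
    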

    Publishing on PyPI with UV

    UV has both build and publish commands, so I took them for a spin today.

    uv build just worked, and a Python package was built.

    When I tried uv publish, it prompted me for auth settings, which required logging in to PyPI to create a token.

    I added those to my local ENV variables, which I manage with direnv.

    export UV_PUBLISH_PASSWORD=<your-PyPI-token-here>
    export UV_PUBLISH_USERNAME=__token__
    

    Once both were set and picked up by direnv, uv publish uploaded my package to PyPI.
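
    With the token in place, a full release comes down to the two commands covered above:

    $ uv build
    $ uv publish
    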

    GitHub Action

    To make files-to-claude-xml easier to run on GitHub, I created a custom action to build a _claude.xml from the GitHub repository.

    To use this action, I wrote this example workflow, which runs from the files-to-claude-xml-example repository:

    name: Convert Files to Claude XML

    on:
      push

    jobs:
      convert-to-xml:
        runs-on: ubuntu-latest
        steps:
        - uses: actions/checkout@v4
        - name: Convert files to Claude XML
          uses: jefftriplett/files-to-claude-xml-action@main
          with:
            files: |
              README.md
              main.py
            output: '_claude.xml'
            verbose: 'true'
        - name: Upload XML artifact
          uses: actions/upload-artifact@v4
          with:
            name: claude-xml
            path: _claude.xml
    

    My GitHub action is built with a Dockerfile, which installs files-to-claude-xml.

    # Dockerfile
    FROM ghcr.io/astral-sh/uv:bookworm-slim

    ENV UV_LINK_MODE=copy

    RUN --mount=type=cache,target=/root/.cache/uv \
        --mount=type=bind,source=uv.lock,target=uv.lock \
        --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
        uv sync --frozen --no-install-project

    WORKDIR /app

    ENTRYPOINT ["uvx", "files-to-claude-xml"]
    

    To turn a GitHub repository into a runnable GitHub Action, an action.yml file needs to exist in the repository. This file describes the input arguments and which Dockerfile or command to run.

    # action.yml
    name: 'Files to Claude XML'
    description: 'Convert files to XML format for Claude'
    inputs:
      files:
        description: 'Input files to process'
        required: true
        type: list
      output:
        description: 'Output XML file path'
        required: false
        default: '_claude.xml'
      verbose:
        description: 'Enable verbose output'
        required: false
        default: 'false'
      version:
        description: 'Display the version number'
        required: false
        default: 'false'
    runs:
      using: 'docker'
      image: 'Dockerfile'
      args:
        - ${{ join(inputs.files, ' ') }}
        - --output
        - ${{ inputs.output }}
        - ${{ inputs.verbose == 'true' && '--verbose' || '' }}
        - ${{ inputs.version == 'true' && '--version' || '' }}
    

    Overall, this works. Prompting Claude helped me figure it out, which felt fairly satisfying given the goal of files-to-claude-xml.

    Wednesday October 16, 2024
  • Django, Python, Justfiles, Docker, Today I Learned

    🐳 Using Just and Compose for interactive Django and Python debugging sessions

    When I wrote REST APIs, I spent weeks and months writing tests and debugging without looking at the front end. It’s all JSON, after all.

    For most of my projects, I will open two or three tabs. I’m running Docker Compose in tab one to see the logs as I work. I’ll use the following casey/just recipe to save some keystrokes and to standardize what running my project looks like:

    # tab 1
    $ just up 
    

    In my second tab, I’ll open a shell that is inside my main web or app container so that I can interact with the environment, run migrations, and run tests.

    We can nitpick the meaning of “console” here, but I tend to have another just recipe for “shell” which will open a Django shell using shell_plus or something more interactive:

    # tab 2
    $ just console
    
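    The “shell” recipe itself isn’t shown in this post; a minimal sketch might look something like the following (the web service name and django-extensions’ shell_plus are assumptions, not taken from my actual justfile):

    # justfile (sketch)
    @shell:
        docker compose run --rm web python -m manage shell_plus
    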

    In my third tab, I’ll run a shell session for creating git branches, switching git branches, stashing git changes, and running my linter, which I prefer to run by hand.

    # tab 3
    $ echo "I'm boring"
    

    Over the last year or two, the web has swung back toward doing more frontend work with Django and less with REST. Using ipdb in my view code to figure out what’s going on has been really helpful. Getting ipdb to “just work” takes a few steps in my normal workflow.

    # tab 1 (probably)
    
    # start everything
    $ just start
    
    # stop our web container
    $ just stop web
    
    # start our web container with "--service-ports"
    $ just start-web-with-debug
    

    The only real magic here is using Docker’s --service-ports, which opens ports so we may connect to the open ipdb session when we open one in our view code.
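
    For reference, the breakpoint itself is a one-liner in whatever view you are debugging. Here is a minimal sketch (the view is made up, and it assumes ipdb is installed in the web container):

    # views.py (sketch)
    from django.http import HttpResponse

    def index(request):
        import ipdb; ipdb.set_trace()  # execution pauses here; interact via the attached container session
        return HttpResponse("ok")
    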

    My main justfile for all of these recipes/workflows looks very similar to this:

    # justfile
    set dotenv-load := false
    
    @build *ARGS:
        docker compose build {{ ARGS }}
    
    # opens a console
    @console:
        docker compose run --rm --no-deps utility /bin/bash
    
    @down:
        docker compose down
    
    @start *ARGS:
        just up --detach {{ ARGS }}
    
    @start-web-with-debug:
        docker compose run --service-ports --rm web python -m manage runserver 0.0.0.0:8000
    
    @stop *ARGS:
        docker compose down {{ ARGS }}
    
    @up *ARGS:
        docker compose up {{ ARGS }}
    

    If you work on multiple projects, I encourage you to find patterns you can scale across them. Using Just, Make, shell scripts or even Python lightens the cognitive load when switching between them.

    Sunday June 30, 2024
  • Docker, Postgres

    🐘 Docker Postgres Autoupgrades

    Upgrading Postgres in Docker environments can be daunting, but keeping your database up-to-date is essential for performance, security, and access to new features. While there are numerous guides on manually upgrading Postgres, the process can often be complex and error-prone. Fortunately, the pgautoupgrade Docker image simplifies this process, automating the upgrade dance for us.

    The Challenge of Upgrading Postgres

    For many developers, upgrading Postgres involves several manual steps: backing up data, migrating schemas, ensuring compatibility, and testing thoroughly. Mistakes during these steps can lead to downtime or data loss, making the upgrade process a nerve-wracking experience.

    The pgautoupgrade Docker image is designed to handle the upgrade process seamlessly. Using it in place of the base Postgres image allows you to automate the upgrade steps, reducing the risk of errors and saving valuable time.

    How to Use pgautoupgrade

    While you can use the pgautoupgrade image directly with Docker, I prefer to use it as my default development image.
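
    If you want to try it outside of Compose first, a one-off docker run along these lines should work (the volume name and POSTGRES_PASSWORD value are assumptions carried over from the stock Postgres image):

    $ docker run --rm \
        --env POSTGRES_PASSWORD=postgres \
        --volume postgres_data:/var/lib/postgresql/data \
        pgautoupgrade/pgautoupgrade:latest
    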

    I set up my compose.yml with pgautoupgrade similar to this:

    # compose.yml
    services:
      db:
        image: "pgautoupgrade/pgautoupgrade:latest"
        volumes:
          - postgres_data:/var/lib/postgresql/data/
    # ...
    

    Instead of using the latest version of Postgres, pgautoupgrade can be set to a specific version. This is nice if you want to match whichever version of Postgres you use in production or if you have extensions that might not be ready to move.

    # compose.yml
    services:
      db:
        image: "pgautoupgrade/pgautoupgrade:16-alpine"
        volumes:
          - postgres_data:/var/lib/postgresql/data/
    # ...
    

    Overall, I’m happy with pgautoupgrade. Please note that using pgautoupgrade does not mean you can skip making data backups.

    See my last article, 🐘 A Just recipe to backup and restore a Postgres database, for tips on automating pg_dump and pg_restore.

    Saturday June 29, 2024
  • Justfiles, Docker, Postgres

    🐘 A Just recipe to backup and restore a Postgres database

    I have used this casey/just recipe to help back up and restore my Postgres databases from my Docker containers.

    I work with a few machines, and it’s an excellent way to create a database dump from one machine and then restore it from another machine. I sometimes use it to test data migrations because restoring a database dump takes a few seconds.

    I have been migrating from Docker to OrbStack, and the only real pain point is moving data from one volume to another. I sometimes need to switch between the two, so I have recipes set to back up and restore my database from one context to another.

    # justfile
    
    DATABASE_URL := env_var_or_default('DATABASE_URL', 'postgres://postgres@db/postgres')
    
    # dump database to file
    @pg_dump file='db.dump':
        docker compose run \
            --no-deps \
            --rm \
            db \
            pg_dump \
                --dbname "{{ DATABASE_URL }}" \
                --file /code/{{ file }} \
                --format=c \
                --verbose
    
    # restore database dump from file
    @pg_restore file='db.dump':
        docker compose run \
            --no-deps \
            --rm \
            db \
            pg_restore \
                --clean \
                --dbname "{{ DATABASE_URL }}" \
                --if-exists \
                --no-owner \
                --verbose \
                /code/{{ file }}
    
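    With those recipes in place, moving data between machines or Docker contexts is two commands, with the filename argument defaulting to db.dump:

    $ just pg_dump
    $ just pg_restore
    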

    Shoutout to Josh Thomas for help on this recipe since we both iterated on this for several projects.

    Friday June 28, 2024
  • Justfiles, Docker

    🐳 Managing Docker Compose Profiles with Just: Switching Between Default and Celery Configurations

    For a recent client project, we wanted to toggle between various Docker Compose profiles to run the project with or without Celery.

    Using Compose’s profiles option, we can add a label to services that we may not want to start by default. That might look something like this:

    services:
    
      beat:
        profiles:
          - celery
        ...
    
      celery:
        profiles:
          - celery
        ...

      web:
        ...
    

    We use a casey/just justfile for some of our common workflows, and I realized I could set a COMPOSE_PROFILES environment variable to switch between running a “default” profile and a “celery” profile.

    Using just’s env_var_or_default function, we can read an ENV variable and fall back to a default value for our project.

    # justfile
    
    export COMPOSE_PROFILES := env_var_or_default('COMPOSE_PROFILES', 'default')
    
    @up *ARGS:
        docker compose up {{ ARGS }}
    
    # ... the rest of your justfile...
    
    

    To start our service without Celery, I would run:

    $ just up
    

    To start our service with Celery, I would run:

    $ export COMPOSE_PROFILES=celery
    $ just up
    

    Our COMPOSE_PROFILES environment variable will get passed into our just up recipe, and if we don’t include one, it will have a default value of default, which will skip running the Celery service.
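
    For a one-off run, the variable can also be set inline instead of exported:

    $ COMPOSE_PROFILES=celery just up
    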

    Tuesday June 25, 2024