• Django, Python

    🐘 Django Migration Operations aka how to rename Models

    Renaming a table in Django seems more complex than it is. Last week, a client asked me how much pain it might be to rename a Django model from Party to Customer. We already used the model’s verbose_name, so it has been referencing the new name for months.

    Renaming the model should be as easy as renaming the model class, updating any foreign key and many-to-many field references in other models, and then running Django’s makemigrations sub-command to see where we are at.

    The main issue with this approach is that Django will attempt to create a new table first, update model references, and then drop the old table.

    Unfortunately, Django will either fail mid-way through this migration and roll the changes back or even worse, it may complete the migration only for you to discover that your new table is empty.

    Deleting data is not what we want to happen.

    As it turns out, Django supports a RenameModel migration option, but it did not prompt me to ask if we wanted to rename Party to Customer.

    I am also more example-driven, and the Django docs don’t have an example of how to use RenameModel. Thankfully, this migration operation is about as straightforward as one can imagine: class RenameModel(old_model_name, new_model_name)

    I re-used the existing migration file that Django created for me. I dropped the CreateModel and DeleteModel operations, added a RenameModel operation, and kept the RenameField operations, which resulted in the following migration:

    from django.db import migrations
    
    
    class Migration(migrations.Migration):
    
        dependencies = [
            ('resources', '0002_alter_party_in_the_usa'),
        ]
    
        operations = [
            migrations.RenameModel('Party', 'Customer'),
            migrations.RenameField('Customer', 'party_number', 'customer_number'),
            migrations.RenameField('AnotherModel', 'party', 'customer'),
        ]
    

    The story’s moral is that you should always check and verify that your Django migrations will perform as you expect before running them in production. Thankfully, we did, even though glossing over them is easy.

    I also encourage you to dive deep into the areas of the Django docs where there aren’t examples. Many areas of the docs may need examples or even more expanded docs, and they are easy to gloss over or get intimidated by.

    You don’t have to be afraid to create and update your migrations by hand. After all, Django migrations are Python code designed to give you a jumpstart. You can and should modify the code to meet your needs. Migration Operations have a clean API once you dig below the surface and understand what options you have to work with.

    Monday July 15, 2024
  • Python

    🦆 DuckDB may be the tool you didn't know you were missing

    🤔 I haven’t fully figured out DuckDB yet, but it’s worth trying out if you are a Python dev who likes to work on data projects or gets frequently tasked with data import projects.

    DuckDB is a fast database engine that lets you read CSV, Parquet, and JSON files and query them using SQL. Instead of importing data into your database, DuckDB enables you to write SQL and run it against these file types.

    I have a YouTube to frontmatter project that can read a YouTube playlist and write out each video to a markdown file. I modified the export script to save the raw JSON output to disk.

    I used DuckDB to read a bunch of JSON files using the following script:

    import duckdb
    
    def main():
        result = duckdb.sql("SELECT id,snippet FROM read_json('data/*.json')").fetchall()
    
        for row in result:
            id, snippet = row
            print(f"{id=}")
            print(snippet["channelTitle"])
            print(snippet["title"])
            print(snippet["publishedAt"])
            print(snippet["description"])
            print()
    
    
    if __name__ == "__main__":
        main()
    

    This script accomplishes several things:

    • It reads over 650 JSON files in about one second.
    • It uses SQL to query the JSON data directly.
    • It extracts specific fields (id and snippet) from each JSON file.

    Performance and Ease of Use

    The speed at which DuckDB processes these files is remarkable. In traditional setups, reading and parsing this many JSON files could take significantly longer and require more complex code.
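
    For contrast, here is a rough standard-library sketch of the same loop, assuming each file holds a single playlist item with the id and snippet keys the SQL query selects:

    import glob
    import json


    def main():
        # Open and parse every JSON file by hand instead of letting DuckDB scan them.
        for path in glob.glob("data/*.json"):
            with open(path) as f:
                item = json.load(f)

            snippet = item["snippet"]
            print(f"id={item['id']!r}")
            print(snippet["title"])


    if __name__ == "__main__":
        main()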

    When to Use DuckDB

    DuckDB shines in scenarios where you need to:

    • Quickly analyze data in files without a formal import process.
    • Perform SQL queries on semi-structured data (like JSON)
    • Process large datasets efficiently on a single machine.

    Conclusion

    DuckDB is worth trying out in your data projects. If you have a lot of data and you need help with what to do with it, being able to write SQL against hundreds of files is powerful and flexible.

    Saturday July 13, 2024
  • Django, Python

    Django Extensions is useful even if you only use show_urls

    Yes, the Django Extensions package is worth installing, especially for its show_urls command, which can be very useful for debugging and understanding your project’s URL configurations.

    Here’s a short example of how to use it. I sometimes want to include a link to the Django Admin in a menu for staff users, and I can never remember which URL name I need to reference to link to it.

    First, you will need to install it via:

    pip install django-extensions
    
    # or if you prefer using uv like me:
    uv pip install django-extensions
    

    Next, you’ll want to add django_extensions to your INSTALLED_APPS in your settings.py file:

    INSTALLED_APPS = [
        ...
        "django_extensions",
    ]
    

    Finally, to run the show_urls management command, call your manage.py script and pass it the following sub-command:

    $ python -m manage show_urls
    

    Which will give this output:

    $ python -m manage show_urls | grep admin
    ...
    /admin/	django.contrib.admin.sites.index	admin:index
    /admin/<app_label>/	django.contrib.admin.sites.app_index	admin:app_list
    /admin/<url>	django.contrib.admin.sites.catch_all_view
    # and a whole lot more...
    

    In this case, I was looking for admin:index, which I can now reference in my HTML document with this menu link/snippet:

    ... 
    <a href="{% url 'admin:index' %}">Django Admin</a>
    ... 
    

    What I like about this approach is that I can now hide or rotate the url pattern I’m using to get to my admin website, and yet Django will always link to the correct one.
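
    As a sketch of what rotating the pattern looks like, the admin prefix below is made up, but the {% url 'admin:index' %} link above keeps resolving to whatever the prefix currently is:

    # urls.py
    from django.contrib import admin
    from django.urls import path

    urlpatterns = [
        # Change this prefix whenever you like; reversing "admin:index"
        # still points at the right place.
        path("secret-admin-location/", admin.site.urls),
    ]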

    Saturday July 6, 2024
  • Python, Justfiles, Docker

    🐳 Using Just and Compose for interactive Django and Python debugging sessions

    When I wrote REST APIs, I spent weeks and months writing tests and debugging without looking at the front end. It’s all JSON, after all.

    For most of my projects, I will open two or three tabs. I’m running Docker Compose in tab one to see the logs as I work. I’ll use the following casey/just recipe to save some keystrokes and to standardize what running my project looks like:

    # tab 1
    $ just up 
    

    In my second tab, I’ll open a shell that is inside my main web or app container so that I can interact with the environment, run migrations, and run tests.

    We can nitpick the meaning of “console” here, but I tend to have another just recipe for “shell” which will open a Django shell using shell_plus or something more interactive:

    # tab 2
    $ just console
    

    In my third tab, I’ll run a shell session for creating git branches, switching git branches, stashing git changes, and running my linter, which I prefer to run by hand.

    # tab 3
    $ echo "I'm boring"
    

    Over the last year or two, the web has shifted back toward doing more frontend work with Django and less with REST. Using ipdb in my view code to figure out what’s going on has been really helpful. Getting ipdb to “just work” takes a few steps in my normal workflow.

    # tab 1 (probably)
    
    # start everything
    $ just start
    
    # stop our web container
    $ just stop web
    
    # start our web container with "--service-ports" 
    $ just start-web-with-debug
    

    The only real magic here is using Docker’s --service-ports, which opens ports so we may connect to the open ipdb session when we open one in our view code.
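
    For context, “opening one in our view code” just means dropping a breakpoint into a view. This is a hedged sketch; the view and template names are made up:

    # views.py
    from django.shortcuts import render


    def checkout(request):
        import ipdb

        # Execution pauses here, and the interactive ipdb prompt appears
        # in the terminal running `docker compose run --service-ports ...`.
        ipdb.set_trace()

        return render(request, "checkout.html")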

    My main justfile for all of these recipes/workflows looks very similar to this:

    # justfile
    set dotenv-load := false
    
    @build *ARGS:
        docker compose build {{ ARGS }}
    
    # opens a console
    @console:
        docker compose run --rm --no-deps utility /bin/bash
    
    @down:
        docker compose down
    
    @start *ARGS:
        just up --detach {{ ARGS }}
    
    @start-web-with-debug:
        docker compose run --service-ports --rm web python -m manage runserver 0.0.0.0:8000
    
    @stop *ARGS:
        docker compose down {{ ARGS }}
    
    @up *ARGS:
        docker compose up {{ ARGS }}
    

    If you work on multiple projects, I encourage you to find patterns you can scale across them. Using Just, Make, shell scripts or even Python lightens the cognitive load when switching between them.

    Sunday June 30, 2024
  • Python

    🚜 Mastodon Bookmark exporter to Markdown/Frontmatter

    I wrote a Mastodon Bookmark exporter tool over the weekend and decided to polish it up and release it tonight.

    I wrote the tool to help me sort out Mastodon posts that I might bookmark to follow up on or write about. I bookmark posts on the go or even from bed, and when I have time, I will pull them back up.

    The Mastodon Bookmark exporter tool reads your Mastodon bookmarks and exports the latest posts to a markdown/frontmatter file.

    I’m releasing the project as a gist under the PolyForm Noncommercial License for personal reasons. If you have licensing questions, contact me directly or through www.revsys.com for commercial inquiries, and we can work something out.

    Monday June 24, 2024
  • Python

    🐍 TIL build-and-inspect-python-package GitHub Action workflow plus some bonus Nox + Tox

    TIL: via @joshthomas via @treyhunner via @hynek about the hynek/build-and-inspect-python-package GitHub Action. 

    This workflow makes it possible for GitHub Actions to read your Python version classifiers to build a matrix or, as Trey put it, “Remove so much junk” which is a pretty good example. 

    As a bonus, check out Hynek’s video on NOX vs TOX – WHAT are they for & HOW do you CHOOSE? 🐍 

    https://www.youtube.com/watch?v=ImBvrDvK-1U

    Both Nox and Tox are great tools that automate testing in multiple Python environments. 

    I prefer Nox because it uses Python to write configs, which fits my brain better. I used Tox for over a decade, and there are some tox.ini files that I dread updating because I can only remember how I got here after a few hours of tinkering. That’s not Tox’s fault. I think that’s just a limitation of ini files and the frustration that comes from being unable to use Python when you have a complex matrix to try and sort out. 

    I recommend trying them out and using the best tool for your brain. There is no wrong path here.

    PS: Thank you, Josh, for bringing this to my attention.

    Friday May 10, 2024
  • Django, Python

    🤖 Super Bot Fight 🥊

    In March, I wrote about my robots.txt research and how I started proactively and defensively blocking AI Agents in my 🤖 On Robots.txt. Since March, I have updated my Django projects to add more robots.txt rules.

    Earlier this week, I ran across this Blockin’ bots. blog post and this example, in which a mod_rewrite rule blocks AI Agents via their User-Agent strings.

    <IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteBase /
    # block “AI” bots
    RewriteCond %{HTTP_USER_AGENT} (AdsBot-Google|Amazonbot|anthropic-ai|Applebot|AwarioRssBot|AwarioSmartBot|Bytespider|CCBot|ChatGPT|ChatGPT-User|Claude-Web|ClaudeBot|cohere-ai|DataForSeoBot|Diffbot|FacebookBot|FacebookBot|Google-Extended|GPTBot|ImagesiftBot|magpie-crawler|omgili|Omgilibot|peer39_crawler|PerplexityBot|YouBot) [NC]
    RewriteRule ^ – [F]
    </IfModule>
    

    Since none of my projects use Apache, and I was short on time, I decided to leave this war to the bots.

    Django Middleware

    I asked ChatGPT to convert this snippet to a piece of Django Middleware called Super Bot Fight. After all, if we don’t have time to keep up with bots, then we could leverage this technology to help fight against them.

    In theory, this snippet passed my eyeball test and was good enough:

    # middleware.py
    
    from django.http import HttpResponseForbidden
    
    # List of user agents to block
    
    BLOCKED_USER_AGENTS = [
        "AdsBot-Google",
        "Amazonbot",
        "anthropic-ai",
        "Applebot",
        "AwarioRssBot",
        "AwarioSmartBot",
        "Bytespider",
        "CCBot",
        "ChatGPT",
        "ChatGPT-User",
        "Claude-Web",
        "ClaudeBot",
        "cohere-ai",
        "DataForSeoBot",
        "Diffbot",
        "FacebookBot",
        "Google-Extended",
        "GPTBot",
        "ImagesiftBot",
        "magpie-crawler",
        "omgili",
        "Omgilibot",
        "peer39_crawler",
        "PerplexityBot",
        "YouBot",
    ]
    
    class BlockBotsMiddleware:
    
        def __init__(self, get_response):
            self.get_response = get_response
    
        def __call__(self, request):
            # Check the User-Agent against the blocked list
            user_agent = request.META.get("HTTP_USER_AGENT", "")
            if any(bot in user_agent for bot in BLOCKED_USER_AGENTS):
                return HttpResponseForbidden("Access denied")
            response = self.get_response(request)
            return response
    

    To use this middleware, you would update your Django settings.py to add it to your MIDDLEWARE setting.

    # settings.py
    
    MIDDLEWARE = [
        ...
        "middleware.BlockBotsMiddleware",
        ...
    ]
    

    Tests?

    If this middleware works for you and you care about testing, then these tests should also work:

    
    import pytest
    
    from django.http import HttpResponse
    from django.test import RequestFactory
    
    from middleware import BlockBotsMiddleware
    
    @pytest.mark.parametrize("user_agent, should_block", [
        ("AdsBot-Google", True),
        ("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)", False),
        ("ChatGPT-User", True),
        ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3", False),
    ])
    def test_user_agent_blocking(user_agent, should_block):
        # Create a request factory to generate request instances
        factory = RequestFactory()
        request = factory.get('/', HTTP_USER_AGENT=user_agent)
    
        # Middleware setup
        middleware = BlockBotsMiddleware(get_response=lambda request: HttpResponse())
        response = middleware(request)
    
        # Check if the response should be blocked or allowed
        if should_block:
            assert response.status_code == 403, f"Request with user agent '{user_agent}' should be blocked."
        else:
            assert response.status_code != 403, f"Request with user agent '{user_agent}' should not be blocked."
    
    

    Enhancements

    To use this code in production, I would normalize the user_agent value and the BLOCKED_USER_AGENTS entries to be case-insensitive.
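
    A minimal sketch of that normalization, assuming we lowercase both sides before comparing:

    # middleware.py (case-insensitive comparison sketch)
    BLOCKED_USER_AGENTS = [
        "adsbot-google",
        "gptbot",
        # ...the rest of the list, stored lowercase
    ]


    def is_blocked(user_agent: str) -> bool:
        # Lowercase the incoming User-Agent so "GPTBot" and "gptbot" both match.
        user_agent = user_agent.lower()
        return any(bot in user_agent for bot in BLOCKED_USER_AGENTS)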

    I would also consider storing my list of user agents in a Django model or using a project like django-robots instead of a hard-coded Python list.

    Thursday April 18, 2024
  • Django, Python

    🚜 Refactoring and fiddling with Django migrations for pending pull requests 🐘

    One of Django’s most powerful features is the ORM, which includes a robust migration framework. One of Django’s most misunderstood features is Django migrations because it just works 99% of the time.

    Even when working solo, Django migrations are highly reliable, working 99.9% of the time and offering better uptime than most web services you may have used last week.

    The most common stumbling block for developers of all skill levels is rolling back a Django migration and prepping a pull request for review.

    I’m not picky about pull requests or git commit history because I default to using the “Squash and merge” feature to turn all pull request commits into one merge commit. The merge commit tells me when, what, and why something changed if I need extra context.

    I am pickier about seeing >2 database migrations for any app unless a data migration is involved. It’s common to see 4 to 20 migrations when someone works on a database feature for a week. Most of the changes tend to be fiddly, where someone adds a field, renames the field, renames it again, and then starts using it, which prompts another null=True change followed by a blank=True migration.

    For small databases, none of this matters.

    For a database with 10s or 100s of millions of records, these small changes can cause minutes of downtime per migration, all for what amounts to a throwaway change. While there are ways to mitigate most migration downtime situations, that’s beside my point today.

    I’m also guilty of being fiddly with my Django model changes because I know I can delete and refactor them before requesting approval. The process I use is probably worth sharing because it comes up with every new client.

    Let’s assume I am working on Django News Jobs, and I am looking over my pull request one last time before I ask someone to review it. That’s when I noticed four migrations that could quickly be rebuilt into one, starting with my 0020* migration in my jobs app.

    The rough steps that I would do are:

    # step 1: see the state of our migrations
    $ python -m manage showmigrations jobs
    jobs
     [X] 0001_initial
     ...
     [X] 0019_alter_iowa_versus_unconn
     [X] 0020_alter_something_i_should_delete
     [X] 0021_alter_uconn_didnt_foul
     [X] 0022_alter_nevermind_uconn_cant_rebound
     [X] 0023_alter_iowa_beats_uconn
     [X] 0024_alter_south_carolina_sunday_by_four
    
    # step 2: rollback migrations to our last "good" state
    $ python -m manage migrate jobs 0019
    
    # step 3: delete our new migrations
    $ rm jobs/migrations/002*
    
    # step 4: rebuild migrations
    $ python -m manage makemigrations jobs

    # step 5: profit
    $ python -m manage migrate jobs
    

    95% of the time, this is all I ever need to do.

    Occasionally, I check out another branch with conflicting migrations, and I’ll get my local database in a weird state.

    In those cases, check out the --fake (“Mark migrations as run without actually running them.”) and --prune (“Delete nonexistent migrations from the django_migrations table.”) options. The fake and prune operations saved me several times when my django_migrations table was out of sync, and I knew that SQL tables were already altered.
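
    Sticking with the jobs example above, those recovery commands look roughly like this:

    # mark the 0019 state as applied without running any SQL
    $ python -m manage migrate jobs 0019 --fake

    # remove django_migrations rows for migration files that no longer exist
    $ python -m manage migrate jobs --prune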

    Why not squashmigrations?

    Excellent question. Squashing migrations is wonderful if you care about keeping all or most of the operations each migration performs. Most of the time, I do not, so I skip it.

    Saturday April 6, 2024
  • Django, Python

    ⛳ Syncing Django Waffle feature flags

    The django-waffle feature flag library is helpful for projects where we want to release and test new features in production and have a controlled rollout. I also like using feature flags for resource-intensive features on a website that we want to toggle off during high-traffic periods. It’s a nice escape hatch to fall back on if we need to turn off a feature and roll out a fix without taking down your website.

    While Waffle is a powerful tool, I understand the challenge of keeping track of feature flags in both code and the database. It’s a pain point that many of us have experienced.

    Waffle has a WAFFLE_CREATE_MISSING_FLAGS=True setting that we can use to tell Waffle to create any missing flags in the database should it find one. While this helps discover which flags our application is using, we need to figure out how to clean up old flags in the long term.

    The pattern I landed on combines storing all our known feature flags and a note about what they do in our main settings file.

    # settings.py
    ... 
    
    WAFFLE_CREATE_MISSING_FLAGS = True

    WAFFLE_FEATURE_FLAGS = {
        "flag_one": "This is a note about flag_one",
        "flag_two": "This is a note about flag_two",
    }
    

    We will use a management command to sync every feature flag we have listed in our settings file, and then we will clean up any missing feature flags.

    # management/commands/sync_feature_flags.py
    import djclick as click
    
    from django.conf import settings
    from waffle.models import Flag
    
    
    @click.command()
    def command():
        # Create flags that don't exist
        for name, note in settings.WAFFLE_FEATURE_FLAGS.items():
            flag, created = Flag.objects.update_or_create(
                name=name, defaults={"note": note}
            )
            if created:
                print(f"Created flag {name} ({flag.pk})")
    
        # Delete flags that are no longer registered in settings
        for flag in Flag.objects.exclude(name__in=settings.WAFFLE_FEATURE_FLAGS.keys()):
            flag.delete()
            print(f"Deleted flag {flag.name} ({flag.pk})")
    
    

    We can use the WAFFLE_CREATE_MISSING_FLAGS setting as a failsafe to create any flags we might have accidentally missed. They will stick out because they will not have a note associated with them.

    This pattern is also helpful in solving similar problems for scheduled tasks, which might also store their schedules in the database.

    Check out this example in the Django Styleguide for how to sync Celery’s scheduled tasks.

    Friday April 5, 2024
  • Django, Python

    ⬆️ The Upgrade Django project

    Upgrade Django is a REVSYS project we created six years ago and launched three years ago.

    The goal of Upgrade Django was to create a resource that made it easy to see at a glance which versions of the Django web framework are maintained and supported. We also wanted to catalog every release and common gotchas and link to helpful information like release notes, blog posts, and the tagged git branch on GitHub.

    We also wanted to make it easier to tell how long a given version of Django would be supported and what phase of its release cycle it is in.

    Future features

    We have over a dozen features planned, but it’s a project that primarily serves its original purpose.

    One feature on my list is that I’d love to see every backward incompatible change between two Django versions. This way, if someone knows their website is running on Django 3.2, they could pick Django 4.2 or Django 5.0 and get a comprehensive list with links to everything they need to upgrade between versions.

    Projects like Upgrade Django are fun to work on because once you collect a bunch of data and start working with it, new ways of comparing and presenting the information become more apparent.

    If you have ideas for improving Upgrade Django that would be useful to your needs, we’d love to hear about them.

    Thursday April 4, 2024
  • Django, Python

    Things I can never remember how to do: Django Signals edition

    I am several weeks into working on a project with my colleague, Lacey Henschel. Today, while reviewing one of her pull requests, I was reminded how to test a Django Signal via mocking.

    Knowing how to test Django signals is valuable to me because I can never remember how to do it, and even with lots of effort, my attempts rarely work without a reference. So bookmark this one, friends. It works.

    Thankfully, she wrote it up in one of her TIL: How I set up django-activity-stream, including a simple test

    https://mastodon.social/@lacey@hachyderm.io

    Monday March 25, 2024
  • Django, Python

    On scratching itches with Python

    Python is such a fantastic glue language. Last night, while watching March Madness basketball games, I had a programming itch I wanted to scratch.

    I dusted off a demo I wrote several years ago. It used Python’s subprocess module to string together a bunch of shell commands that perform a git checkout, run a few commands, and then commit the results. The script worked, but I struggled to get it fully working in a production environment.

    To clean things up and as an excuse to try out a new third-party package, I converted the script to use:

    • GitPython - GitPython is a Python library used to interact with Git repositories.

    • Shelmet - A shell power-up for working with the file system and running subprocess commands.

    • Django Q2 - A multiprocessing distributed task queue for Django based on Django-Q.

    Using Django might have been overkill, but having a Repository model to work with felt nice. Django Q2 was also overkill, but if I put this app into production, I’ll want a task queue, and Django Q2 has a manageable amount of overhead.

    GitPython was a nice improvement over calling git commands directly because their API makes it easier to see which files were modified and to check against existing branch names. I was happy with the results after porting my subprocess commands to the GitPython API.

    The final package I used is a new package called Shelmet, which was both a nice wrapper around subprocess plus they have a nice API for file system operations in the same vein as Python’s Pathlib module.

    Future goals

    I was tempted to cobble together a GitHub bot, but I didn’t need one. I might dabble with the GitHub API more to fork a repo, but for now, this landed in a good place, so when I pick it back up again in a year, I’m starting from a solid base.

    If you want to write a GitHub bot, check out Mariatta’s black_out project.

    Saturday March 23, 2024
  • Django, Python

    Automated Python and Django upgrades

    Recently, I have been maintaining forks for several projects that are no longer maintained. Usually, these are a pain to update, but I have found a workflow that takes the edge off by leveraging pre-commit.

    My process:

    • Fork the project on GitHub to whichever organization I work with or my personal account.
    • Check out a local copy of my forked copy with git.
    • Install pre-commit
    • Create a .pre-commit-config.yaml with ZERO formatting or lint changes. This file will only include django-upgrade and pyupgrade hooks.

    We skip the formatters and linters to avoid unnecessary changes if we want to open a pull request in the upstream project. If the project isn’t abandoned, we will want to do that.

    • For django-upgrade, change the --target-version option to target the latest version of Django I’m upgrading to, which is currently 5.0.
    • For pyupgrade, update the python settings under default_language_version to the latest version of Python that I’m targeting. Currently, that’s 3.12.

    The django-upgrade and pyupgrade hooks apply automated code rewrites, much like a formatter would, and can handle most of the more tedious upgrade steps.

    • Run pre-commit autoupdate to ensure we have the latest version of our hooks.
    • Run pre-commit run --all-files to run pyupgrade and django-upgrade on our project.
    • Run any tests contained in the project and review all changes.
    • Once I’m comfortable with the changes, I commit them all via git and push them upstream to my branch.

    Example .pre-commit-config.yaml config

    From my experience, less is more with this bare-bones .pre-commit-config.yaml config file.

    # .pre-commit-config.yaml
    
    default_language_version:
      python: python3.12
    
    repos:
      - repo: https://github.com/asottile/pyupgrade
        rev: v3.15.1
        hooks:
          - id: pyupgrade
    
      - repo: https://github.com/adamchainz/django-upgrade
        rev: 1.16.0
        hooks:
          - id: django-upgrade
            args: [--target-version, "5.0"]
    

    If I’m comfortable that the project is abandoned, I’ll add ruff support with a more opinionated config to ease my maintenance burden going forward.

    Friday March 22, 2024
  • Python

    Justfile Alfred Plugin

    A few years back, I had a productivity conversation with Jay Miller about Alfred plugins, which led to him sharing his Bunch_Alfred plugin. At the time, I played around with the Bunch.app, a macOS automation tool, and Alfred’s support was interesting.

    I created my Alfred plugin to run Just command runner commands through my Alfred setup. However, I never got around to packaging the plugin or writing its documentation.

    My Alfred plugin runs a Script Filter Input, which reads from a centrally located justfile and generates JSON output of all of the possible recipes. Alfred displays those options, and whichever one you select, Alfred runs that command.

    Alfred plugin showing a Just command with a list of recipe options to pick from.

    I was always unhappy with how the JSON document was generated from my commands, so I dusted off the project over lunch and re-engineered it by adding Pydantic support.

    Alfred just announced support for a new User Interface called Text View, which could make text and markdown output from Python an exciting way to handle snippets and other productive use cases. I couldn’t quite figure it out over lunch, but now I know it’s possible, and I might figure out how to convert my Justfile Alfred plugin to generate better output.

    Tuesday March 19, 2024
  • Python

    Python's UV tool is even better

    Last month, I wrote Python’s UV tool is actually pretty good about Astral’s new Python package installer and resolver uv, and this is a follow-up post.

    Since last month, I have added uv to over a dozen projects, and I recently learned that you could skip the venv step for projects that use containers or CI where the environment is already isolated.

    I mistakenly thought uv required a virtual environment (aka venv), but Josh Thomas recently pointed out that it’s unnecessary.

    The trick is to pass the --system option, and uv will perform a system-wide install. Here’s an example:

    uv pip install --system --requirement requirements.txt
    

    Now that I have seen this, I wish pip also used this approach to avoid developers accidentally installing third-party packages globally.

    local development

    Nothing has changed with my justfile example from last month.

    When I’m working with containers, I create a virtual environment (venv) because I will need most of my project requirements installed outside of the container so that my text editor and LSP can resolve dependencies. uv’s default behavior of respecting a venv is all we need here.

    Every one of my projects has a justfile (it’s like Make but works the same everywhere) with “bootstrap” and “lock” recipes. My “bootstrap” recipe installs everything I need to work with the project locally. I use my “lock” recipe to lock my requirements.txt file to use the exact requirements locally and in production.

    justfile before

    My justfile might look like this:

    @bootstrap
        python -m pip install --upgrade pip
        python -m pip install --upgrade --requirement requirements.in
        
    @lock *ARGS:
        python -m piptools compile {{ ARGS }} ./requirements.in \
            --resolver=backtracking \
            --output-file requirements.txt
    

    justfile after

    For the most part, uv shares the same syntax as pip, so you can start by changing your pip references to uv pip:

    @bootstrap
        python -m pip install --upgrade pip uv
        python -m uv pip install --upgrade --requirement requirements.in
        
    @lock *ARGS:
        python -m uv pip compile {{ ARGS }} requirements.in \
            --resolver=backtracking \
            --output-file requirements.txt
    

    Dockerfiles

    Everyone’s container setup is going to be different, but I use Docker and Orbstack, which use a Dockerfile.

    Dockerfile before

    FROM python:3.12-slim-bookworm
    
    ENV PIP_DISABLE_PIP_VERSION_CHECK 1
    ENV PYTHONDONTWRITEBYTECODE 1
    ENV PYTHONPATH /srv
    ENV PYTHONUNBUFFERED 1
    
    RUN apt-get update
    
    RUN pip install --upgrade pip
    
    COPY requirements.txt /src/requirements.txt
    
    RUN pip install --requirement /src/requirements.txt
    
    WORKDIR /src/
    

    Dockerfile after

    FROM python:3.12-slim-bookworm
    
    ENV PIP_DISABLE_PIP_VERSION_CHECK 1
    ENV PYTHONDONTWRITEBYTECODE 1
    ENV PYTHONPATH /srv
    ENV PYTHONUNBUFFERED 1
    
    RUN apt-get update
    
    RUN pip install --upgrade pip uv  # this is updated
    
    COPY requirements.txt /src/requirements.txt
    
    RUN uv pip install --system --requirement /src/requirements.txt  # this is updated
    
    WORKDIR /src/
    

    GitHub Actions

    GitHub Actions are a little more complicated to explain, but my workflows started similar to this before I made the switch to uv:

    main.yml before

      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
    
      - name: Install dependencies
        run: |
                python -m pip install --requirement requirements.in
    

    main.yml after

    The most significant pain point I ran into was related to GitHub Issue #1386, which has a useable workaround.

      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
    
      - name: Install dependencies
        run: |
          python -m pip install --upgrade uv  # this is new
          python -m uv pip install --system --requirement requirements.in  # this is updated      
    

    Gotchas

    The only gotchas I have encountered with uv is when I’m trying to install a Python package from a remote zip file.

    Previously, I could copy and paste the GitHub repo URL, but uv requires we use the format package-name @ url-to-zip-file.

    requirements.in before

    # requirements.in
    https://github.com/jefftriplett/django-feedreader/archive/main.zip
    

    requirements.in after

    # requirements.in
    django-feedreader @ https://github.com/jefftriplett/django-feedreader/archive/main.zip
    

    Conclusion

    This update removes a few steps from updating your projects, and it should shave a few minutes off of the process.

    I hope this was helpful to anyone who is considering making the switch to uv. I’d love to hear about how much time it saves you.

    Thursday March 14, 2024
  • Django, Python

    On environment variables and dotenv files

    Brett Cannon recently vented some frustrations about .env files.

    I still hate .env files and their lack of a standard

    https://mastodon.social/@brettcannon@fosstodon.org/112056455108582204

    Brett’s thread and our conversation reminded me that my rule for working with dotenv files is to have my environment load them instead of my Python app trying to read from the .env file directly.

    What is a .env (dotenv) file?

    A .env (aka dotenv) is a file that contains a list of key-value pairs in the format of {key}=value.

    At a basic level, this is what a bare-minimum .env file might look like for a Django project.

    # .env
    DEBUG=true
    SECRET_KEY=you need to change this
    

    My go-to library for reading ENV variables is environs. While the environs library can read directly from a dotenv file, don’t do that. I never want my program to read from a file in production because I don’t want a physical file with all of my API keys and secrets.

    Most hosting providers, like Fly.io, have a command line interface for setting these key-value pairs in production to avoid needing a physical dotenv file.

    Instead, we should default to assuming that the ENV variables will be set in our environment, and we should fall back to either a reasonable default value or fail loudly.

    Using the environs library, my Django settings.py file tends to look like this:

    # settings.py
    import environs
    
    env = environs.Env()
    
    # this will default to False if not set.
    DEBUG = env.bool("DJANGO_DEBUG", default=False)
    
    # this will error loudly if not set
    SECRET_KEY = env.str("SECRET_KEY")
    
    # everything else... 
    

    I lean on Docker Compose for local development when I’m building web apps because I might have three to five services running. Compose can read a dotenv file and register its key-value pairs as environment variables.
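
    As a sketch, the Compose side of this is a one-line env_file entry per service; the file and service names here are placeholders:

    # compose.yaml
    services:
      web:
        build: .
        # Compose reads the key=value pairs from .env and exports them
        # as environment variables inside the container.
        env_file:
          - .env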

    .envrc files aren’t .env files

    On macOS, when I’m not developing in a container, I use the direnv application to read an .envrc file, which is very similar to a dotenv file.

    A .envrc is very similar to a .env file, but to register the values into memory, you have to use Bash’s export convention. If you don’t specify export, the environment variables won’t be available in your existing Bash environment.

    # .envrc
    export DEBUG=true
    export SECRET_KEY=you need to change this
    

    I’m a fan of direnv because the utility ensures that my environment variables are only set while I am in the same folder or sub-folders that contain the .envrc file. If I move to a different folder location or project, direnv will automatically unload every environment variable that was previously set.

    This has saved me numerous times over the years when running a command that might upload a file to S3, by ensuring that I’m not uploading to the wrong account because an environment variable is still set from another project.

    Clients are generally understanding, but overriding static media for one client with another client’s files is not a conversation I want to have with any client.

    direnv is excellent insurance against forgetting to unset an environment variable.

    Seeding a .env file

    I prefer to ship an example .env.example file in my projects with reasonable defaults and instructions for copying them over.

    # .env.example
    DEBUG=true
    SECRET_KEY=you need to change this
    

    If you are a casey/just justfile user, I like to ship a just bootstrap recipe that checks if a .env file already exists. If the .env file does not exist, it will copy the example in place.

    My bootstrap recipe typically looks like this:

    # justfile
    bootstrap *ARGS:
        #!/usr/bin/env bash
        set -euo pipefail
    
        if [ ! -f ".env" ]; then
            echo ".env created"
            cp .env.example .env
        fi
    

    How do we keep dotenv files in sync?

    One pain point when working with dotenv files is keeping new environment variables updated when a new variable has been added.

    Thankfully, modenv is an excellent utility that can do precisely this. Running modenv check compares the .env* files in the current folder and tells us which files are missing an environment variable that exists in one but not the others.

    I use modenv check -f to sync up any missing keys with a blank value. This works well to sync up any new environment variables added to our .env.example file with our local .env file.

    Alternatives

    I recently wrote about Using Chamber with Django and managing environment variables, which dives into using Chamber, another tool for managing environment variables.

    If you are working with a team, the 1Password CLI’s op run command is an excellent way to share environment variables securely. The tool is straightforward and can be integrated securely with local workflows and CI with just a few steps.

    Wednesday March 13, 2024
  • Python

    My Python Roots

    Last week, during office hours, I shared the two libraries that were my gateways to learning Python.

    Cog

    I stumbled on Ned Batchelder’s Cog while running an ISP in SWMO in the mid-00s. At the time, I was writing lots of PHP code and had a few layers of ORM code that I could generate with Cog’s help. This code was mainly boilerplate, and Cog was great at templating code. Thankfully, I didn’t need to know Python with Cog to make it work.

    In recent years, I have still used Cog to update docs and to document Justfiles, Click, Typer, and console apps by grabbing the output and embedding it into docs.

    Beautiful Soup

    Beautiful Soup is the library that pushed me to learn Python. It motivated me to take on more advanced feats like installing lxml and processing otherwise unparseable HTML or XML. I have always liked writing web scrapers and processing HTML documents, which is a weird hobby of mine.

    My first Python app

    My friends and I worked in our first post-college dot com job, and Dell was running an incredible deal on their 20" widescreen monitors over the Christmas holiday.

    Dell ran a daily Dell Elf (Delf) contest where you gave them your email address, and they would give you a discount code for their various products.

    The best code was 50% off of their 20" widescreen displays, which was an incredible deal then. The display retailed for $499, so getting one for $249.50 was great. These codes were random, and the odds were 1 in 25 to get one.

    Using Python and having an email catchall, I wrote my first script to submit a series of email addresses until we found the daily 50% off code. At least four or five of my friends and I stocked up on these monitors that fall, and I have been a fan of Dell displays ever since.

    Today

    I still use Cog and Beautiful Soup 4 in several projects, including a few daily drivers. Last year, during their end-of-year sale, I picked up three Dell 27-inch displays, and I still have fond memories of Dell’s displays.

    Monday March 11, 2024
  • Python

    Bootstrap to Tailwind CSS

    I spent a few hours tonight weighing my options to port a few websites from Bootstrap to Tailwind CSS.

    I started with what seems to be the original project, awssat/tailwindo, a PHP console app whose goal was to convert any Bootstrap to Tailwind CSS; it was last updated three years ago. I couldn’t get it to work from the console or via Docker, so I punted and looked at other options.

    This led me to the node-tailwindo project, which did install successfully for me. The node-tailwindo project hadn’t been updated in six years, though, and much has changed in both projects since then.

    Since node-tailwindo installed successfully and seemed to run OK, I ran it on a few projects, including Django Packages, and the results were not terrible. They were not amazing, but things worked.

    I looked at commercial options, and they fall into either Browser Extensions that let you view an existing website with a copy/convert to Tailwind CSS option or tools that rewrite your existing CSS. Neither felt like a good option to me.

    I finally did what any Python developer would and installed BeautifulSoup4. Next, I wrote a script to read all the files in a template folder, and it extracted all the class attributes from the existing HTML. One hundred seventy-six unique classes later, I had my answer.
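
    The script was a throwaway, but it was roughly this shape; the templates path and the *.html suffix are assumptions about the project layout:

    import pathlib

    from bs4 import BeautifulSoup

    classes = set()

    for template in pathlib.Path("templates").rglob("*.html"):
        soup = BeautifulSoup(template.read_text(), "html.parser")
        # Collect every class attribute from every tag in the template.
        for tag in soup.find_all(class_=True):
            classes.update(tag["class"])

    print(len(classes), "unique classes")
    for name in sorted(classes):
        print(name)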

    Writing my upgrade tool felt like a bigger project that I wanted to take on, but it helped me spot a few issues that node-tailwindo would struggle with.

    This is where BeautifulSoup4 shines: I could quickly swap out a few classes before feeding the templates into node-tailwindo, which fixed several bugs where the project was confused by {% block %} and {{ variable }} tags/blocks.

    This might be a project I slowly update as I get bored, since I can probably add 10 to 20 tests over lunch. For a brief minute, I debated if this would be my first Rust app. Spoiler: It is not.

    Saturday March 9, 2024
  • Django, Python

    How to test with Django, parametrize, and lazy fixtures

    This article is a follow-up to my post on How to test with Django and pytest fixtures.

    Here are some notes on how I prefer to test views for a Django application with authentication using pytest-lazy-fixture.

    Fixtures

    pytest-django has a django_user_model fixture/shortcut, which I recommend using to create valid Django user accounts for your project.

    This example assumes that there are four levels of users. We have anonymous (not authenticated), “user,” staff, and superuser levels of permission to work with. Both staff and superusers follow the Django default pattern and have the is_staff and is_superuser boolean fields set appropriately.

    # users/tests/fixtures.py
    import pytest
    
    
    @pytest.fixture
    def password(db) -> str:
        return "password"
    
    
    @pytest.fixture
    def staff(db, django_user_model, faker, password):
        return django_user_model.objects.create_user(
            email="staff@example.com",
            first_name=faker.first_name(),
            is_staff=True,
            is_superuser=False,
            last_name=faker.last_name(),
            password=password,
        )
    
    
    @pytest.fixture()
    def superuser(db, django_user_model, faker, password):
        return django_user_model.objects.create_user(
            email="superuser@example.com",
            first_name=faker.first_name(),
            is_staff=True,
            is_superuser=True,
            last_name=faker.last_name(),
            password=password,
        )
    
    
    @pytest.fixture()
    def user(db, django_user_model, faker, password):
        return django_user_model.objects.create_user(
            email="user@example.com",
            first_name=faker.first_name(),
            last_name=faker.last_name(),
            password=password,
        )
    
    

    Testing our views with different User roles

    We will assume that our website has some working Category pages that can only be viewed by staff or superusers. The lazy_fixture library allows us to pass the name of a fixture using parametrize along with the expected status_code that our view should return.

    If you have never seen parametrize, it is a nice pytest convention that will re-run the same test multiple times while passing a list of parameters into the test to be evaluated.

    The tp function variable is a django-test-plus fixture.

    user, staff, and superuser are fixtures we created above.

    # categories/tests/test_views.py
    import pytest
    
    from pytest import param
    from pytest_lazyfixture import lazy_fixture
    
    
    def test_category_noauth(db, tp):
        """
        GET 'admin/categories/'
        """
        url = tp.reverse("admin:category-list")
    
        # Does this view reject requests without auth?
        response = tp.get(url)
        tp.response_401(response)
    
    
    @pytest.mark.parametrize(
        "testing_user,status_code",
        [
            param(lazy_fixture("user"), 403),
            param(lazy_fixture("staff"), 200),
            param(lazy_fixture("superuser"), 200),
        ],
    )
    def test_category_with_auth(db, tp, testing_user, password, status_code):
        """
        GET 'admin/categories/'
        """
        url = tp.reverse("admin:category-list")
    
        # Does this view work with auth?
        tp.client.login(username=testing_user.email, password=password)
        response = tp.get(url)
        assert response.status_code == status_code
    

    Notes

    Please note: These status codes are more typical for a REST API. So I would adjust any 40x status codes accordingly.

    My goal in sharing these examples is to show that you can get some helpful testing in with a little bit of code, even if the goal isn’t to dive deep and cover everything.

    Updates

    To make my example more consistent, I updated @pytest.mark.django_db() to use a db fixture. Thank you, Ben Lopatin, for the feedback.

    Thursday March 7, 2024
  • Django, Python

    Importing data with Django Ninja's ModelSchema

    I have recently been playing with Django Ninja for small APIs and for leveraging Schema. Specifically, ModelSchema is worth checking out because it’s a hidden gem for working with Django models, even if you aren’t interested in building a Rest API.

    Schemas are very useful to define your validation rules and responses, but sometimes you need to reflect your database models into schemas and keep changes in sync. https://django-ninja.dev/guides/response/django-pydantic/

    One challenge we face is importing data from one legacy database into a new database with a different structure. While we can map old fields to new fields using a Python dictionary, we also need more control over what the data looks like coming back out.

    Thankfully, ModelSchema is built on top of Pydantic’s BaseModel and supports Pydantic’s Field alias feature.

    This allows us to create a ModelSchema based on a LegacyCategory model, and we can build out Field(alias="...") types to change the shape of how the data is returned.

    We can then store the result as a Python dictionary and insert it into our new model. We can also log a JSON representation of the instance to make debugging easier. See Serializing Outside of Views for an overview of how the from_orm API works.

    To test this, I built a proof of concept Django management command using django-click, which loops through all our legacy category models and prints them.

    # management/commands/demo_model_schema.py
    import djclick as click
    
    from ninja import ModelSchema
    from pydantic import Field
    
    from legacy.models import LegacyCategory
    from future.models import Category
    
    
    class LegacyCategorySchema(ModelSchema):
        name: str = Field(alias="cat_name")
        description: str = Field(alias="cat_description")
        active: bool = Field(alias="cat_is_active")
    
        class Meta:
            fields = ["id"]
            model = Category
    
    
    @click.command()
    def main():
        categories = LegacyCategory.objects.all()
        for category in categories:
            data = LegacyCategorySchema.from_orm(category).dict()
            print(data)
            # save to a database or do something useful here
    

    More resources

    If you are curious about what Django Ninja is about, I recommend starting with their CRUD example: Final Code, and working backward. This will give you a good idea of what a finished CRUD Rest API looks like with Django Ninja.

    Wednesday March 6, 2024
  • Python

    Upgrading Python from 3.11 to 3.12 notes

    Recently, I have been slowly moving several of my side projects and client projects from various Python versions to Python 3.12.

    I never see people write about this, so it might be nice to write and share some notes.

    Where to start

    The first thing we do with a relatively simple upgrade is figure out what Python version we use. Thankfully, the project we picked mentioned in the README.md that it was using Python 3.11.

    Once we know which version of Python we are using, we can open up iTerm and get a git checkout of the project.

    Next, we will run git grep 11, where “11” is the shortened form of the Python version that we are running. There are so many variations of 3.11 and 311 that using the minor version tends to be about right.

    $ git grep 11
    ... really long list...
    .github/workflows/actions.yml:      - name: Set up Python 3.11
    .github/workflows/actions.yml:          python-version: '3.11'
    .github/workflows/actions.yml:      - name: Set up Python 3.11
    .github/workflows/actions.yml:          python-version: '3.11'
    .pre-commit-config.yaml:  python: python311
    .pre-commit-config.yaml:        args: [--py311-plus]
    README.md:This project will use Python 3.11, Docker, and Docker Compose.
    README.md:Make a Python 3.11.x virtualenv.
    docker/Dockerfile:FROM python:3.11-slim as builder-py
    docker/Dockerfile:FROM python:3.11-slim AS release
    pyproject.toml:requires-python = ">= 3.11"
    pyproject.toml:# Assume Python >=3.11.
    pyproject.toml:target-version = "py311"
    requirements.txt:# This file is autogenerated by pip-compile with Python 3.11
    ... lots and lots of files...
    

    This output will give us a long list of files. Usually, this is 100s or 1000s of files we will pipe or copy into our code editor. We will make a few passes to remove all of the CSS, SVG, and HTML files in the list, and that pares down the results to half a dozen or a dozen files.

    Create a new git branch

    Next, we will create a new git branch called upgrade-to-python-3.12, and we will open each file one by one, and replace every “3.11” and “311” reference with “3.12” and “312” respectively.

    Lint/format our code base

    Once we have all of our files updated, we will commit everything. Then we will note special files like .pre-commit-config.yaml and pyproject.toml, which impact how our Python files are linted and formatted. We will run pre-commit immediately after and commit any formatting changes.

    Rebuild our Docker image

    Since this project contains a docker/Dockerfile, which tells us the project uses Docker, we will need to rebuild our container image and note anything that breaks.

    Re-pin/freeze our Python dependencies

    Next, we will run pip-tools compile from within our newly rebuilt Docker container to build a new requirements.txt using Python 3.12.
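
    That step is roughly the following one-liner, where the web service name is an assumption about this project’s Compose file:

    $ docker compose run --rm web \
        python -m piptools compile requirements.in \
            --output-file requirements.txt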

    Re-rebuild our Docker image (again)

    “Insert Xzibit Yo Dawg meme." Next, we rebuild our Docker image using the newly pinned requirements.txt file, and this should be our final image.

    Did our tests pass?

    Assuming Docker builds cleanly, we will run our test suite using pytest. Once our tests pass, we’ll commit any uncommitted changes, git push our branch to GitHub, and open a Pull Request for review.

    Did our tests pass in CI?

    If our tests pass on GitHub Actions in CI, then we know our upgrade was successful, and we are reasonably confident that the change is safe to merge.

    When things don’t “just work.”

    If you keep up with your upgrades, most of the time, everything works. Half a dozen projects did work for me, but I had one that did not work on Monday. There was a sub-dependency issue, so I closed my branch and opened a new issue to revisit this upgrade once the next Python version, 3.12.3, is released.

    Even though this wasn’t a Python 3.12.2 bug, it takes the Python ecosystem time to catch up with newer versions. Since Python 3.11 is still supported for another 3 years and 7 months (as of this writing), it won’t hurt to wait a few weeks or months and revisit these changes.

    If you are curious about how I decide when to adopt a new version, I wrote about that last month: Choosing the Right Python and Django Versions for Your Projects

    Tuesday March 5, 2024
  • Python

    On pip isolation

    I saw this post by Trey Hunner about pip isolation, and I wanted to share a third method.

    I’ve just updated my ~/.config/pip/pip.conf & my dotfiles repo to disallow pip installing outside virtual environments! 🎉

    TIL 2 things about #Python’s pip:

    1. pip has a config file. If I ever knew this, I’d forgotten.

    2. pip has an option that stops it from working outside of a virtual environment!

    https://mastodon.social/@treyhunner/112032637878747686

    To Trey’s point, I never want pip to easily install anything globally. If I want something installed globally, I can jump through a few hoops to avoid polluting my global pip cache.

    My preferred way of disallowing pip installation outside virtual environments is to use the PIP_REQUIRE_VIRTUALENV environment variable.

    I have export PIP_REQUIRE_VIRTUALENV=true set in my .bash_profile, which is part of my dotfiles. I prefer the ENV approach because I share my files over many computers, and it’s one less file to keep up with.
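
    In other words, the whole setup is one line in a shell profile:

    # ~/.bash_profile
    export PIP_REQUIRE_VIRTUALENV=true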

    When I want to pip install something globally, I use pipx, which installs each Python application into its isolated environment.

    For the few times that I do need to install a Python application globally, I use:

    PIP_REQUIRE_VIRTUALENV=false python -m pip install \
        --upgrade \
        pip \
        pipx
    

    I have this recipe baked into my global justfile so I can quickly apply upgrades.
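
    That recipe is a thin wrapper around the command above; the recipe name here is made up:

    # justfile
    @upgrade-global-tools:
        PIP_REQUIRE_VIRTUALENV=false python -m pip install \
            --upgrade \
            pip \
            pipx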

    Monday March 4, 2024
  • Django, Python

    New `django-startproject` update

    I updated my django-startproject project today to support the latest versions of Django, Python, Compose, and other tools I’m a fan of. I use django-startproject to spin up projects that need some batteries quickly, but not every battery.

    Features:

    • Django 5.0
    • Python 3.12
    • Docker Compose 3
    • Adds casey/just recipes/workflows (Just is a command runner, not a build tool)
    • Adds uv support

    uv is the newest addition, which is a Python package installer and pip-tools replacement. It’s not a 100% drop-in replacement for pip and pip-tools, but it cuts my build times in half, and I have yet to hit any significant show-stoppers.

    Saturday March 2, 2024
  • Python

    Python's UV tool is actually pretty good

    I carved out some time recently to start playing with the new Python package installer and resolver, uv.

    uv makes big promises and claims to be 10-100x faster than pip and pip-tools. From my experiments over the last few weeks, it lives up to this promise.

    I’m using it locally for my virtual environments, in my Dockerfiles to rebuild my containers, and for CI using GitHub Actions. Across the board, anything I do with pip or pip-tools is remarkably faster.

    My average GitHub Actions CI workflows dropped from ~2 minutes to 50 seconds. This cuts the minutes I use in half and, in theory, my monthly bill in half.

    My goal in sharing my configs is more “show” than “tell” because I will copy and paste these for weeks and months to come.

    local development

    Every one of my projects has a justfile (it’s like Make but works the same everywhere) with “bootstrap” and “lock” recipes. My “bootstrap” recipe installs everything I need to work with the project locally. I use my “lock” recipe to lock my requirements.txt file so that I’m using the exact requirements locally and in production.

    justfile before

    My justfile might look like this:

    @bootstrap
        python -m pip install --upgrade pip
        python -m pip install --upgrade --requirement requirements.in
        
    @lock *ARGS:
        python -m piptools compile {{ ARGS }} ./requirements.in \
            --resolver=backtracking \
            --output-file ./requirements.txt
    

    justfile after

    For the most part, uv shares the same syntax as pip, so you can start by changing your pip references to uv pip:

    @bootstrap
        python -m pip install --upgrade pip uv
        python -m uv pip install --upgrade --requirement requirements.in
        
    @lock *ARGS:
        python -m uv pip compile {{ ARGS }} ./requirements.in \
            --resolver=backtracking \
            --output-file ./requirements.txt
    

    Dockerfiles

    Everyone’s container setup is going to be different, but I use Docker and Orbstack, which use a Dockerfile.

    Dockerfile before

    FROM python:3.12-slim-bookworm
    
    ENV PIP_DISABLE_PIP_VERSION_CHECK 1
    ENV PYTHONDONTWRITEBYTECODE 1
    ENV PYTHONPATH /srv
    ENV PYTHONUNBUFFERED 1
    
    RUN apt-get update
    
    RUN pip install --upgrade pip
    
    COPY requirements.txt /src/requirements.txt
    
    RUN pip install --requirement /src/requirements.txt
    
    WORKDIR /src/
    

    Dockerfile after

    FROM python:3.12-slim-bookworm
    
    ENV PATH /venv/bin:$PATH  # this is new
    ENV PIP_DISABLE_PIP_VERSION_CHECK 1
    ENV PYTHONDONTWRITEBYTECODE 1
    ENV PYTHONPATH /srv
    ENV PYTHONUNBUFFERED 1
    
    RUN apt-get update
    
    RUN pip install --upgrade pip uv  # this is updated
    
    RUN python -m uv venv /venv  # this is new
    
    COPY requirements.txt /src/requirements.txt
    
    RUN uv pip install --requirement /src/requirements.txt  # this is updated
    
    WORKDIR /src/
    

    GitHub Actions

    GitHub Actions are a little harder to explain, but my workflows started off similar to this before I made the switch to uv:

    main.yml before

    
      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
    
      - name: Install dependencies
        run: |
                python -m pip install --requirement requirements.in
    
      - name: Collect Static Assets
        run: |
                python -m manage collectstatic --noinput
    

    main.yml after

    The biggest pain point that I ran into along the way was related to GitHub Issue #1386, which has a useable workaround.

    
      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
    
      - name: Install dependencies
        run: |
          python -m pip install uv
          python -m uv venv .venv
          echo "VIRTUAL_ENV=.venv" >> $GITHUB_ENV
          echo "$PWD/.venv/bin" >> $GITHUB_PATH
          python -m uv pip install --requirement requirements.in      
    
      - name: Collect Static Assets
        run: |
          . .venv/bin/activate
          python -m manage collectstatic --noinput      
    

    Conclusion

    I hope this was helpful to anyone who is considering making the switch to uv. I’d love to hear about how much time it saves you.

    Updates

    2024-03-08 - I modified the ENV PATH statement to prepend instead of replacing the value.

    Thursday February 29, 2024
  • Django, Python

    Using Django Q2

    I’m long overdue to write about how Django Q2 has become part of my development toolkit. As the maintained successor to Django Q, Django Q2 extends Django to handle background tasks and scheduled jobs.

    Django Q2 is flexible in managing tasks, whether sending out daily emails or performing hourly tasks like checking RSS feeds. The project works seamlessly with Django, making it one of the more straightforward background task solutions to integrate into your projects.

    Using Django Q2 involves passing a method or a string reference to a method to an async_task() function, which will run in the background.
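
    Here is a minimal sketch of that call; the task function, its module path, and the user_id argument are made up for illustration:

    from django_q.tasks import async_task


    def send_welcome_email(user_id):
        ...  # this runs later in a background worker, not in the request


    # Queue the task by passing the function itself...
    async_task(send_welcome_email, user_id=42)

    # ...or by passing a dotted-path string reference to it.
    async_task("accounts.tasks.send_welcome_email", user_id=42)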

    One feature of Django Q2 that particularly impresses me is its adaptability to various databases. Whether your project uses the default Django database or something more scalable like Redis, Django Q2 fits perfectly. This flexibility means that a database queue suffices without any hiccups for most of my projects, even those that are small to medium.

    Unlike other task queues that require managing multiple processes or services, Django Q2 keeps it simple. The only necessity is to have the qcluster management command running, which is a breeze because you only need to run one service to handle everything.
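
    Starting that worker is a single command, shown here with the python -m manage convention used elsewhere on this site:

    $ python -m manage qcluster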

    Django Q2’s flexibility, ease of use, and seamless integration with Django make it an excellent tool to reach for when you need background tasks.

    Tuesday February 27, 2024