Injecting Pytest fixtures without cluttering test signatures

· 2 min

Sometimes, when writing tests in Pytest, I find myself using fixtures that the test function/method doesn’t directly reference. Instead, Pytest runs the fixture, and the test function implicitly leverages its side effects. For example:

import os
from collections.abc import Iterator
from unittest.mock import Mock, patch
import pytest


# Define an implicit environment mock fixture that patches os.environ
@pytest.fixture
def mock_env() -> Iterator[None]:
    with patch.dict("os.environ", {"IMPLICIT_KEY": "IMPLICIT_VALUE"}):
        yield


# Define an explicit service mock fixture
@pytest.fixture
def mock_svc() -> Mock:
    service = Mock()
    service.process.return_value = "Explicit Mocked Response"
    return service


# IDEs tend to dim out unused parameters like mock_env
def test_stuff(mock_svc: Mock, mock_env: None) -> None:
    # Use the explicit mock
    response = mock_svc.process()
    assert response == "Explicit Mocked Response"
    mock_svc.process.assert_called_once()

    # Assert the environment variable patched by mock_env
    assert os.environ["IMPLICIT_KEY"] == "IMPLICIT_VALUE"

In the test_stuff function above, we directly use the mock_svc fixture but not mock_env. Instead, we expect Pytest to run mock_env, which modifies the environment variables. This works, but IDEs often mark mock_env as an unused parameter and dim it out.
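
One way to keep such fixtures out of the signature is Pytest’s usefixtures marker, which requests a fixture by name without adding a parameter. A minimal sketch, reusing the fixtures above:

@pytest.mark.usefixtures("mock_env")
def test_stuff(mock_svc: Mock) -> None:
    # Use the explicit mock
    response = mock_svc.process()
    assert response == "Explicit Mocked Response"
    mock_svc.process.assert_called_once()

    # mock_env still runs and patches os.environ, but it no longer
    # shows up as an unused parameter
    assert os.environ["IMPLICIT_KEY"] == "IMPLICIT_VALUE"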

Explicit method overriding with @typing.override

· 2 min

Although I’ve been using Python 3.12 in production for nearly a year, one neat feature in the typing module that escaped me was the @override decorator. Proposed in PEP 698, it’s been hanging out in typing_extensions for a while. This is one of those small features you either don’t care about or get totally psyched over. I’m definitely in the latter camp.

In languages like C#, Java, and Kotlin, explicit overriding is required. For instance, in Java, you use @Override to make it clear you’re overriding a method in a subclass. If you mess up the method name or if the method doesn’t exist in the superclass, the compiler throws an error. Now, with Python’s @override decorator, we get similar benefits, though only if you’re using a static type checker.
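
A minimal sketch of how this plays out (the classes here are made up for illustration):

from typing import override  # Python 3.12+; typing_extensions before that


class Animal:
    def speak(self) -> str:
        return "..."


class Cat(Animal):
    @override
    def speak(self) -> str:  # OK: Animal defines speak
        return "meow!!"


class Dog(Animal):
    @override
    def speek(self) -> str:  # Typo: mypy reports that no base method exists
        return "woof!"

At runtime the decorator only sets an __override__ attribute on the function; the actual check happens entirely in the type checker.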

Quicker startup with module-level __getattr__

· 4 min

This morning, someone on Twitter pointed me to PEP 562, which introduces __getattr__ and __dir__ at the module level. While __dir__ helps control which attributes dir(module) returns, __getattr__ is the more interesting addition.

The __getattr__ method in a module works similarly to how it does in a Python class. For example:

class Cat:
    def __getattr__(self, name: str) -> str:
        if name == "voice":
            return "meow!!"
        raise AttributeError(f"Attribute {name} does not exist")


# Try to access 'voice' on Cat
cat = Cat()
cat.voice  # Returns "meow!!"

# Raises AttributeError: Attribute something_else does not exist
cat.something_else

In this class, __getattr__ is called only when normal attribute lookup fails, letting you control how missing attributes behave. Since Python 3.7, you can also define __getattr__ at the module level to handle attribute access on the module itself.
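
One popular use, and where the startup win comes from, is deferring expensive imports until a symbol is first accessed. A sketch with hypothetical submodule names:

# mypackage/__init__.py

import importlib
from types import ModuleType

# Hypothetical heavy submodules we don't want to import eagerly
_LAZY_SUBMODULES = {"parser", "renderer"}


def __getattr__(name: str) -> ModuleType:
    if name in _LAZY_SUBMODULES:
        # Import on first access, then cache the module in the
        # module globals so __getattr__ isn't called again for it
        module = importlib.import_module(f"{__name__}.{name}")
        globals()[name] = module
        return module
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")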

Shades of testing HTTP requests in Python

· 5 min

Here’s a Python snippet that makes an HTTP POST request:

# script.py

import httpx
from typing import Any


async def make_request(url: str) -> dict[str, Any]:
    headers = {"Content-Type": "application/json"}

    async with httpx.AsyncClient(headers=headers) as client:
        response = await client.post(
            url,
            json={"key_1": "value_1", "key_2": "value_2"},
        )
        return response.json()

The function make_request makes an async HTTP request with the HTTPX library. Running this with asyncio.run(make_request("https://httpbin.org/post")) gives us the following output:

{
  "args": {},
  "data": "{\"key_1\": \"value_1\", \"key_2\": \"value_2\"}",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Content-Length": "40",
    "Content-Type": "application/json",
    "Host": "httpbin.org",
    "User-Agent": "python-httpx/0.27.2",
    "X-Amzn-Trace-Id": "Root=1-66d5f7b0-2ed0ddc57241f0960f28bc91"
  },
  "json": {
    "key_1": "value_1",
    "key_2": "value_2"
  },
  "origin": "95.90.238.240",
  "url": "https://httpbin.org/post"
}

We’re only interested in the json field and want to assert in our test that making the HTTP call returns the expected values.
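
One shade of doing that (a sketch assuming the third-party respx and pytest-asyncio packages) is to intercept the request at the transport level and return a canned response:

# test_script.py

import httpx
import pytest
import respx

from script import make_request


@pytest.mark.asyncio
async def test_make_request() -> None:
    with respx.mock:
        # Register a canned response for the POST route
        respx.post("https://httpbin.org/post").mock(
            return_value=httpx.Response(
                200,
                json={"json": {"key_1": "value_1", "key_2": "value_2"}},
            )
        )
        result = await make_request("https://httpbin.org/post")
        assert result["json"] == {"key_1": "value_1", "key_2": "value_2"}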

Taming parametrize with pytest.param

· 4 min

I love pytest.mark.parametrize, so much so that I sometimes shoehorn my tests to fit into it. But the default style of writing tests with parametrize can quickly turn into an unreadable mess as the test complexity grows. For example:

import pytest
from math import atan2


def polarify(x: float, y: float) -> tuple[float, float]:
    r = (x**2 + y**2) ** 0.5
    theta = atan2(y, x)
    return r, theta


@pytest.mark.parametrize(
    "x, y, expected",
    [
        (0, 0, (0, 0)),
        (1, 0, (1, 0)),
        (0, 1, (1, 1.5707963267948966)),
        (1, 1, (2**0.5, 0.7853981633974483)),
        (-1, -1, (2**0.5, -2.356194490192345)),
    ],
)
def test_polarify(x: float, y: float, expected: tuple[float, float]) -> None:
    # pytest.approx helps us ignore floating point discrepancies
    assert polarify(x, y) == pytest.approx(expected)

The polarify function converts Cartesian coordinates to polar coordinates. We’re using @pytest.mark.parametrize in its standard form to test different conditions.
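
Wrapping each case in pytest.param lets you attach a readable id (the labels below are my own), which shows up in the test report instead of a raw value tuple:

@pytest.mark.parametrize(
    "x, y, expected",
    [
        pytest.param(0, 0, (0, 0), id="origin"),
        pytest.param(1, 0, (1, 0), id="positive-x-axis"),
        pytest.param(0, 1, (1, 1.5707963267948966), id="positive-y-axis"),
        pytest.param(1, 1, (2**0.5, 0.7853981633974483), id="first-quadrant"),
        pytest.param(-1, -1, (2**0.5, -2.356194490192345), id="third-quadrant"),
    ],
)
def test_polarify(x: float, y: float, expected: tuple[float, float]) -> None:
    assert polarify(x, y) == pytest.approx(expected)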

Log context propagation in Python ASGI apps

· 7 min

Let’s say you have a web app that emits log messages from different layers. Your log shipper collects and sends these messages to a destination like Datadog. One common requirement is to tag the log messages with shared attributes that you can later use to query them.

In distributed tracing, this tagging is usually known as context propagation: you attach contextual information to your log messages so you can query on it later. However, if you had to collect the context at each layer of your application and pass it manually to the downstream ones, the whole process would be quite painful.
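
One common way around that in Python is contextvars paired with a logging filter: middleware sets the context once per request, and every record passing through gets stamped automatically. A minimal sketch with a hypothetical request_id attribute:

import logging
from contextvars import ContextVar

# Set once per request, e.g., in an ASGI middleware
request_id_var: ContextVar[str] = ContextVar("request_id", default="-")


class ContextFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Stamp the current context onto every record
        record.request_id = request_id_var.get()
        return True


logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request_id)s %(levelname)s %(message)s"))
handler.addFilter(ContextFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

request_id_var.set("req-1234")
logger.info("processing request")  # req-1234 INFO processing request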

Please don't hijack my Python root logger

· 5 min

With the recent explosion of LLM tools, I often like to kill time fiddling with different LLM client libraries and SDKs in one-off scripts. Lately, I’ve noticed that some newer tools frequently mess up the logger settings, meddling with my application logs. While it’s less common in more seasoned libraries, I guess it’s worth rehashing why hijacking the root logger isn’t a good idea when writing libraries or other forms of reusable code.
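
For reference, the convention from the logging documentation is for a library to log to its own named logger and attach at most a NullHandler, leaving all configuration to the application:

import logging

# Library code: use a named logger; never call logging.basicConfig()
# or attach handlers to the root logger
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())


def do_work() -> None:
    # Emitted only if the application configures logging
    logger.info("doing work")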

TypeIs does what I thought TypeGuard would do in Python

· 4 min

The handful of times I’ve reached for typing.TypeGuard in Python, I’ve always been confused by its behavior and ended up ditching it with a # type: ignore comment.

For the uninitiated, TypeGuard allows you to apply custom type narrowing. For example, let’s say you have a function named pretty_print that accepts a few different types and prints them differently onto the console:

from typing import assert_never


def pretty_print(val: int | float | str) -> None:
    if isinstance(val, int):  # assert_type(val, int)
        print(f"Integer: {val}")

    elif isinstance(val, float):  # assert_type(val, float)
        print(f"Float: {val}")

    elif isinstance(val, str):  # assert_type(val, str)
        print(f"String: {val}")

    else:
        assert_never(val)

If you run it through mypy, the type checker automatically narrows the type in each branch and knows exactly what val is. You can verify the narrowed type in each branch with the typing.assert_type function.
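
TypeIs (added to typing in Python 3.13, available earlier from typing_extensions) brings that same narrowing to custom predicates, and unlike TypeGuard it also narrows in the negative branch. A minimal sketch:

from typing_extensions import TypeIs  # typing.TypeIs from Python 3.13


def is_number(val: int | float | str) -> TypeIs[int | float]:
    return isinstance(val, (int, float))


def describe(val: int | float | str) -> None:
    if is_number(val):
        print(val * 2)  # val: int | float
    else:
        # With TypeGuard this branch wouldn't narrow;
        # with TypeIs, val is str here
        print(val.upper())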

Patching pydantic settings in pytest

· 5 min

I’ve been a happy user of pydantic settings to manage all my app configurations since the 1.0 era. When pydantic 2.0 was released, the settings portion became a separate package called pydantic_settings.

It does two things that I love: it automatically reads the environment variables from the .env file and allows you to declaratively convert the string values to their desired types like integers, booleans, etc.

Plus, it lets you override the variables defined in .env by exporting them in your shell.
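
A minimal sketch of what that looks like with pydantic_settings 2.x (the field names here are hypothetical):

# config.py

from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # Read variables from .env; exported shell variables take precedence
    model_config = SettingsConfigDict(env_file=".env")

    database_url: str = "sqlite:///local.db"
    debug: bool = False  # "true"/"1" in the environment becomes True
    timeout_seconds: int = 30  # "30" becomes 30


settings = Settings()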

Annotating args and kwargs in Python

· 3 min

While I tend to avoid *args and **kwargs in my function signatures, it’s not always possible to do so without hurting API ergonomics, especially when you need to write functions that call other helper functions with the same signature.

Typing *args and **kwargs has always been a pain, since you couldn’t annotate them precisely until recently. For example, if all the positional and keyword arguments of a function had the same type, you could do this:
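
The annotation applies to each individual argument, not to the containers as a whole (the function name here is made up):

def frobnicate(*args: int, **kwargs: int) -> int:
    # Each positional and keyword argument must be an int;
    # args is tuple[int, ...] and kwargs is dict[str, int]
    return sum(args) + sum(kwargs.values())


frobnicate(1, 2, three=3)  # OK
frobnicate("1")  # Flagged by the type checker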