Shell redirection syntax soup

· 4 min

I always struggle with the syntax for redirecting multiple streams to another command or a file. LLMs do help, but beyond the most obvious cases, it takes a few prompts to get the syntax right. When I know exactly what I’m after, scanning a quick post is much faster than wrestling with a non-deterministic kraken. So, here’s a list of the redirection and piping syntax I use the most, with real examples.
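As a taste, here's a sketch of the combinations I reach for most often, using a made-up `noisy` function that writes one line to each stream:

```shell
# Stand-in command that writes one line to stdout and one to stderr
noisy() { echo "out"; echo "err" >&2; }

# Send stdout and stderr to separate files
noisy > out.log 2> err.log

# Send both streams to the same file (redirect stdout first, then
# point stderr at wherever stdout currently goes)
noisy > both.log 2>&1

# Pipe both streams into another command
noisy 2>&1 | wc -l
```

The ordering of `2>&1` is the usual trap: it means "make fd 2 go wherever fd 1 goes *right now*", so it has to come after the stdout redirection, not before.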

HTTP requests via /dev/tcp

· 3 min

I learned this neat Bash trick today: you can make a raw HTTP request through Bash's /dev/tcp pseudo-device, without tools like curl or wget. This came in handy while writing a health check script that needed to make a TCP request to a service.

The following script opens a TCP connection and makes a simple GET request to example.com:

#!/bin/bash

# Open TCP connection to example.com:80 and assign file descriptor 3
# exec keeps /dev/fd/3 open; 3<> enables bidirectional read-write
exec 3<>/dev/tcp/example.com/80

# Send the HTTP GET request to the server (>& redirects to /dev/fd/3)
echo -e \
    "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n" >&3

# Read and print the server's response
# <&3 connects file descriptor 3 to cat's stdin
cat <&3

# Close the file descriptor, terminating the TCP connection
exec 3>&-

Running this will print the response from the site to your console.
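The health-check use case falls out of this naturally: opening /dev/tcp fails when nothing is listening, so the exit status of the redirection itself doubles as the check. A rough sketch, where localhost and port 8080 are placeholders for the actual service:

```shell
#!/bin/bash

# Try to open a TCP connection in a subshell; the exit status tells
# us whether anything is listening (host and port are placeholders)
if (exec 3<>/dev/tcp/localhost/8080) 2>/dev/null; then
    echo "service is up"
else
    echo "service is down"
fi
```

Opening the connection in a subshell means the descriptor is closed automatically when the check finishes.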

The *nix install command

· 2 min

TIL about the install command on *nix systems. A quick GitHub search for the term brought up a ton of matches. I'm surprised I'm only finding out about it now.

Often, in shell scripts I need to:

  • Create a directory hierarchy
  • Copy a config or binary file to the new directory
  • Set permissions on the file

It usually looks like this:

# Create directory hierarchy (-p creates parent directories)
mkdir -p ~/.config/app

# Copy current config to the newly created directory
cp conf ~/.config/app/conf

# Set the file permission
chmod 755 ~/.config/app/conf

Turns out, the install command in GNU coreutils can do all that in one line:
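A sketch, reusing the same hypothetical paths as above (note that `-D` is a GNU extension, so BSD install behaves differently):

```shell
# Create the leading directories (-D), copy the file, and set its
# permissions (-m) in a single command
install -D -m 755 conf ~/.config/app/conf
```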

Here-doc headache

· 3 min

I was working on the deployment pipeline for a service that launches an app in a dedicated VM using GitHub Actions. In the last step of the workflow, the CI SSHs into the VM and runs several commands using a here document in bash. The simplified version looks like this:

# SSH into the remote machine and run commands to deploy the service
ssh $SSH_USER@$SSH_HOST <<EOF
    # Go to the work directory
    cd $WORK_DIR

    # Make a git pull
    git pull

    # Export environment variables required for the service to run
    export AUTH_TOKEN=$APP_AUTH_TOKEN

    # Start the service
    docker compose up -d --build
EOF

The fully working version, using a here-doc, can be found in the serve-init repo.
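One gotcha worth calling out in the snippet above: because the EOF delimiter is unquoted, variables like $WORK_DIR and $APP_AUTH_TOKEN expand on the CI runner before ssh ever sends the script; quoting the delimiter flips that behavior. A local demonstration with cat standing in for ssh:

```shell
NAME="local"

# Unquoted delimiter: $NAME expands *before* the here-doc is handed
# to the command (cat here, ssh in the snippet above)
cat <<EOF
Hello, $NAME
EOF

# Quoted delimiter: the body is passed through verbatim, so $NAME
# would only expand on the receiving end
cat <<'EOF'
Hello, $NAME
EOF
```

The first here-doc prints `Hello, local`; the second prints the literal `Hello, $NAME`.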

The sane pull request

· 3 min

One of the reasons why I’m a big advocate of rebasing and cleaning up feature branches, even when the changes get squash-merged to the mainline, is that it makes the PR reviewer’s life a little easier. I’ve written about my rebasing workflow before and learned a few new things from the Hacker News discussion around it.

While there’s been no shortage of text on why and how to craft atomic commits, I often find those discussions focus too much on VCS hygiene, and the main benefit gets lost in the minutiae. When working in a team setup, I’ve discovered that individual commits matter much less than the final change list.

I kind of like rebasing

· 8 min

People tend to get pretty passionate about Git workflows on different online forums. Some like to rebase, while others prefer to keep the history exactly as it unfolded. Some dislike the extra merge commit, while others love to preserve all the historical artifacts. There's merit to both sides of the discussion. That being said, I kind of like rebasing because I'm a messy committer who:

  • Usually doesn’t care for keeping atomic commits.
  • Creates a lot of short commits with messages like “fix” or “wip”.
  • Likes to clean up the untidy commits before sending the branch for peer review.
  • Prefers a linear history over a forked one so that git log --oneline --graph tells a nice story.

Git rebase allows me to squash my disordered commits into a neat little one, which bundles all the changes with passing tests and documentation. Sure, a similar result can be emulated using git merge --squash feat_branch or GitHub’s squash-merge feature, but to me, rebasing feels cleaner. Plus, over time, I’ve subconsciously picked up the tricks to work my way around rebase-related gotchas.
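As a sketch of that cleanup flow in a throwaway repo (file names and messages are made up), `git commit --fixup` plus `git rebase --autosquash` automates the squashing that an interactive rebase would otherwise do by hand:

```shell
#!/bin/sh
set -e

# Work in a throwaway repo so nothing real is touched
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

# A "real" commit followed by a messy follow-up
echo "v1" > app.conf
git add app.conf
git commit -qm "feat: add config"

echo "v2" > app.conf
git commit -qa --fixup=HEAD

# Fold the fixup into its target commit; GIT_SEQUENCE_EDITOR=true
# accepts the generated todo list without opening an editor
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash --root

git log --oneline   # a single tidy "feat: add config" commit
```

The end result is one commit carrying both changes, which is roughly what I do manually with squash/fixup lines in the interactive todo list.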

Protobuffed contracts

· 4 min

People typically associate Google's Protocol Buffers with gRPC services, and rightfully so. But things often get confusing when discussing protobufs because the term can refer to two different things:

  • A binary protocol for efficiently serializing structured data.
  • A language used to specify how this data should be structured.

In gRPC services, you usually use both: the protobuf language in proto files defines the service interface, and then the clients use the same proto files to communicate with the services.
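As a quick illustration of the language half, a minimal proto file might look like this (the service and message names are made up):

```proto
syntax = "proto3";

package catalog.v1;

// The contract: both the server and every client generate their
// stubs from this same file
service ProductService {
  rpc GetProduct(GetProductRequest) returns (Product);
}

message GetProductRequest {
  int64 id = 1;
}

message Product {
  int64 id = 1;
  string title = 2;
  string description = 3;
}
```

The binary-protocol half then takes over on the wire: messages defined here get serialized into the compact binary encoding that the first bullet describes.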

ETag and HTTP caching

· 7 min

One neat use case for the HTTP ETag header is client-side HTTP caching for GET requests. Along with the ETag header, the caching workflow requires you to fiddle with other conditional HTTP headers like If-Match or If-None-Match. However, their interaction can feel a bit confusing at times.

Every time I need to tackle this, I end up browsing through the relevant MDN docs on ETag, If-Match, and If-None-Match to jog my memory. At this point, I've done it enough times to justify writing it all down.
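The core of the caching flow fits in one exchange; the path and ETag value below are made up:

```
# First request: the server tags the representation
GET /products/1 HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
ETag: "33a64df5"

{ ...body... }

# Later request: the client replays the tag; if the resource hasn't
# changed, the server answers with a bodyless 304 and the cached
# copy gets reused
GET /products/1 HTTP/1.1
Host: api.example.com
If-None-Match: "33a64df5"

HTTP/1.1 304 Not Modified
ETag: "33a64df5"
```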

Crossing the CORS crossroad

· 5 min

Every once in a while, I find myself skimming through the MDN docs to jog my memory on how CORS works and which HTTP headers are associated with it. This is particularly true when a frontend app can’t talk to a backend service I manage due to a CORS error.

MDN’s CORS documentation is excellent but can be a bit verbose for someone just looking for a way to quickly troubleshoot and fix the issue at hand.

Eschewing black box API calls

· 6 min

I love dynamically typed languages as much as the next person. They let us make ergonomic API calls like this:

import httpx

# Sync call for simplicity
r = httpx.get("https://dummyjson.com/products/1").json()
print(r["id"], r["title"], r["description"])

or this:

fetch("https://dummyjson.com/products/1")
  .then((res) => res.json())
  .then((json) => console.log(json.id, json.title, json.description));

In both cases, running the snippet prints something like:

1 'iPhone 9' 'An apple mobile which is nothing like apple'

Unless you’ve worked with a statically typed language that enforces more constraints, it’s hard to appreciate how incredibly convenient it is to be able to call and use an API endpoint without having to deal with types or knowing anything about its payload structure. You can treat the API response as a black box and deal with everything at runtime.