September 16

The many meanings of event-driven architecture — Martin Fowler

In a 2017 talk, Martin Fowler untangled a few concepts for me that often get lumped together under the event-driven umbrella. He breaks event-driven systems into four main types:

Event notification: A system sends out a signal (event) when something happens, but with minimal details. Other systems receive the notification and must request more information if they need it. This keeps things simple and decoupled, but it makes the overall flow harder to trace since the event itself doesn’t carry the full data.

Event-carried state transfer: Events carry all the necessary data upfront, so no extra requests are needed. This simplifies interactions but can make events bulky and harder to manage as the system scales.

Event sourcing: Instead of storing just the current state, the system logs every event that occurs, allowing you to reconstruct the state at any point in time. It’s great for auditing and troubleshooting but adds complexity as the event log grows.

CQRS: Commands (write operations) and queries (read operations) are handled separately, letting each be optimized on its own. It works well for complex domains but introduces more architectural overhead and needs careful planning.

Interestingly, I’ve been using the second one without knowing what it was called.
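To make the distinction between the first two concrete, here’s a minimal sketch in Python. The event types, handlers, and the `orders_api` object are hypothetical examples of mine, not from Fowler’s talk:

```python
from dataclasses import dataclass

# --- Event notification: minimal payload, consumers ask for details ---
@dataclass
class OrderPlaced:
    order_id: str  # just enough to identify what happened

def handle_notification(event: OrderPlaced, orders_api) -> None:
    # The consumer must call back into the source system for the data it needs.
    order = orders_api.get_order(event.order_id)
    print(f"Shipping {len(order['items'])} items to {order['address']}")

# --- Event-carried state transfer: the event carries the full state ---
@dataclass
class OrderPlacedWithState:
    order_id: str
    items: list[str]
    address: str  # everything a consumer needs; no callback required

def handle_state_transfer(event: OrderPlacedWithState) -> None:
    # The consumer never has to contact the source system.
    print(f"Shipping {len(event.items)} items to {event.address}")

handle_state_transfer(OrderPlacedWithState("o-1", ["book", "lamp"], "221B Baker St"))
```

Even at this scale the trade-off shows: the notification stays tiny but couples consumers to the producer’s API, while the state-carrying event is self-sufficient but grows with the domain model.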

September 15

Founder Mode, hackers, and being bored by tech — Ian Betteridge

On a micro scale, I think, there’s still a lot to be excited about. But on the macro level, this VC-Founder monoculture has been stealing the thunder from what really matters—the great technology that should have been a testament to the hive mind’s ingenuity. Instead, all the attention is on the process itself.

Tech has become all Jobs and no Woz. As Dave Karpf rightly identifies, the hacker has vanished from the scene, to be replaced by an endless array of know-nothing hero founders whose main superpower is the ability to bully subordinates (and half of Twitter) into believing they are always right.

September 14

Simon Willison on the Software Misadventures podcast

I spent a delightful two hours this morning listening to Simon Willison talk about his creative process and how LLMs have reshaped his approach.

He shared how he’s become more efficient with his time and writes consistently on his blog, inspired by things like Duolingo’s streaks and Tom Scott’s decade-long run of weekly videos. Another thing I found fascinating is how he uses GitHub Issues to record every little detail of each project he’s working on; this lets him juggle many projects at once without burning out. Simon even pulled together a summary from the podcast transcript that captures some of the best bits of the discussion.

About five years ago, one of Simon’s tweets inspired me to start publishing my thoughts and learnings, no matter how trivial they may seem. My career has benefited immensely from that. Putting your ideas and learnings down in writing seems daunting at first, but it gets easier over time.


September 09

Canonical log lines — Stripe Engineering Blog

I’ve been practicing this for a while but didn’t know what to call it. Canonical log lines are arbitrarily wide structured log messages that get fired off at the end of a unit of work. In a web app, you could emit a special log line tagged with different IDs and attributes at the end of every request. The benefit is that when debugging, these are the logs you’ll check first. Sifting through fewer messages and correlating them with other logs makes investigations much more effective, and the structured nature of these logs allows for easier filtering and automated analysis.
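As a rough sketch of the idea (my own illustration, not Stripe’s implementation), a request handler might accumulate attributes as it works and emit one wide, structured line at the end:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("canonical")

def handle_request(user_id: str, path: str) -> None:
    # Accumulate attributes over the lifetime of the request...
    canonical = {
        "request_id": str(uuid.uuid4()),
        "user_id": user_id,
        "path": path,
    }
    start = time.monotonic()
    try:
        # ...tagging anything worth correlating later while doing the real work...
        canonical["db_queries"] = 3  # e.g. a counter incremented in the data layer
        canonical["status"] = 200
    except Exception as exc:
        canonical["status"] = 500
        canonical["error"] = repr(exc)
        raise
    finally:
        canonical["duration_ms"] = round((time.monotonic() - start) * 1000, 1)
        # ...then emit exactly one wide, structured line per unit of work.
        log.info(json.dumps(canonical))

handle_request("u_42", "/checkout")
```

Because every request produces exactly one line with a consistent shape, you can grep, filter, or aggregate on any attribute without stitching together scattered log messages first.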

Out of all the tools and techniques we deploy to help get insight into production, canonical log lines in particular have proven to be so useful for added operational visibility and incident response that we’ve put them in almost every service we run—not only are they used in our main API, but there’s one emitted every time a webhook is sent, a credit card is tokenized by our PCI vault, or a page is loaded in the Stripe Dashboard.


September 07

Recognizing the Gell-Mann Amnesia effect in my use of LLM tools

It took me a while to recognize the Gell-Mann Amnesia effect shaping how I use LLM tools in my work. When dealing with unfamiliar tech, I’m quick to accept suggestions verbatim; but in a domain I know well, the suggested patches rarely impress me and often get torn to shreds.


September 04

On the importance of ablation studies in deep learning research — François Chollet

This is true for almost any engineering effort: it’s always a good idea to ask whether a design can be simplified without losing usability. Now I know there’s a name for this practice: the ablation study.

The goal of research shouldn’t be merely to publish, but to generate reliable knowledge. Crucially, understanding causality in your system is the most straightforward way to generate reliable knowledge. And there’s a very low-effort way to look into causality: ablation studies. Ablation studies consist of systematically trying to remove parts of a system—making it simpler—to identify where its performance actually comes from. If you find that X + Y + Z gives you good results, also try X, Y, Z, X + Y, X + Z, and Y + Z, and see what happens.

If you become a deep learning researcher, cut through the noise in the research process: do ablation studies for your models. Always ask, “Could there be a simpler explanation? Is this added complexity really necessary? Why?”
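As a trivial illustration of that last point (my own sketch, with X, Y, and Z standing in for whatever components the system has), enumerating the ablation variants is a few lines of Python:

```python
from itertools import combinations

components = ["X", "Y", "Z"]

# Every non-empty subset of components is a candidate ablation to evaluate.
for size in range(1, len(components) + 1):
    for subset in combinations(components, size):
        print(" + ".join(subset))  # X, Y, Z, X + Y, ..., X + Y + Z
        # score = evaluate(build_system(subset))  # hypothetical train/eval step
```

The hard part isn’t generating the variants, of course; it’s budgeting the compute to actually run them and being honest about what the results say regarding where the performance comes from.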


September 01

Why A.I. Isn’t Going to Make Art — Ted Chiang, The New Yorker

I indiscriminately devour almost everything Ted Chiang puts out, and this piece is no exception. It’s one of the most articulate arguments I’ve read on the sentimental value of human-generated artifacts, even when AI can make perfect knockoffs.

I’m pro-LLMs and use them to aid my work all the time. While they’re incredibly useful for a certain genre of tasks, buying into the Silicon Valley idea that they’ll soon replace every type of human-generated content is naive and redolent of the hubris within the tech bubble.

Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices. This might be easiest to explain if we use fiction writing as an example. When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.

Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium. But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art.

Any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.

Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.


August 31

How to Be a Better Reader — Tina Jordan, The NY Times

To read more deeply, to do the kind of reading that stimulates your imagination, the single most important thing to do is take your time. You can’t read deeply if you’re skimming. As the writer Zadie Smith has said, “When you practice reading, and you work at a text, it can only give you what you put into it.”

At a time when most of us read in superficial, bite-size chunks that prize quickness — texts, tweets, emails — it can be difficult to retrain your brain to read at an unhurried pace, but it is essential. In “Slow Reading in a Hurried Age,” David Mikics writes that “slow reading changes your mind the way exercise changes your body: A whole new world will open up, you will feel and act differently, because books will be more open and alive to you.”


August 26

Dark Matter — Blake Crouch

I just finished the book. It’s an emotional rollercoaster of a story, stemming from a MacGuffin that enables quantum superposition in the macro world, bringing the many-worlds interpretation of quantum mechanics to life.

While the book starts off with a bang, it becomes a bit more predictable as the story progresses. Still, I enjoyed how well the author brings to life the dilemmas that access to the multiverse might pose. Highly recommended. I’m already beyond excited to read his next book, Recursion.