
Writing (code) is thinking and other thoughts


I had many things planned for Christmas break, but I spent most of my time learning Rust. I've wanted to learn Rust for a long time but never really put serious effort into it.

But now I have an idea that requires Apache DataFusion, so I started to build it with Claude Code. Very soon, I realised that I couldn't review the code and basically had no clue whether it was good or bad. I can't build like this, so I decided to get my head down and learn Rust. 😄

I have a book tracker application written in Go that I use quite regularly. It's a very simple REST API over SQLite. I decided to rewrite it in Rust as a way to learn the language.

I wrote every single line myself, with very little copy and paste. I used Claude Code a lot, but only to ask questions: I asked it to explain compiler errors or patterns, and used it to brainstorm, but it never wrote any of the code. This experience was revelatory after over 3 months of primarily agentic coding.

And then I got back to work and have been relying heavily on Claude Code for everything.

Writing (code) is thinking

Writing is thinking, and similarly, writing code is thinking. While writing by hand, I was considering, questioning and weighing a million things that I wouldn't have with agentic coding. You start weighing trade-offs and get a better understanding, in real time, of the complexity and tech debt you're accruing. I feel like, long-term, you will produce a more coherent, more debuggable codebase writing code by hand.

It also helped me get a feel for Rust patterns, and understand why things were a certain way.

I had seen chains similar to the following before, but I didn't really grasp the gradual unravelling until I had to write it all by hand:

```rust
let gb = app
    .google
    .get_book(isbn)
    .await                      // Result<Option<Book>, Error>
    .map_err(|e| (...))?        // Error? → return. Ok? → Option<Book>
    .unwrap_or_default();       // None → default. Some → Book
```
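To make the unravelling concrete, here is the same shape written out longhand with explicit `match` arms. This is my own sketch with stand-in types (a synchronous stub client and a `Book` struct, not the real app's types), but it shows what `map_err(...)?` and `unwrap_or_default()` are doing step by step:

```rust
// Stand-in types to keep the sketch self-contained; the real code is async
// and talks to the Google Books API.
#[derive(Default, Debug, PartialEq)]
struct Book {
    title: String,
}

struct GoogleClient;

impl GoogleClient {
    // Returns Ok(Some(book)) for a known ISBN, Ok(None) for unknown ones.
    fn get_book(&self, isbn: &str) -> Result<Option<Book>, String> {
        if isbn == "9780747532743" {
            Ok(Some(Book { title: "Harry Potter".into() }))
        } else {
            Ok(None)
        }
    }
}

fn lookup(client: &GoogleClient, isbn: &str) -> Result<Book, String> {
    // Longhand for .map_err(|e| ...)? — peel Result, early-return on Err.
    let maybe_book = match client.get_book(isbn) {
        Ok(opt) => opt,                              // Option<Book>
        Err(e) => return Err(format!("lookup failed: {e}")),
    };
    // Longhand for .unwrap_or_default() — peel Option, fall back to default.
    let book = match maybe_book {
        Some(b) => b,
        None => Book::default(),
    };
    Ok(book)
}

fn main() {
    let client = GoogleClient;
    let hit = lookup(&client, "9780747532743").unwrap();
    assert_eq!(hit.title, "Harry Potter");
    let miss = lookup(&client, "0000000000").unwrap();
    assert_eq!(miss, Book::default());
}
```

Each combinator in the chain peels off exactly one layer of the `Result<Option<Book>, Error>` type, which is the part that only clicked for me once I had written the expanded form myself.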

Initially I felt handicapped because of how slow I was, but then I slowly came to enjoy the process. I made a resolution to write code by hand, without AI, every 6 months. It is a real "touch the soil beneath you" moment for me lol.

AI democratises "Googling" as a skill

I used to half-joke that writing software is mostly about knowing what to Google. Debugging obscure errors and looking for existing implementations both require you to google with the right keywords, and this is a skill that takes a while to hone.

But right now I am just chucking things over to Claude and Gemini and they are able to give solid answers. Of course I have to verify all of it, but they are pretty spot on most of the time.

A couple of recent prompts I used:


Here is a problem I need to implement: we have a browser SDK that is sending us telemetry like web vitals and performance, for each page. It sends raw URLs along with the telemetry.


This means we have an incoming stream of URLs, and we need to collapse them into a set of URL patterns so we can effectively show the performance. For example, if we have:

  ```
  /books/harry-potter
  /books/the-expanse
  /books/earth-sea
  ....
  ```

And we need to collapse it into: `/books/*`. Similarly for other patterns that might exist.


And ideally we have an upper limit (for ex. 1000) for the patterns, and if we have more than those patterns, we can collapse them down further.

What algorithms or mechanisms can we use to achieve this?

Can you explain to me what the DRAIN log-parsing algorithm is and how it works?

I can ask a lot of follow-up questions, and that is AWESOME: it helps me jump into the specific sections that I don't fully understand. This feels a lot better than having to read 15 different Medium articles and YouTube videos.
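For what it's worth, the URL-collapsing idea in the first prompt can be sketched very naively. This is my own toy version (not anything the models suggested, and nowhere near Drain): group URLs by their first path segment and, if a group has more than a threshold of distinct children, collapse it to a wildcard:

```rust
use std::collections::{HashMap, HashSet};

// Naive sketch: collapse /seg/... URLs to /seg/* when the segment has
// more than `threshold` distinct children; otherwise keep them as-is.
fn collapse(urls: &[&str], threshold: usize) -> Vec<String> {
    let mut children: HashMap<&str, HashSet<&str>> = HashMap::new();
    for url in urls {
        let mut parts = url.trim_start_matches('/').splitn(2, '/');
        let head = parts.next().unwrap_or("");
        let rest = parts.next().unwrap_or("");
        children.entry(head).or_default().insert(rest);
    }

    let mut patterns = Vec::new();
    for (head, rest) in &children {
        if rest.len() > threshold {
            // High cardinality: collapse to a wildcard pattern.
            patterns.push(format!("/{head}/*"));
        } else {
            for r in rest {
                if r.is_empty() {
                    patterns.push(format!("/{head}"));
                } else {
                    patterns.push(format!("/{head}/{r}"));
                }
            }
        }
    }
    patterns.sort();
    patterns
}

fn main() {
    let urls = [
        "/books/harry-potter",
        "/books/the-expanse",
        "/books/earth-sea",
        "/about",
    ];
    let patterns = collapse(&urls, 2);
    assert_eq!(patterns, vec!["/about".to_string(), "/books/*".to_string()]);
}
```

A real solution would need to recurse into deeper path segments and handle the global pattern budget from the prompt, which is where something tree-based like Drain comes in.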

AI is a rubber duck that talks back

When dealing with a codebase, I always have a Claude Code session open, and whenever I get stuck or confused, I use Claude as a rubber duck. The explanations are almost always pretty spot on.

A recent question I asked because I couldn't really figure it out quickly (PR):

In @pkg/trie/trie.go, we usually call getOrCreateWildcard with depth, but in hardCollapseNode we call it with depth+1, why?

Reviews are surprisingly good

In my personal projects, I get code reviews from both Claude and Copilot, and I find them quite useful. Of course, I ignore about 80% of the suggestions, but they have helped me catch multiple logic bugs. I am slowly enabling them on more and more work repositories.

The low hit rate might sound like spam, but honestly it isn't. If anything, it reminds me that I cannot just rely on AI reviews 😄

I need to understand what I ship

My core workflow of using Claude Code hasn't changed. I build in small, reviewable increments. And now I am coming across things like Ralph Wiggum loops that basically give the agent a big PRD, ask it to implement it, and come back to a fully finished, working feature and a massive PR.

My first questions are: who will review this? Do you fully understand it?
But then it feels like the answer is: do we need to carefully review this? It works!

I am confused and conflicted, but I do know this: Attaching my name to a piece of code means I can vouch for it, and I cannot blindly vouch for AI generated code.

Even for my personal projects, I know that in 6 months I'll have to come back to them, and I'd rather have a codebase I can understand. This is even more acute for projects that have others working on them.

Debt accrual is real

This has always been true for software engineering, but it is even more true for AI-generated code. As we generate more and more code, carefully reviewing it and ensuring quality becomes hard. At some point, fatigue sets in, and we start lowering the standards. I think that's OK, as long as you regularly keep an eye on the quality and the tech debt, and pay it back.

If this is missing, you'll very quickly end up with a mess that gets harder and harder to recover from.

Code is clay?

If AI gets good enough that we can build entire projects with it, what happens to software engineers? What happens to me? This is a question I have been grappling with for a while, and this article phrased it very well: Code is Clay.

When the industrial revolution came for pottery, factories started pumping out ceramics. Plates got cheap. Mugs became disposable. You'd think clay would have disappeared. Why bother with the slow, messy, manual process when machines could do it faster?
But clay didn't go away. Ceramics studios are everywhere now. People pay good money to throw pots on weekends. Kerri and I are proof. The craft got more valuable once it wasn't necessary anymore. When you don't have to make something by hand, choosing to makes it mean something.

I also enjoy working on new, novel and hard problems. Like building a high-performance distributed database. I don't think AI can fully automate that away anytime soon. If anything, AI will help me do the easy bits, so I can focus on getting the really hard bits right.

Am I behind?

Last July, I wrote down how I build with AI:

Thoughts on building with AI

And not much has changed since. I still build in small, reviewable increments and stack my PRs so they're easy to review.

But today, the world of harnesses and Ralph Wiggum loops has taken over. People are using frameworks like beads to give AI larger and larger tasks, and are opening massive PRs that work. They're writing their own harnesses, using local LLMs, etc.

I feel like I'm lagging behind, but at the same time, I can't get behind these approaches yet.

Code is craft for me. But I wonder: can I put together processes that produce stacked PRs for me, where the output is craft-like?