Post
Speed is not everything

My thoughts turn to the analogy of a car.

A car can travel at 140mph. But if I went that fast I’d drive into a wall. Driving at the maximum speed of the car is counter-productive. Instead, I have to remain at human-compatible speed.

AI codes at 140mph. I believe that’s not human-compatible. Perhaps we will find a way to harness that speed; to make it human-compatible. More likely we will find that certain tasks can go that fast, and others can’t. 140mph is okay on a race track but not on a back street.

We do not have to force ourselves to adapt to the speed AI can generate code at. We do not have to travel at theoretical maximums. Driving at normal speeds is still much faster than walking. In the same way, we can write code faster with AI help, even if that help is not at top speed.

If we don’t insist on top speed, AI code might even be better than we’d write alone.

Wouldn’t that be nice?

Link
Yes, and…

I’ve made this point in work Slack a few times. Carson Gross says it better though:

Some people say that the move from high level languages to AI-generated code is like the move from assembly to high level programming languages.

I do not agree with this simile.

Compilers are, for the most part, deterministic in a way that current AI tools are not. Given a high-level programming language construct such as a for loop or if statement, you can, with reasonable certainty, say what the generated assembly will look like for a given computer architecture (at least pre-optimization).

The same cannot be said for an LLM-based solution to a particular prompt.

I think it goes a bit deeper than determinism, however. In some sense, high-level code, assembly, and machine code are merely alternate representations of the same thing. They encode the same information. If the CPU could directly run Rust code, the same thing should happen as when it runs the compiled machine code.

Compilation is translation that, at best, preserves existing information; it may even discard some. In contrast, LLMs create new information from their inputs. Lots of it. This is what makes probabilistic generation a very different beast to deterministic compilation of a high-level language.

I can see where the simile comes from — “human language is a higher level of abstraction” — but it’s not simply a higher level of abstraction. It is a completely new mechanism to create programs. Instead of translation, it’s generation.

Code produced today will differ from code produced yesterday for the exact same prompt and model. And that non-determinism is the essential aspect of the LLM paradigm; it gives the approach its excitement, its power and its weakness.

This is why we don’t know what to do with it. If it were merely a higher level of abstraction, well, we’ve done that a hundred times before.

Post
How I use AI in early 2026

My last post captured my feelings and thoughts on AI. It’s also worth capturing how I use AI, specifically GenAI (as if that needs saying at this moment).

If I were to place myself on the Steve Yegge Gas Town 8 Point Scale of AI programming, I’d place myself between 5 and 6. I run Claude Code in the background and regularly give Claude deep research assignments.

At work, IBM has only just granted me access to its coding agent, IBM Bob. But my experience with Claude Code at home has helped me get up to speed quickly.

Read More…

Post
My feelings on AI; scribbled for my future self

The pace of change in the last year has been relentless. I write this post not for any kind of thought-leadership, but instead so I can reference it in a few months or a few years, and see how I feel now.

At home I use Claude Code a reasonable amount. At work I have recently got access to IBM Project Bob. I use the shell version of Bob. Claude Code is more advanced, but Bob does the job. In short, I finally have a coding agent at work. Before Bob, IBM disallowed coding agents, and so I was limited to using Claude online, for tasks like research and coding queries.

Read More…

Post
Found 2022 thoughts on AI

Sometimes you come across things that you wrote not so long ago that merely serve to reinforce how fast the world has moved.

Such as this from November 2022, just as ChatGPT was released to the world and I was getting my head around foundation models and what they were:

The connection that I see is that perhaps a foundation model trained for text summarisation could be specialised on a corpus of notes and used to summarise search results, progressing us beyond the “list of results”. Instead of answering the question “show me the notes containing X”, we can more directly ask our likely underlying question “what have I learned about X over time”.

I imagine an answer that summarises, over time, one’s notes, including references to source notes – for digging deeper – as it goes. In essence, can one use machine learning as a virtual librarian for one’s notes? While text summarisation is a poor man’s librarian, very few people are rich enough to afford a real human to look through their old notebooks! I can see this ability to summarise becoming a cornerstone of a workflow, and a way to start combining the best parts of incremental and evergreen note-taking models.

I can imagine taking this a step further with more advanced versions of models like ChatGPT, where one could hold a conversation with the model to request more information on given topics in the summary. In this way, we take a step towards the types of interactions we see within shows like Star Trek, with a conversation tailored to one’s immediate needs.

How much more we can do today, just three years later: I have been using Claude to write a vector search for my Obsidian notes. AI is not just writing a summary, but is coding the application to use AI to write a summary 🙃
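The core of such a vector search is simple: embed each note, embed the query, and rank notes by similarity. Here is a toy sketch of that shape; the `embed` function below is a bag-of-words stand-in (my hypothetical example, not the actual implementation), where a real version would call an embedding model instead:

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words vector over word tokens.
    # A real notes search would replace this with a model-generated
    # embedding; everything else stays the same shape.
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def search(notes: list[str], query: str, top_k: int = 3) -> list[str]:
    # Rank all notes by similarity to the query; return the best matches.
    q = embed(query)
    return sorted(notes, key=lambda n: cosine(embed(n), q), reverse=True)[:top_k]


notes = [
    "Evergreen notes should be atomic and densely linked",
    "Incremental note-taking captures fleeting thoughts quickly",
    "Vector search ranks notes by semantic similarity to a query",
]
print(search(notes, "how does vector search rank notes?", top_k=1))
```

Swap `embed` for real embeddings and point it at a notes folder, and you have the skeleton of the virtual librarian imagined above.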