How does SSH’s ProxyCommand actually work?

I have been writing support for SSH’s ProxyCommand into a product at work over the past month or so. I find protocols fascinating, particularly those with the age and pedigree of SSH.

ProxyCommand is an interesting corner in the set of (de facto) SSH standards. It is used to establish a secure connection to a destination host that cannot be reached directly from the local machine, most commonly by tunnelling through a bastion host. It specifies a command that your SSH client executes to create a proxied connection — hence the name — to the destination host through the bastion.
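
To make that a little more concrete, here is a minimal sketch, in Python rather than OpenSSH’s actual C, of what a client does when ProxyCommand is configured. The host names db01 and bastion are placeholders of mine, but the shape is the important part: the client never opens a TCP connection to the destination itself. It runs the configured command and treats the child process’s stdin and stdout as the transport.

    # A toy illustration, not OpenSSH source: with ProxyCommand set, the client
    # spawns the command and speaks the SSH protocol over the child's pipes.
    import subprocess

    # Rough equivalent of this ~/.ssh/config entry (placeholder host names):
    #   Host db01
    #       ProxyCommand ssh -W %h:%p bastion
    proxy = subprocess.Popen(
        ["ssh", "-W", "db01:22", "bastion"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    # From here the client reads and writes bytes exactly as it would on a
    # socket. The first step of the SSH protocol is exchanging version
    # banners (RFC 4253), so that is all this toy client does.
    proxy.stdin.write(b"SSH-2.0-ToyClient_0.1\r\n")
    proxy.stdin.flush()
    print(proxy.stdout.readline())  # the destination sshd's banner
    proxy.terminate()

The ssh -W %h:%p form asks the bastion’s own ssh to pipe its stdin and stdout through to the destination’s SSH port, which is why the banner that comes back in the sketch above belongs to the destination host, not the bastion.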

I’m writing about it because my experience over the last few months suggests that ProxyCommand can be slightly mind-bending. Certainly it took me a little while to wrap my head around what is going on.

So let’s take a look at how it works.

Read More…

Gradually, then Suddenly (AI thresholds)

Gradually, then Suddenly: Upon the Threshold is a good piece to contrast with my previous link to a post that considered the evidence that LLMs may be plateauing:

We know AI is a general purpose technology - it will have wide-ranging effects across many industries and areas of our lives. But it is also flawed and prone to errors in some tasks, while being very good at others. Combine this jagged frontier of LLM abilities with their widespread utility and the concept of capability thresholds and you start to see the development of LLMs very differently. It isn’t a steady curve but a series of thresholds that, when crossed, suddenly and irrevocably change aspects of our lives.

Evidence that LLMs are reaching a point of diminishing returns

In Evidence that LLMs are reaching a point of diminishing returns - and what that might mean, Gary Marcus shows evidence that the widespread view that AI capability is increasing exponentially may be ill-founded:

And here’s the thing – we all know that GPT-3 was vastly better than GPT-2. And we all know that GPT-4 (released thirteen months ago) was vastly better than GPT-3. But what has happened since?

I could be persuaded that on some measures there was a doubling of capabilities for some set of months in 2020-2023, but I don’t see that case at all for the last 13 months.

Instead, I see numerous signs that we have reached a period of diminishing returns.

For the opposing viewpoint, get a coffee (or perhaps several) and read the much longer Situational Awareness.

Journal July 2024: Helix, dprint, ClickHouse and tree shapes

A few quick-fire notes that might be of interest to others now, and to myself in the future.

When I started using the Helix editor a year ago, I wasn’t sure how long I’d stick with it. And yet here I am writing this post in Helix, and still using it at work. It’s crashed four or five times — in a year — but has overall proven very stable and capable. I think its development progress is a bit slower than I’d like, but really, I’m very happy with the editor. It starts instantly, LSP+tree-sitter still proves a winning combination, and the improvements that have arrived are solid.

One thing I’ve been searching for is a fast formatter for web languages, specifically the ones used in Hugo and Jekyll sites: Markdown, templated HTML and CSS in the main. Tools like Prettier tend to be noticeably slow to kick in and format if one isn’t willing to pay the price of a constantly running server. I’ve been using deno fmt for a while for Markdown, but it doesn’t do CSS or HTML. So now I’m trying dprint, which has inbuilt formatting for all three languages I wanted. It turns out that deno fmt actually uses some dprint formatters under the hood, specifically for Markdown. I like finally having a CSS formatter, although since I moved to Tailwind this has been less important. (I still really like Tailwind.)

In August 2023’s journal, I mentioned using ClickHouse in a PoC. That PoC recently went into production, and after our pre-production ramp-up we now have over 100TB of data stored in ClickHouse. We ingest more than a billion rows a day. Throughout our build-out, ClickHouse has continued to impress me, coping smoothly with each bump in data volume. Querying has remained efficient. We may need to bump our hardware a bit as we start using it more in earnest, but the simple, vertically-scaled, replicated architecture we are using seems solid 🤞

We went for a walk in a small piece of woodland near Bristol today, Leigh Woods. I loved the shapes within the branches of this tree:

A neat way of thinking about generative AI for your products

I think this is one of the more concise tellings of how to think about where the current crop of LLM-based AI models can be useful in a product:

There is something of a trend for people (often drawing parallels with crypto and NFTs) to presume that [incorrect answers] means these things are useless. That is a misunderstanding. Rather, a useful way to think about generative AI models is that they are extremely good at telling you what a good answer to a question like [the one you asked] would probably look like. There are some use-cases where ‘looks like a good answer’ is exactly what you want, and there are some where ‘roughly right’ is ‘precisely wrong’.

Building AI products — Benedict Evans

So, the question to ask when looking for a good product fit: is the upside from shortening the time it takes to get something that looks right (and can be fixed up) greater than the downside of inaccuracy or downright falsehoods?

Some other recommended reading from the same author:

  • Apple intelligence and AI maximalism — Benedict Evans

    But meanwhile, if you step back from the demos and screenshots and look at what Apple is really trying to do, Apple is pointing to most of the key questions and points of leverage in generative AI, and proposing a thesis for how this is going to work that looks very different to all the hype and evangelism.

  • Looking for AI use-cases — Benedict Evans

    We’ve had ChatGPT for 18 months, but what’s it for? What are the use-cases? Why isn’t it useful for everyone, right now? Do Large Language Models become universal tools that can do ‘any’ task, or do we wrap them in single-purpose apps, and build thousands of new companies around that?