A neat way of thinking about generative AI for your products

I think this is one of the more concise explanations of how to think about where the current crop of LLM-based AI models can be useful in a product:

There is something of a trend for people (often drawing parallels with crypto and NFTs) to presume that [incorrect answers] means these things are useless. That is a misunderstanding. Rather, a useful way to think about generative AI models is that they are extremely good at telling you what a good answer to a question like [the one you asked] would probably look like. There are some use-cases where ‘looks like a good answer’ is exactly what you want, and there are some where ‘roughly right’ is ‘precisely wrong’.

Building AI products — Benedict Evans

So, the question to ask when looking for a good product fit: does the upside of shortening the time it takes to get something that looks right (and can be fixed up) outweigh the downside of inaccuracies or outright falsehoods?
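One way to make that trade-off concrete is as a rough expected-value comparison. Here's a minimal sketch of that framing; the function, its parameters, and the example numbers are all illustrative assumptions of mine, not anything from Evans' article:

```python
# Rough expected-value sketch of the product-fit question above.
# All parameters and numbers are hypothetical; plug in your own estimates.

def worth_using_ai(
    minutes_saved_per_task: float,  # time saved when the draft "looks right"
    p_good_enough: float,           # fraction of outputs that are fixable/usable
    minutes_to_fix: float,          # average cleanup cost for a usable draft
    cost_of_falsehood: float,       # damage (in minutes-equivalent) when a
                                    # wrong answer slips through
    p_falsehood_ships: float,       # chance a bad output reaches a user
) -> bool:
    """Return True if the expected upside outweighs the expected downside."""
    upside = p_good_enough * (minutes_saved_per_task - minutes_to_fix)
    downside = p_falsehood_ships * cost_of_falsehood
    return upside > downside

# Drafting marketing copy: cheap to review, small blast radius when wrong.
print(worth_using_ai(30, 0.8, 5, 60, 0.02))      # True  -> plausible fit

# Quoting legal or medical facts: very expensive when wrong.
print(worth_using_ai(30, 0.8, 5, 10_000, 0.02))  # False -> poor fit
```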

Some other recommended reading from the same author:

  • Apple intelligence and AI maximalism — Benedict Evans

    But meanwhile, if you step back from the demos and screenshots and look at what Apple is really trying to do, Apple is pointing to most of the key questions and points of leverage in generative AI, and proposing a thesis for how this is going to work that looks very different to all the hype and evangelism.

  • Looking for AI use-cases — Benedict Evans

    We’ve had ChatGPT for 18 months, but what’s it for? What are the use-cases? Why isn’t it useful for everyone, right now? Do Large Language Models become universal tools that can do ‘any’ task, or do we wrap them in single-purpose apps, and build thousands of new companies around that?
