Evidence that LLMs are reaching a point of diminishing returns

In Evidence that LLMs are reaching a point of diminishing returns - and what that might mean, Gary Marcus argues that the widespread view that AI capability is increasing exponentially may be ill-founded:

And here’s the thing – we all know that GPT-3 was vastly better than GPT-2. And we all know that GPT-4 (released thirteen months ago) was vastly better than GPT-3. But what has happened since?

I could be persuaded that on some measures there was a doubling of capabilities for some set of months in 2020-2023, but I don’t see that case at all for the last 13 months.

Instead, I see numerous signs that we have reached a period of diminishing returns.

For the opposing viewpoint, get a coffee (or perhaps several) and read the much longer Situational Awareness.
