In Evaluating conversations with ChatGPT I wondered how much we can rely on AI to help us do stuff when it has a somewhat gung-ho relationship with the truth. I came across the beginnings of the academic research into this area in Lin et al., 2021. I also found a more recent paper, Wei et al., 2022, that discusses the ways in which increased model scale has produced unexpected step changes in this area and others.
Things are improving shockingly quickly.
I enjoyed, and found a kind of solace in, All Human Systems are Enormous Trash Fires.
Realizing this can be revelatory. Once you recognize that all human systems are enormous trash fires, you stop trying to figure out how to switch to a system that isn’t an enormous trash fire, since they don’t exist. Instead, you ask better questions about your current trash fire. Like, “Am I doing everything I can to contain this enormous trash fire, even though I know it will never go out?”; “Do the people in charge recognize that this whole place is an enormous trash fire?”; and, most importantly, “Am I surrounded by a team of firefighters or a team of arsonists?”
We’re imperfect beings in the extreme, and the organisations we create are as often the sum of the imperfections as they are of our better attributes. But that doesn’t mean we can’t look to leave things a little better each time we step away.
Leaving the enormous trash fire functioning better, just a bit.
When using ChatGPT, I had the idea of asking it to summarise an article. Seeing it do well, I wondered about other uses for summarisation. One that struck me was using generative AI to improve how we interact with search, for example in apps like Obsidian or Evernote.
It went like this: search hasn’t changed much in a long time. We’ve got a bit better at ranking results, but the experience of search is still a list of results, each of which must be examined to see whether it answers the question. What if, instead, the results could be presented as a summary? This would be particularly useful for queries whose underlying goal is “tell me what I know about X”.
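To make the idea concrete, here’s a minimal Python sketch of what summarised search over a personal note store could look like. The note contents, the naive keyword matching that stands in for a real search index, and the model name are all assumptions for illustration; an actual integration would sit on top of Obsidian’s or Evernote’s own search rather than this toy lookup.

```python
# A minimal sketch of "summarised search" over a personal note store.
# The notes, the keyword matching, and the prompt are illustrative
# assumptions, not how Obsidian or Evernote actually work.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

notes = {
    "gardening.md": "Tomatoes need full sun, deep watering, and staking...",
    "travel-japan.md": "Kyoto in autumn: book the ryokan early...",
}


def search_notes(query: str) -> list[str]:
    """Naive stand-in for a real search index: keyword match on note text."""
    terms = query.lower().split()
    return [text for text in notes.values()
            if any(term in text.lower() for term in terms)]


def summarised_search(query: str) -> str:
    """Instead of returning a list of hits, ask a model to synthesise them."""
    hits = search_notes(query)
    if not hits:
        return "No matching notes."
    prompt = (
        f"Question: {query}\n\n"
        "Summarise what the following notes say about this, using only "
        "what is in the notes:\n\n" + "\n---\n".join(hits)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(summarised_search("what do I know about growing tomatoes?"))
```

The answer-shaped output is the point: for “tell me what I know about X” queries, a paragraph synthesised from the matching notes saves scanning each hit in turn.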
For the longest time – well, since before 2010, so over thirteen years, which is pretty much forever in internet time – this site has used extremely old-school CSS. Almost everything in the stylesheet would likely have been recognisable to anyone visiting the CSS Zen Garden back in 2003. I think float was about the most modern property in use. The primary CSS file, centred.css, has been adapted a couple of times to tweak the design, notably to create a mobile version of the site in 2011, but the core has remained static for a long time.
I found The micromanager’s dilemma a fascinating and valuable read. Matthew Ström applies game theory to argue that micromanagement is, potentially, a valuable tool in one’s leadership toolbox.
In this essay, I’ll show you that micromanagement isn’t just a nagging habit; it’s an inevitability. That’s the paradox: micromanagement is both bad management practice and a key component of the best management strategies.
The analysis is by no means exhaustive, and I found some of the inferences less than watertight. It’s certainly not a mathematical proof of an optimal management style! But the article is a great example of taking a mental model from one’s library, applying it to a problem, and using it to great effect. While it talks specifically about management, I feel there are still lessons I can draw for my own, more technical, leadership.