Maggie Appleton talks about something close to my heart: how can we use AI to help us think, rather than do the thinking for us?
> But can’t we add a smidgeon of the harsh professor attitude into our future assistants? Or at least the option to engage it?
>
> Sure, we can do this manually, like I did with Claude. But that’s asking a lot of everyday users. Most of whom don’t realise they can augment this passive, complimentary default mode. And who certainly won’t write the optimal prompt to elicit it – one that balances harsh critique with kindness, questions their assumptions while still being encouraging, and productively facilitates a challenging discussion. Putting the onus on the user sidesteps the problem.
>
> Professor Bell and I are both frustrated that there is no hint of this critical, questioning attitude written into the default system prompt. Models are not currently designed and trained with the goal of challenging us and encouraging critical thinking.
>
> – A Treatise on AI Chatbots Undermining the Enlightenment
I really enjoy her “harsh professor” prompt, and am considering whether to use it in rapport.
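If rapport ends up supporting custom system prompts, wiring one in might look something like the minimal sketch below. This assumes the Anthropic Python SDK; the `HARSH_PROFESSOR` text is my own rough paraphrase of the idea, not Appleton's actual prompt, and the model name is just a placeholder.

```python
# A minimal sketch: overriding the default assistant persona with a
# "harsh professor" style system prompt via the Anthropic Python SDK.
import anthropic

# My paraphrase of the harsh-professor idea, not Appleton's exact wording.
HARSH_PROFESSOR = (
    "You are a demanding but fair professor. Challenge the user's "
    "assumptions, point out weaknesses in their reasoning, and ask "
    "probing follow-up questions. Be direct, but never cruel."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder; any current model works
    max_tokens=1024,
    system=HARSH_PROFESSOR,  # persona lives here, not in the user message
    messages=[{"role": "user", "content": "Here's my essay draft: ..."}],
)
print(response.content[0].text)
```

Keeping the persona in the `system` parameter rather than pasting it into the first user message means it persists across turns without the user having to restate it, which is exactly the burden Appleton argues everyday users shouldn't have to carry.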