Late May Journal: building things I said I didn’t need
I spent my “projects time” in the latter half of May working on my AI apps: Rapport, codeexplorer, and a bit on my other ai-toys.
First off, after saying that I wasn’t sure whether it was worth adding tool support to Rapport, I ended up going all the way and adding support for connecting to local MCP servers.
Second, I decided that codeexplorer deserved to graduate to its own repository. It felt like it had outgrown being part of ai-toys.
Finally, I wrote a streamlit UI around the image generation capabilities of GPT-4o. No more “you must wait 15 hours to make more pictures of gigantic anime cats” error messages in ChatGPT for me!
Rapport
Looking at the last few weeks of commits to Rapport, it seems like I did quite a bit. 30 commits in 20 days.
I set up the code in Rapport so that you can run it from `uv` with:

```shell
uv run --no-project --isolated \
    --with https://github.com/mikerhodes/rapport.git \
    rapport
```

The best thing about this is that I can run an instance of Rapport from the known-good version on `master` on GitHub, so I can ask Rapport questions while I’m breaking my local working copy building new stuff 😂.

I’m toying with publishing Rapport as a proper package, but feel a little guilty about using PyPI’s resources just to make this command look neater.
Adding MCP support took quite a while, over a week. Because the MCP client I chose used it, I had to learn Python’s asyncio, which I’d not worked with before.
I made a lot of quality of life improvements. My favourite was grouping together uploaded images into rows rather than having them in individual chat message blocks, which needlessly took up a lot of vertical space.
I added the new Claude 4 models, and liberally stole from the Claude prompts to improve the chat “personality” of Rapport.
MCP
Adding MCP to Rapport involved a lot of changes.
- First, I had to coalesce contiguous chat messages from the same conversational “turn”, so that tool calls and responses could be collapsed together into a single UI element and displayed inline with the assistant messages. Implementing the same for user-provided messages and uploaded files really polished the chat display experience.
- Next, I had to build the tool support. Instead of hard-coding tools, I decided early on to use MCP. This allows the user (mostly me, of course, given I think I’m Rapport’s only user!) to add tools without changing Rapport code, and to pull in MCP servers from elsewhere.
- I used fastmcp for the client library. Most MCP libraries are server libraries rather than clients, and those that do have clients tend to hide them away in the corners of their documentation, so it took a little while to find a few client libraries and choose between them. I guess this makes sense: creating servers must be a lot more common than writing clients.
- I had to add UX to configure MCP servers that Rapport would connect to. In the end, I went with plain JSON, copying the format from Claude Desktop.
- While I was at it, I refactored the code quite a bit, mostly pulling large code files apart into smaller ones. For example, each of the adapters for different AI providers now has its own file.
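The message-coalescing step above can be sketched with `itertools.groupby`; the message shape here is my invention for illustration, not Rapport’s actual types:

```python
from itertools import groupby

# Hypothetical message shape; Rapport's real chat history types differ.
messages = [
    {"role": "user", "text": "hi"},
    {"role": "assistant", "text": "calling tool..."},
    {"role": "assistant", "text": "tool result: 42"},
    {"role": "assistant", "text": "the answer is 42"},
    {"role": "user", "text": "thanks"},
]

def coalesce(msgs):
    """Group contiguous messages with the same role into one display block."""
    return [
        {"role": role, "texts": [m["text"] for m in group]}
        for role, group in groupby(msgs, key=lambda m: m["role"])
    ]

blocks = coalesce(messages)
# Five messages collapse to three blocks: user, assistant (x3), user.
```

Each block can then be rendered as a single UI element rather than one element per message.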
Python asyncio
asyncio is an event-loop coroutine library in Python 3. It’s similar to gevent (which I’d used before) and works much like Node.js, multiplexing coroutines onto a single thread. As coroutines do IO, they are put to sleep and other coroutines are run. Like Node, it can even use libuv under the hood (via the drop-in uvloop event loop). It’s a pretty standard way to do things these days.
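A tiny example of that multiplexing, using nothing but standard asyncio:

```python
import asyncio

# Two coroutines sharing one thread: while one awaits (simulated IO),
# the event loop runs the other.
async def worker(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # yields control back to the event loop
    return f"{name} done"

async def main() -> list:
    # gather() runs both concurrently and returns results in argument order.
    return await asyncio.gather(worker("a", 0.02), worker("b", 0.01))

results = asyncio.run(main())  # ['a done', 'b done']
```

Total runtime is roughly the longest delay, not the sum, because the sleeps overlap.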
I made the classic mistake of trying to hack this out to begin with. I’ve not really worked with `async`/`await` APIs much, having mostly worked in older Python and Go. So I kinda just sprinkled `asyncio.run()`, `await`, and `async` around the place and it mostly worked. But other times it spewed huge tracebacks to the console and mangled things in nasty-looking ways.
Eventually I took the hint, and consumed two things. First, a series of posts from a BBC team, starting with Python Asyncio Part 1 – Basic Concepts and Patterns | cloudfit-public-docs. Next I sat in bed with a cup of tea and read through a whole bunch of the asyncio docs. After that things became much clearer.
The issue I faced was that streamlit code is synchronous (streamlit runs your code in its own thread). Interacting with async code from synchronous code is somewhat awkward. In the end I wrote a small worker thread that ran its own asyncio event loop. Using this, I could submit fastmcp client work onto the background event loop instead of firing up a new event loop every time I wanted to run some async code (which is what `asyncio.run()` does). I’m not sure it’s better, but it felt cleaner 🤷.
It’s pleasingly simple (albeit pretty ungood with regard to error handling so far):

```python
import asyncio
import threading
from concurrent.futures import Future
from typing import Any, Coroutine


class AsyncWorker:
    """
    Create a simple worker thread with its own asyncio loop so we
    can safely call async code in a single event loop.
    """

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self._start_loop, daemon=True)
        self.thread.start()

    def _start_loop(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def submit(self, coro: Coroutine) -> Any:
        # Schedule the coroutine on the worker's loop; block until done.
        future: Future = asyncio.run_coroutine_threadsafe(coro, self.loop)
        return future.result()

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
        self.thread.join()
```
The error handling in the rest of the tools code isn’t too bad, however. As the `submit` method mostly throws exceptions back to the calling code to be handled, it turns out okay in the end.
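A sketch of why that works: `asyncio.run_coroutine_threadsafe` wraps the coroutine’s result, or its exception, in a `concurrent.futures.Future`, so calling `.result()` re-raises the exception in the submitting thread. A minimal standalone demo of the same pattern (no Rapport code):

```python
import asyncio
import threading

# A bare loop-in-a-thread, like AsyncWorker without the class wrapper.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def ok() -> int:
    return 42

async def boom():
    raise ValueError("tool call failed")

# Results come back across the thread boundary...
result = asyncio.run_coroutine_threadsafe(ok(), loop).result()

# ...and so do exceptions, re-raised by .result() in this thread.
caught = None
try:
    asyncio.run_coroutine_threadsafe(boom(), loop).result()
except ValueError as e:
    caught = e

loop.call_soon_threadsafe(loop.stop)
```

So the caller of `submit` can use ordinary `try`/`except` around it.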
codeexplorer
There were two main changes in codeexplorer world.

First, it got useful enough that I gave it a repository of its own at codeexplorer. Again, this gave me the opportunity to structure the code so it can be run with `uv`:
```shell
uv run \
    --isolated \
    --with git+https://github.com/mikerhodes/ai-codeexplorer \
    codeexplorer \
    --provider anthropic \
    --allow-edits \
    --chat \
    --task "Please update the README for this project" \
    .
```
Admittedly, you’ll want to alias that in your shell rather than type it out every time!
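Something like this in your shell config does the trick (the alias name and rc file are my choice, not part of the project):

```shell
# Hypothetical alias; put it in ~/.zshrc or ~/.bashrc.
# Task, provider, and other flags are then passed on the command line.
alias codeexplorer='uv run --isolated \
    --with git+https://github.com/mikerhodes/ai-codeexplorer \
    codeexplorer'
```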
I enjoyed getting codeexplorer to write its own `README.md` and `DEVELOPING.md`, although I did feel the need to make them a bit less happy-clappy. Once you’ve worked with AIs for a while, it becomes pretty easy to spot AI writing. It’s pretty nice and clear, but there are obvious giveaways like Bulleted Lists That Start With Bold Terms. And just the way the writing works; it feels … too smooth somehow. Anyway, “too smooth” isn’t actually that bad for a `DEVELOPING.md`!
I also refactored codeexplorer to add a chat loop. If you run it with `--chat` then you can further prompt the model after it’s completed a task — for example, to get it to tweak some code it’s just written in response to an error.
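The loop itself can be tiny. A sketch with a stand-in for the real model call (`complete`, `chat_loop`, and the message shape are my invention, not codeexplorer’s actual code):

```python
# Minimal chat loop in the spirit of codeexplorer's --chat mode.
def complete(history: list) -> str:
    # Stand-in for a real model call: echo the last user message.
    return "echo: " + history[-1]["content"]

def chat_loop(prompts: list) -> list:
    """Keep appending user prompts and model replies to one shared history."""
    history: list = []
    for prompt in prompts:  # the real tool would read input() in a while loop
        history.append({"role": "user", "content": prompt})
        history.append({"role": "assistant", "content": complete(history)})
    return history

h = chat_loop(["fix the bug", "now add a test"])
```

The key point is that follow-up prompts see the whole history, so the model can revise its earlier work.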
Overall, codeexplorer is definitely still really simple. Aider, which I’ve also used, is way fancier and more capable. But codeexplorer hews true to its original goal, showing just how little code you have to write to make an AI model into a fully fledged coding helper. I still find its simplicity endearing, and it’s the tool I come back to when I want to get an AI to write some code for me.
What’s next?
I’m starting to reach a point where my AI tools are good enough. Now that Rapport has tool use and codeexplorer has a chat mode, I’ve ticked off most of the things that felt lacking in day-to-day use.
I’m not quite sure what I’ll do next. I am starting to hanker a little after doing some more work on toykv, finishing up the range-scan functionality I never got working well. Somehow, the browser tab holding “Programming Rust” is still open after not having written any Rust for over a year now. Perhaps it’s time to blow the dust off and get back to it.
Perhaps something else will catch my eye. So far it’s been a good five months of coding this year!