
Ducks, time-travelers, and juniors

The two most prominent metaphors for working alongside ChatGPT so far are “autocomplete” and “tutor”. The autocomplete use case is exemplified by GitHub Copilot; the tutor use case is…just asking ChatGPT to explain things to you.

Here are a few more I’ve come across recently that feel compelling and reflect experiences I’ve had myself.

…as a rubber duck

Rubber ducking is the programming technique of talking aloud through a problem. Simply discussing the problem out loud to yourself (or a rubber duck) is often enough to grant you a breakthrough.

Ryan Singer describes a similar process using ChatGPT:

The a-ha moment (see thread) came from writing the prompt not getting the answers. Trying to articulate what I wanted and comparing against the "conventional wisdom" of the LLM helped me get to an innovative idea

LLMs are in some sense a great embodiment of “conventional wisdom.” Using that perspective as a tool can help you see the default path nearest to the one you’re walking.

Often when you’re being clever, you’re clever in too many ways or across too many parameters at once. LLMs can help you orient within convention, so you can see where you should veer away from norms and where it’s best to stick to them.

…as a time traveler

Height, a project management app, released their copilot feature this week. Here’s a tweet with a demo video:

In the demo, a bug report is filed with very little context: “Input text goes offscreen when creating a new task on iOS app.” From there, copilot asks a few clarifying questions:

“Is this issue occurring on a specific iOS device or version?”

“Does this issue occur in both portrait and landscape orientation?”

When you’re knee-deep in a problem, it can be hard to see all the context that you’re embedded within. This is similar to the experience of coming back to old code you wrote six months ago and wondering “what was that guy thinking?” Even your own thinking stops making sense once you lose the context it occurred within.

AI in this case acts as a context-less questioner. It’s a future you, time traveling back to ask the basic questions you’ll wish you had captured six months from now.

…as a junior employee

Martin Fowler captures how his coworker, Xu Hao, prompts ChatGPT for feature code.

Most people default to asking ChatGPT questions as if it were Stack Overflow. It’s quite good at this, but this post is a fascinating account of approaching LLMs from a much higher level of abstraction.

My take away from this discussion was that using chain of thought and generated knowledge prompting approaches can be a significantly useful tool for programming. In particular it shows that to use LLMs well, we need to learn how to construct prompts to get the best results. This experience suggests that it's useful to interact with the LLM like a junior partner, starting them with architectural guidelines, asking them to show their reasoning, and tweaking their outputs as we go.

With enough guardrails (and using technology popular enough that ChatGPT knows what it's talking about), we can get a sketch of what we’re looking for.
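
To make that concrete, here’s a rough sketch of what that kind of prompt might look like using the OpenAI Python SDK. To be clear, this isn’t Xu Hao’s actual prompt — the architectural guidelines and the task below are invented placeholders. The shape is what matters: constraints up front, then a request for the plan before any code.

```python
# A minimal sketch of "junior employee" prompting with the OpenAI Python SDK.
# The guidelines and task are invented placeholders, not Xu Hao's prompt;
# the structure is the point: constraints first, then reasoning before code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

guidelines = """You are a junior developer on my team.
Architecture guidelines:
- The frontend is React with TypeScript; state is managed via Redux.
- All server calls go through a single api/client.ts module.
- Tests use Jest; every component gets a test file.
Always explain your plan step by step before writing any code."""

task = "Add an 'archive task' button to the task detail view."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": guidelines},
        # Ask only for the plan first, so we can review the reasoning
        # and correct course before any code gets generated.
        {"role": "user", "content": f"{task}\n\nDescribe your solution plan first. Do not write code yet."},
    ],
)
print(response.choices[0].message.content)
```

From there you’d review the plan, push back on anything off-base, and only then ask for the code, piece by piece.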

At a certain point the effort of describing those guardrails is greater than the effort of just building the thing you’re looking for in the first place. Which is hilarious, because that’s the biggest criticism of Fowler’s regular work as well (i.e., that the effort of using “Agile”, “Extreme Programming”, and the other methodologies he espouses is greater than the shortcomings of the work produced without them).

…as an accountability partner

This is a more hypothetical use case that occurred to me this past week, but it wouldn’t surprise me if some businesses spring up to provide it.

Y Combinator is known for having a pretty standard set of advice. Business mantras of a sort. You’ll occasionally see people online say something in the vein of: “why would I do YC? They’re just telling you the same basic stuff they always say.”

But that’s how most things work: the steps to doing hard things are often not that complicated. They don’t require grand acts of brilliance, but rather relentless adherence to the basic principles of what works. And those principles, while basic, can be hard to hold in your head all at once. A YC partner is often just re-presenting that information in a way that’s contextualized to your specific situation.

I suspect there’s a way to prompt ChatGPT to role-play a YC partner. You could tell it about what you did at work this week, and it’d give you some helpful feedback. It’d integrate the bog-standard principles and advice, but contextualize them to your specific situation.
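
For what it’s worth, here’s a sketch of what that might look like. The partner persona and the weekly update below are entirely invented, and the usual caveats about taking an LLM’s business advice apply.

```python
# A hypothetical "accountability partner" prompt -- a sketch, not a product.
# Uses the OpenAI Python SDK; the persona and update text are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = """You are a Y Combinator partner doing weekly office hours with a founder.
Ground your feedback in standard YC advice: talk to users, launch early,
do things that don't scale, focus on moving a single growth metric.
Ask one or two pointed follow-up questions, then give direct, specific feedback."""

weekly_update = """This week: shipped the onboarding redesign, wrote two blog posts,
and had three sales calls. Signups are flat week over week."""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": weekly_update},
    ],
)
print(response.choices[0].message.content)
```

You’d paste in your actual week and iterate from there.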

Free business idea!