On Quality and Craft When Building with AI
It is not the AI's fault you're shipping low-quality work with your agent. It's yours.
That's like blaming your word processor for your lack of creativity and poor grammar.
LLMs mostly have no opinions. They lack perspective. They know a large subset of human knowledge, but they don't know what's important. They are trained this way so they can be used by many people in many situations. It's up to us to tell the model what's important and how we want it to think about the world.
You, your worldview, and the resulting values are what determine what a model should do.
Here's the kicker: you have to tell the model what you value.
Without setting the stage, the model guesses. It gives you a reasonable answer given every bit of competing information it has.
There are many conversations happening amongst programmers about slop and the code produced by agents. Does using an agent to program guarantee the result will be slop? Can you actually create great quality software with AI?
Slop is not a uniquely agentic problem. I have joked for a long time about the "goop" I would find in projects: complicated, poorly factored, hard-to-maintain code. People are just as capable of making slop. Have you seen what the ultra-low-cost freelancers in the Shopify world produce? The difference with AI-produced slop is the rate: a model can output code far faster than a human, so low-quality code accumulates far faster too.
Remember what I said earlier about values? The antidote to slop is care and attention. Building excellent, high-craft products is a tremendous effort. You have to sweat the details and you have to do the work to build a coherent system.
Expecting the model to immediately output the highest-quality software on the first shot misses the point. Can you do that? No. You explore. You iterate. You refine. You delete. You learn and you work the problem. Or you don't, and you make slop.
I think software developers are falling for a fallacy: because AI is very intelligent and has near-perfect recall of the entirety of human knowledge, they assume it knows what it should do. It's a hammer, a smart hammer, but it doesn't know which nail to pound.
Models are tools. They are powerful, intelligent tools that are most effective only when we tell them what they should care about.
Just like we would our human teammates.