

The vast majority of my experience has been Claude Code with Sonnet 4.5, now Opus 4.5. I usually have detailed design documents going in, have it follow TDD, and stick to brownfield designs and/or off-the-shelf components. Some of them I call glue apps, since they mostly connect very well-covered patterns. Giving the agent access to search engines, webpage-to-markdown conversion, and in general the ability to do everything it needs inside its Docker sandbox is also critical, especially with newer libraries.
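To make the web-tooling part concrete, here's a toy sketch of the kind of webpage-to-markdown helper I mean exposing inside the sandbox. The library choices (requests, html2text) and the function name are placeholders for illustration, not my exact setup.

    # Rough sketch of a webpage-to-markdown helper the agent can call from
    # its sandbox. Library choices and names here are placeholders.
    import requests
    import html2text

    def fetch_as_markdown(url: str, timeout: float = 10.0) -> str:
        # Pull the page and convert the HTML to markdown so the model gets
        # docs for newer libraries in a form it handles well.
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        converter = html2text.HTML2Text()
        converter.ignore_images = True  # keep output compact for the context window
        return converter.handle(resp.text)

    if __name__ == "__main__":
        print(fetch_as_markdown("https://example.com")[:500])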
So on further reflection, I’ve tuned the process to avoid what they’re bad at and lean into what they’re good at.
A later commenter mentioned an AI version of TDD, and I lean heavily into that. I structure the process so it's explicit which observable outcomes need to work before the agent returns, and it has to actually run the tests to validate that they work. Otherwise, yeah, I've had them fail so hard that they report total success when the program can't even compile.
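For a concrete flavor of what that looks like, here's a minimal sketch of the kind of up-front "definition of done" tests I mean; the module and function names (invoice_sync, export_csv) are made up for this example.

    # Minimal sketch of acceptance tests written before any implementation.
    # The agent has to make these pass (by actually running pytest) before
    # it's allowed to call the task done. Names are hypothetical.
    import importlib

    def test_module_at_least_imports():
        # Guards against "reports total success but the code doesn't even
        # compile/import".
        importlib.import_module("invoice_sync")

    def test_export_writes_expected_header(tmp_path):
        from invoice_sync import export_csv
        out = tmp_path / "out.csv"
        export_csv([{"id": 1, "total": 9.99}], out)
        assert out.read_text().splitlines()[0] == "id,total"

The point is that the observable outcomes are pinned down before the implementation exists, and "done" means the tests actually ran green, not that the model says they did.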
The setup that's helped with a lot of the shortcomings: thorough design, development, and technical docs; Claude Code with Sonnet 4.5, then Opus 4.5; and search plus other web tools. Brownfield designs and off-the-shelf components help a lot, keeping in mind that quality depends on the task being in distribution.