I ran into this recently on a fairly involved software project where we were using AI heavily across multiple stages of the work.
The project had real complexity. Existing constraints. Existing conventions. Edge cases. Integration points. Output expectations. The sort of work where you do not want the AI freelancing.
So we did what careful developers do.
We added guidance.
We added rules, notes, reminders, and supporting material to help the AI stay inside the lines. And to be fair, some of that absolutely helped. It reduced drift. It improved consistency. It made the work safer.
But over time, something else happened too.
The instruction environment began to thicken.
Each addition made sense on its own. None of it felt wasteful. Every new note or rule we added came from a real lesson, a real mistake, or a real need. That was the trap. Because the more justified the additions were individually, the easier it was to miss the cumulative effect.
At a certain point, I realized the AI was no longer just helping with the project.
It was also navigating the growing pile of guidance around the project.
That changed the way I thought about the workflow.
The problem was not that the rules were wrong. The problem was that we had started to create an AI junk drawer. We were not just giving the model what mattered now. We were also giving it a widening layer of cautions, process notes, reminders, and accumulated lessons that all had to be sorted through before the actual task could proceed cleanly.
That has a cost.
Not always an obvious one. The AI can still produce useful work in that kind of environment. That is part of what makes the problem easy to miss. The system still works. But it starts carrying extra friction. Priority gets blurrier. Local rules start competing with foundational ones. Temporary fixes hang around longer than they should.
The environment gets heavier.
That was the turning point for me.
I stopped thinking only in terms of what else we needed to add so the AI would do better. I started thinking more in terms of what belonged in scope for this task, and what was only hanging around because it once seemed helpful.
That is a different mindset.
It is the difference between accumulation and curation.
The lesson was not that guidance is bad. The lesson was that improving AI performance is not just about adding more support material. It is also about knowing when added support starts becoming clutter.
That is why I keep coming back to the workbench metaphor.
A good workbench is prepared for the current job.
A junk drawer contains useful things too, but it makes you dig.
AI workflows can become junk drawers much faster than you would expect.
And once you see that happening, the next improvement is often not another rule.
It is better context design.
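One way to picture curation over accumulation is to give every piece of guidance an explicit scope and assemble the context per task, rather than appending everything forever. This is only an illustrative sketch; the `Rule`, `RULES`, and `build_context` names are hypothetical, not from any real tooling described here:

```python
# Hypothetical sketch: each rule declares which kinds of tasks it
# actually applies to, and the prompt is built by filtering, not
# by dumping the whole drawer into every request.

from dataclasses import dataclass

@dataclass
class Rule:
    text: str
    scopes: set  # task categories this rule is relevant to

RULES = [
    Rule("Follow the repo's error-handling conventions.", {"backend", "api"}),
    Rule("Match the existing CSS naming scheme.", {"frontend"}),
    Rule("Never commit secrets.", {"backend", "api", "frontend"}),
]

def build_context(task_scope: str) -> str:
    """Curate: include only the rules scoped to the current task."""
    relevant = [r.text for r in RULES if task_scope in r.scopes]
    return "\n".join(relevant)

print(build_context("frontend"))  # frontend-scoped rules only
```

The mechanism is trivial; the discipline is the point. Forcing each rule to name its scope makes it obvious when a note is "only hanging around because it once seemed helpful."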
Related guide
For the broader framework behind this, see The AI Junk Drawer Problem.