Every Django developer using AI coding tools hits this wall within the first week. The assistant writes a clean ORM query, a well-structured serializer, or a migration that works perfectly. You come back the next day, ask it to extend that same code, and it contradicts itself. It generates a query that violates the schema it defined yesterday, a serializer that conflicts with the model it wrote last Tuesday, a migration that fights the one it created an hour ago. This is not a bug in your setup. It is a structural property of how every current AI coding assistant handles state.
Pithy Cyborg | AI FAQs – The Details
Question: Why does my AI coding assistant keep breaking Django ORM queries and migrations it wrote itself, and what is the statefulness problem that makes AI tools unreliable on production Django APIs?
Asked by: Claude Sonnet 4.6
Answered by: Mike D (MrComputerScience) from Pithy Cyborg.
Why AI Coding Tools Have No Memory of What They Built
Every AI coding assistant (Claude Code, Cursor Composer, GitHub Copilot, GPT-4o in VS Code) operates from a context window that resets or degrades across sessions. The model does not maintain a persistent internal representation of your codebase between conversations. It constructs a temporary understanding of your project from whatever files are in its context window at the moment you ask the question.
This matters acutely for Django projects because Django’s architecture is deeply stateful by design. Your models define your schema. Your migrations are a sequential history of schema changes. Your serializers depend on your models. Your views depend on your serializers. Your URLs depend on your views. Every layer has precise dependencies on every other layer, and those dependencies accumulate over time in ways that a context window snapshot cannot fully capture.
When you ask an AI assistant to extend a Django API on day five of a project, it is not reasoning from a complete mental model of everything it helped you build on days one through four. It is reasoning from whatever subset of your codebase fits in the context window you opened that session, plus its general training knowledge about Django patterns. If the migration history, the current model state, and the existing serializer structure are not all explicitly in context simultaneously, the assistant generates code that is locally plausible but globally inconsistent with what already exists.
The model is not forgetting. It never knew in the first place. Each session is a fresh inference from partial information.
The Three Django-Specific Failure Modes This Produces
The statefulness problem manifests in three recurring patterns that Django developers using AI tools hit repeatedly, in roughly this order of frequency.
Migration conflicts are the first and most destructive. An AI assistant asked to add a field to a model generates a migration without awareness of the current migration graph. If your context window did not include the full migrations directory, the assistant generates a migration with a dependencies list that references the wrong parent, produces a duplicate operation already handled in a previous migration, or creates a field with constraints that conflict with data already in the database. Django’s migration framework detects some of these at apply time. Others pass silently and corrupt your schema state in ways that only surface under specific query conditions.
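The stale-parent failure is easy to see in miniature. Below is a framework-free sketch (plain Python, no Django import; the migration names are invented) of the leaf check Django performs when it builds the migration graph: every migration names its parent, and a healthy graph has exactly one leaf. An assistant writing a migration without sight of the full migrations directory can anchor it to a stale parent, leaving two leaves, the same state that makes Django demand a merge migration.

```python
# Minimal sketch of a migration-graph sanity check, NOT Django's real
# implementation. Each migration maps to its parent within the same app.
def find_leaves(migrations):
    """Return migrations that nothing else depends on (the graph's leaves)."""
    parents = {dep for dep in migrations.values() if dep is not None}
    return sorted(name for name in migrations if name not in parents)

# History the assistant helped write across earlier sessions:
history = {
    "0001_initial": None,
    "0002_add_email": "0001_initial",
    "0003_email_unique": "0002_add_email",
}

# Day five: the assistant, missing 0003 from its context window,
# anchors its new migration to 0002 -- a stale parent.
history["0004_add_phone"] = "0002_add_email"

print(find_leaves(history))  # ['0003_email_unique', '0004_add_phone']
```

Two leaves is exactly the condition `./manage.py makemigrations --merge` exists to repair; catching it before commit is far cheaper than after.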
ORM query drift is the second. An assistant that helped you define a model with a specific related_name, a custom manager, or a select_related optimization on day one will generate queries on day five that ignore those design decisions entirely. It queries the default manager when you have a custom one. It uses a reverse relation name that was overridden. It generates an N+1 query pattern on a relationship you already optimized with prefetch_related because the optimization lives in a view file that was not in the current context window. The query runs. It returns correct results. It is silently three times slower than the version the same assistant would have written if it remembered what it built.
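The drift is about query count, not correctness, which is why it slips through review. Here is a hedged, framework-free sketch (invented data; in real Django the fix is `prefetch_related`, not the manual batching shown) that counts simulated queries for the two shapes of the same lookup:

```python
# Framework-free illustration of N+1 query drift. CountingDB stands in
# for the database and simply counts queries issued.
class CountingDB:
    def __init__(self, orders_by_user):
        self.orders_by_user = orders_by_user
        self.queries = 0

    def fetch_orders(self, user_id):        # one query per call -> N+1 risk
        self.queries += 1
        return self.orders_by_user.get(user_id, [])

    def fetch_orders_bulk(self, user_ids):  # one query for everyone
        self.queries += 1
        return {u: self.orders_by_user.get(u, []) for u in user_ids}

db = CountingDB({1: ["a"], 2: ["b", "c"], 3: []})
users = [1, 2, 3]

# Day-five AI output: loops per user, because the batching decision lives
# in a file outside its context window. Correct results, N queries.
naive = {u: db.fetch_orders(u) for u in users}
assert db.queries == len(users)

# Day-one design: one batched query, identical results.
db.queries = 0
batched = db.fetch_orders_bulk(users)
assert db.queries == 1
assert naive == batched
```

Both versions return the same dictionary; only the query counter tells them apart, which is precisely why this failure mode survives testing.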
Serializer and model divergence is the third. As models evolve, serializers that the AI assistant wrote in earlier sessions become stale in the assistant’s context but not in your codebase. The assistant regenerates or extends a serializer based on its understanding of the model, which reflects an earlier state. The result is a serializer that omits new fields, incorrectly types changed fields, or applies validation logic that the model has since superseded. This class of error is particularly insidious because serializers that are subtly wrong often pass unit tests while failing on edge cases in production data.
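A cheap guard is a drift check you run in CI. The sketch below is framework-free with invented field names; in a real DRF project the model side would come from `Model._meta.get_fields()` and the serializer side from `SerializerClass().fields`, but the set arithmetic is the same:

```python
# Sketch of a model/serializer drift check. The field sets below are
# invented stand-ins; wire them to your real model and serializer.
def field_drift(model_fields, serializer_fields):
    """Return fields the serializer is missing and fields it invents."""
    model_fields, serializer_fields = set(model_fields), set(serializer_fields)
    return {
        "missing_from_serializer": sorted(model_fields - serializer_fields),
        "stale_in_serializer": sorted(serializer_fields - model_fields),
    }

# The model gained "phone" in a later session; the serializer still
# carries "fax", which the model has since dropped.
drift = field_drift(
    model_fields={"id", "email", "phone"},
    serializer_fields={"id", "email", "fax"},
)
print(drift)
# {'missing_from_serializer': ['phone'], 'stale_in_serializer': ['fax']}
```

An assertion that both lists are empty, run as a test per serializer, turns this silent divergence into a loud CI failure.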
How to Actually Fix the Statefulness Problem in Your Django Workflow
The fix is not switching tools. Every current AI coding assistant has this problem because it is a property of context window architecture, not a specific product failure. The fix is a context management discipline that compensates for what the tools cannot do themselves.
A Django-specific context file that you maintain and include in every AI coding session is the highest-leverage single habit change. This file contains your current model definitions, your migration graph tail (the last three to five migrations), your active serializer classes, and any custom managers or querysets. It is not documentation. It is a machine-readable state snapshot that you paste or include at the start of every session involving that part of the codebase. Cursor’s notepads feature and Claude Code’s CLAUDE.md file are both designed for exactly this pattern. Most developers use them for project setup instructions. The higher-value use is current schema state.
Constraining the AI’s scope per session to a single Django app rather than the full project dramatically reduces the surface area where context gaps can produce inconsistencies. An assistant working on your users app with full context of that app’s models, migrations, and serializers produces far more reliable output than one working across your entire project with partial context of everything.
Running ./manage.py migrate --check and ./manage.py check as mandatory steps after every AI-generated migration before committing catches the migration conflict class of failure immediately. (Note that ./manage.py validate was removed from Django long ago; check is its replacement.) This is obvious in retrospect and consistently skipped under time pressure. Make it a pre-commit hook rather than a manual discipline.
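One way to make that non-optional is a small Python hook. This is a sketch, not a definitive setup: save it as .git/hooks/pre-commit (executable), and adjust the command list to however you invoke manage.py in your project.

```python
#!/usr/bin/env python
# Sketch of a git pre-commit hook that blocks the commit when Django's
# checks fail. The CHECKS list is an assumption; adapt to your project.
import subprocess
import sys

CHECKS = [
    ["./manage.py", "check"],                                    # system checks
    ["./manage.py", "migrate", "--check"],                       # unapplied migrations
    ["./manage.py", "makemigrations", "--check", "--dry-run"],   # missing migrations
]

def run_checks(commands):
    """Run each command; return True only if every one exits 0."""
    ok = True
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            print(f"pre-commit: {' '.join(cmd)} failed", file=sys.stderr)
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if run_checks(CHECKS) else 1)
```

Because the hook runs every check rather than stopping at the first failure, one commit attempt surfaces every class of problem at once.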
What This Means For You
- Create a CLAUDE.md or Cursor notepad file for each Django app containing current model definitions, the last five migration names, and active serializer classes, and include it in every coding session involving that app rather than assuming the assistant remembers what it built.
- Scope each AI coding session to a single Django app with complete context of that app rather than opening your full project with partial context of everything, because the assistant’s consistency degrades proportionally with the ratio of what exists to what it can see.
- Add ./manage.py migrate --check as a pre-commit hook so migration conflicts from AI-generated code are caught at commit time rather than at deploy time or, worse, after applying to a production database.
- Treat every AI-generated ORM query touching a relationship as suspect until you have verified it against your current model definition and confirmed it uses the correct manager, related name, and prefetch strategy, because relationship queries are the failure mode most likely to pass testing while degrading silently in production.
