
Many companies are racing to implement AI in the form of agentic workflows, automated processes, and AI-powered products and services. Many of these implementations are built into the core of how these companies operate. Yet few understand, or have projected, the rate at which a behind-the-scenes technical debt is compounding: an unanticipated structural cost for the ongoing maintenance of AI implementations. Consequently, few firms have budgeted for this tech upkeep, but it's coming soon.
We’ve built AI-powered tools and automations at my firm using specific AI models. Some internal. Some that our customers use. We invested significant time and resources testing, tuning, and calibrating each service specifically for its model’s quirks and capabilities. However, the model we chose (in one instance) will be retired later this year (October 2026).
The challenge isn’t the 10-minute migration switch to point our API to a new AI model. The expensive part that gives rise to the mounting technical debt is all the testing that should be completed after we switch to a new AI model. Every workflow we built around the old model must be validated against the new one.
Because each AI model has its own set of strengths, weaknesses and quirks, and because AI is a probabilistic process (rather than deterministic), outputs are likely to change with AI model updates. In other words, what worked reliably before may not work the same way with a new model.
Consequently, companies must test to ensure the new model is still useful… for hundreds of cases. And that work is laborious, time-consuming, and not really optional.
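What does that testing look like in practice? A minimal sketch: keep a set of "golden" test cases that encode what the old model reliably did, and run each candidate model against them before switching. Everything here is illustrative, not our actual harness; the client, cases, and keyword checks are assumptions standing in for whatever validation a real workflow needs.

```python
# Hypothetical regression harness for an AI model migration.
# The cases and the fake model below are illustrative assumptions.

def check_case(run_model, case):
    """Run one golden case; pass only if the output keeps its required content."""
    output = run_model(case["prompt"])
    return all(keyword in output for keyword in case["must_contain"])

def regression_report(run_model, cases):
    """Return the names of the cases that fail under the candidate model."""
    return [c["name"] for c in cases if not check_case(run_model, c)]

# Stand-in for calling the new model's API.
def fake_new_model(prompt):
    return "Summary: revenue rose 12% year over year."

golden_cases = [
    {"name": "summary-keeps-figure", "prompt": "Summarize the report.",
     "must_contain": ["12%"]},
    {"name": "summary-mentions-revenue", "prompt": "Summarize the report.",
     "must_contain": ["revenue"]},
]

failures = regression_report(fake_new_model, golden_cases)
print(failures)  # an empty list means every golden case still passes
```

The cheap part is swapping the model; the expensive part is writing and maintaining hundreds of these cases, because keyword checks like the ones above only approximate "still useful."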
This Is Not Y2K All Over Again
An analogy that comes to mind is the Y2K problem, where an early technical shortcut created an unforeseen crisis decades later.
Y2K, short for “Year 2000,” was a widespread software problem rooted in how early programmers stored dates. To save memory, which was expensive and scarce in the early days of computing, dates were stored using only two digits for the year rather than four. Consequently, 1975 was stored as “75.” The problem was that when the calendar rolled over to the year 2000, systems would read the year as “00” and potentially interpret it as 1900, causing calculations, transactions, and automated processes to fail or produce wildly incorrect results. Because this date logic was baked into decades of legacy code running everything from banking systems to power grids to government infrastructure, the fix required an enormous global effort to audit, identify, and manually update millions of lines of code across virtually every industry. This was an expensive technical debt problem to rectify. Further, the deadline was immovable. January 1, 2000 was coming regardless of whether you were ready.
Much like the deprecation of an older AI model, Y2K was a real, forced migration event with a hard deadline.
But Y2K happened once.
With AI models, the deadline becomes a series of deadlines. It’s recurring.
Further, with the pace of AI development, today’s capable model is tomorrow’s legacy system. AI providers will continue to release new models that leapfrog the previous ones. And when they do, the older models eventually get retired. The API stops working. Companies will either migrate or see their AI-enabled services and automations go dark.
That recurrence changes how firms need to plan for it.
The AI Velocity Tax
The faster AI models iterate and improve, the faster the deprecation cycle turns, and the more frequently organizations are forced to undergo these migrations. This is the AI Velocity Tax.
Every migration means regression testing, validation, and potentially rearchitecting workflows built specifically for the old model. The better and faster the AI ecosystem moves, and it is moving remarkably fast, the higher the ongoing maintenance burden for everyone who has built on top of it.
Compounding Factors
The velocity tax is a function of:
- How frequently AI models are deprecated. This part is outside your control, determined by the providers whose platforms you build on.
- How deeply you have embedded agentic workflows into your systems, processes, and products.
If you built a simple AI feature that summarizes a document, your migration burden is relatively low. But if you have built agentic workflows that were tuned, calibrated, and tested specifically for a model’s quirks and capabilities, workflows that are core to how your business operates or core to a service you deliver to customers, your exposure may be significant. The deeper the integration, the higher the velocity tax you pay every time a model turns over.
If the AI capability is customer-facing, the stakes are higher still. It is not just internal operational disruption. It is potential service degradation to the people who have paid for the service you built.
What This Means Going Forward
I am not writing this to argue that companies should slow down their AI adoption. The competitive advantages are real. The capabilities are genuinely remarkable. The companies that adopt AI have significant operational leverage over those that do not.
But the economic model for AI-powered operations needs to mature. Right now, most organizations are booking AI implementation as a capital event. Build it. Deploy it. Move on to the next one. The velocity tax adds an ongoing future operating expense for model maintenance and redeployment that grows in proportion to how aggressively companies have built AI into their internal workflows and the services they offer.
The companies that will navigate this best:
- Treat AI workflow maintenance as a recurring operational budget line, not an afterthought.
- Build AI-enabled tools and services with future migration in mind: modular, testable, and well-documented.
- Understand that the model their agentic workflow runs today is not the model it will run in two years (or less).
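The "modular and testable" point above can be sketched concretely: keep the model name in one configuration object and inject the API call, so a migration is a one-line config change followed by a regression run. This is a generic pattern under assumed names, not any specific provider's API or our production architecture.

```python
from dataclasses import dataclass

# Illustrative sketch: the model choice lives in one configurable place,
# so swapping models is a config change plus a regression-test run.
@dataclass
class ModelConfig:
    name: str                  # e.g. "legacy-model" (hypothetical name)
    temperature: float = 0.0

class Workflow:
    def __init__(self, config, call_model):
        self.config = config
        self.call_model = call_model  # injected, so tests can stub it

    def summarize(self, document):
        prompt = f"Summarize in one sentence:\n{document}"
        return self.call_model(self.config.name, prompt)

# Stub standing in for a real provider call.
def stub_call(model_name, prompt):
    return f"[{model_name}] summary of {len(prompt)} chars"

wf = Workflow(ModelConfig(name="legacy-model"), stub_call)
print(wf.summarize("Quarterly results..."))

# Migrating is one changed config value; the validation work is the real cost.
wf_new = Workflow(ModelConfig(name="new-model"), stub_call)
print(wf_new.summarize("Quarterly results..."))
```

The design choice is that nothing outside `ModelConfig` knows which model is running, which is exactly what makes the ten-minute switch ten minutes; the velocity tax is everything that happens after.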
We are all going to be doing AI model migration work. The question is whether we are planning for it.