Note on depreciating AI and appreciating data
This note adds a bit more nuance to the notion of AI depreciation and data appreciation, the interdependence between models and data contexts, and the management required to realise the appreciating value of data.
AI as a depreciating asset
Base AI models look like rapidly depreciating assets: inference costs have fallen ~300x since 2022, open‑source alternatives erode proprietary moats, and new architectures make frontier models (e.g. GPT‑4 equivalents) obsolete within quarters. Unlike traditional software, models face built‑in obsolescence: scaling laws, compute commoditisation and competition repeatedly reset the performance floor.
Investors increasingly assess AI models on depreciation cycles of 6–18 months, with ongoing retraining and hardware refresh, closer to leasing high‑tech equipment than owning durable IP. This is one reason AI capex is so high relative to revenue: it is not just scaling but sustaining a position against rapid erosion.
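To make the arithmetic concrete, here is a minimal Python sketch of what a 6–18 month depreciation cycle implies for remaining value, assuming exponential decay and an illustrative $10m training cost (both are assumptions for illustration, not figures from this note):

```python
# Illustrative sketch: exponential depreciation of a model asset.
# The $10m cost and the half-lives are assumptions, not data from this note.
def model_value(initial_cost: float, months: float, half_life: float) -> float:
    """Remaining value after `months`, given a half-life in months."""
    return initial_cost * 0.5 ** (months / half_life)

# A $10m training run, marked to model 12 months later:
for half_life in (6, 12, 18):
    value = model_value(10e6, months=12, half_life=half_life)
    print(f"half-life {half_life:>2}m -> ${value / 1e6:.1f}m remaining")
```

Even at the forgiving end of the range, roughly a third of the value is gone within a year, which is consistent with the lease‑not‑own framing above.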
Data context as an appreciating asset
Proprietary data networks (customer behaviours, workflows, domain knowledge, feedback loops and so on) are hard to replicate and can appreciate as they grow richer, more representative and better structured. Fine‑tuning on business context creates defensible application‑level advantages that outlast base models, as seen in enterprise AI, where the real edge is “your data + your rules” rather than the LLM itself.
Data doesn’t automatically appreciate, though; it can depreciate through staleness, bias creep, regulatory change or poor governance. Maintaining an appreciating data asset takes sustained investment: curation, labelling, refresh, quality monitoring and ethical/legal compliance to keep the data fit for purpose over time. In practice, this maintenance is often a larger cost than the models themselves.
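As a sketch of what “fit for purpose” can look like operationally, here is a hypothetical freshness and label‑coverage check; the field names and thresholds are invented for illustration, not taken from any real system:

```python
# Hypothetical sketch of a fitness-for-purpose check on a dataset.
# Field names and thresholds are illustrative, not from any real system.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DatasetStatus:
    name: str
    last_refreshed: datetime
    label_coverage: float  # fraction of records with verified labels

def is_fit_for_purpose(status: DatasetStatus,
                       now: datetime,
                       max_staleness: timedelta = timedelta(days=90),
                       min_label_coverage: float = 0.95) -> bool:
    """Flag datasets that have depreciated past (illustrative) thresholds."""
    fresh = now - status.last_refreshed <= max_staleness
    labelled = status.label_coverage >= min_label_coverage
    return fresh and labelled

status = DatasetStatus("customer_journeys", datetime(2025, 1, 1), 0.97)
print(is_fit_for_purpose(status, now=datetime(2025, 6, 1)))  # False: stale beyond 90 days
```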
Key interdependence and caveats
A commoditised model using rich, fresh business data often outperforms a frontier model on stale generic data, but the dependency runs both ways: data value depends on access to good models for extraction and insight generation. The appreciating network emerges from feedback loops: data → models → actions → new data.
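A minimal sketch of that loop, with all function names hypothetical stand‑ins for a business’s training, deployment and logging pipeline:

```python
# Minimal sketch of the data -> models -> actions -> new data loop.
# train, act and observe are hypothetical callables standing in for a
# business's training, deployment and logging pipeline.
from typing import Callable

def feedback_cycle(data: list,
                   train: Callable[[list], object],
                   act: Callable[[object], list],
                   observe: Callable[[list], list]) -> list:
    """One turn of the loop: the model is transient, the data compounds."""
    model = train(data)          # data -> model (the depreciating layer)
    actions = act(model)         # model -> actions taken in the business
    new_data = observe(actions)  # actions -> new data captured
    return data + new_data       # data asset grows richer each cycle
```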
Public and regulated sectors face extra data depreciation from privacy rules and consent changes (perhaps one reason why many technology leaders resist regulation), while competitors can sometimes reverse‑engineer context from similar public data plus clever prompts. Forward‑looking frameworks treat AI assets as modular stacks: depreciating models as a service layer atop appreciating data assets, with governance ensuring portability and freshness.
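One way to picture that modular stack, as a hypothetical Python sketch (all names illustrative): the model sits behind a narrow service interface and can be swapped as it depreciates, while the data asset and its governance persist underneath.

```python
# Hypothetical sketch of the modular stack: a swappable model service layer
# on top of a durable, governed data asset. All names are illustrative.
from typing import Protocol

class ModelService(Protocol):
    """Depreciating layer: any vendor's model can implement this and be swapped."""
    def answer(self, question: str, context: list[str]) -> str: ...

class DataAsset:
    """Appreciating layer: owned and governed independently of any one model."""
    def __init__(self) -> None:
        self._records: list[str] = []

    def add(self, record: str) -> None:
        self._records.append(record)

    def retrieve(self, question: str) -> list[str]:
        # Toy retrieval: keep records sharing any word with the question.
        words = set(question.lower().split())
        return [r for r in self._records if words & set(r.lower().split())]

def ask(model: ModelService, data: DataAsset, question: str) -> str:
    # The model argument is replaceable; the data asset and interface persist.
    return model.answer(question, data.retrieve(question))
```

Swapping one model generation for its successor then means changing only which ModelService implementation is passed in; the retrieval layer, governance and accumulated records are untouched.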