How to even keep up with explosive AI progress?
One year after DeepSeek crashed U.S. markets
One year ago, DeepSeek’s R1 release crashed US equity markets. NVIDIA shed ~$600B in a single day.
Investors were suddenly asking: had Chinese AI labs caught up? Were the moats illusory? Was all that CapEx wasted?
A year later, AI adoption is exploding and costs are dropping as every lab pushes the Pareto frontier (the best performance achievable at each price point).
And if you’re building with AI, your expectations from three months ago (in November 2025) are probably already outdated.
AI Adoption Explodes
Back in June 2025, I wrote that about 1 in 10 U.S. businesses used AI. Last month, in December 2025, the U.S. Census Bureau measured AI usage at ~18% of businesses. A modest improvement?
Ramp’s credit card data tells a different story: ~47% of businesses are now actively spending on AI tools.
But the real signal is in the AI labs' revenues. Epoch AI surveyed 421 forecasters at the start of 2025. For 2025, they predicted $16B in combined annualized revenue for OpenAI, Anthropic, and xAI.
The actual number? $30.4B.
At the end of 2024, OpenAI and Anthropic had a combined annualized revenue run rate of about $6.4B. Reaching $30.4B means ~4.8x revenue growth in a single year!
It keeps getting cheaper too: open and closed source models
Now for the supply side. Every lab is pushing the Pareto frontier: better models at lower cost, faster than anyone expected.
Near the end of 2025, Google’s Gemini 3 model series has retaken the Pareto frontier.
How did it happen?
DeepSeek showed the way. When I analyzed their inference economics last March, frontier AI labs were running 80%+ gross margins on their API business. DeepSeek’s pricing was near break-even, everyone else was (or is still) printing money.
But DeepSeek did something else: they open-sourced their inference infra techniques and concepts.
Now, open-source inference software (like vLLM) is integrating these techniques directly, diffusing them to anyone with access to GPUs, not just labs with billions in capital.
Hacker News commenters ran some back-of-the-napkin math on the economics of the vLLM release: roughly $0.30 per 1M output tokens at ~2,000 tokens/second (very, very fast) running DeepSeek's latest model.
Compared to DeepSeek's public API price of $0.42/1M output tokens and Gemini 3 Flash's $3/1M output tokens, even small and medium-sized enterprises can in-source their own LLM inference and beat first-party pricing.
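A minimal sketch of that back-of-the-napkin math. The throughput (~2,000 tokens/s) and API prices come from the discussion above; the GPU rental rate is an illustrative assumption chosen to land near the ~$0.30 figure, not a measured quote:

```python
# Back-of-the-napkin self-hosted inference economics.
# The gpu_hourly_usd value below is an assumed, illustrative rental rate.

def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Serving cost in USD per 1M output tokens on rented hardware."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Assumed node rental rate (~$2.15/hr) sustaining ~2,000 output tokens/s
self_hosted = cost_per_million_tokens(gpu_hourly_usd=2.15, tokens_per_second=2_000)
print(f"self-hosted: ${self_hosted:.2f}/1M output tokens")  # ≈ $0.30

# Public API prices cited in the post, USD per 1M output tokens
deepseek_api = 0.42
gemini_3_flash = 3.00
print(f"DeepSeek API is {deepseek_api / self_hosted:.1f}x the self-hosted cost")
print(f"Gemini 3 Flash is {gemini_3_flash / self_hosted:.1f}x the self-hosted cost")
```

The point isn't the exact rental rate; it's that once throughput is high enough, the hardware cost amortized per token undercuts first-party API pricing.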
DeepSeek’s innovations didn’t just help DeepSeek. They raised the floor for everyone.
Claude Code psychosis… just keep building
Demand doubled versus forecasts. Costs continue to plummet. The Pareto frontier moves measurably in 3-6 months.
For builders, you have to reset your expectations every few weeks. Claude Code’s Opus 4.5 integration took the AI world by storm in December.
The model you're building on will be obsolete in ~6 months. That's a feature, not a bug.
I’ll leave others to debate whether there’s a valuation bubble or the benefits of one.
Today’s goal isn’t value capture, it’s value creation. The labs are competing on who can make intelligence cheaper, faster, more accessible.
For founders, this is the best possible environment. The tools get better every month. The costs keep dropping. The adoption keeps accelerating.
My tip: follow smol.ai's AINews email newsletter, which aggregates the signal from the noise every day.
Embrace the Claude Code psychosis?