How DeepSeek Exposed AI’s Biggest Lie, and Why Exabits Stands to Win Big
By Mark Fidelman
Last updated
DeepSeek, a relatively unknown AI company from China, just pulled off what many thought was impossible: training a GPT-4-level model for only $5.6 million.
For comparison, OpenAI and Anthropic are burning through billions to train their models.
The immediate reaction? Panic.
Investors dumped Nvidia stock, Big Tech scrambled for answers, and everyone started asking:
“Wait…do we actually need as much compute power as we thought?”
But that’s the wrong question.
DeepSeek didn’t prove that AI compute demand is shrinking. In fact, it proved the opposite.
Thanks to Jevons’ Paradox, which shows that increasing efficiency leads to higher demand, we’re about to enter an era where AI compute needs will skyrocket—1000X beyond what we see today.
And if you think companies like Google, Amazon, and Microsoft should control that future, keep reading.
In the 19th century, economist William Stanley Jevons discovered a counterintuitive truth:
When steam engines became more efficient, coal consumption didn’t decrease—it skyrocketed.
Why? Because lower costs mean higher adoption.
Now, apply this to AI compute:
✔ Cloud storage used to be expensive. Companies stored only what they needed.
✔ Then, Amazon, Google, and Microsoft made it cheaper. Now we hoard 2.5 quintillion bytes of data every day.
✔ When something becomes more accessible, demand explodes.
That’s exactly what’s happening with AI compute.
✔ Startups that couldn’t afford AI before? Now they can.
✔ Businesses that hesitated to automate? Now they won’t.
✔ Entire industries that didn’t have AI use cases? Now they’ll find them.
And that means GPU demand is about to hit unprecedented levels.
Morgan Stanley predicts AI compute demand will double every six months. At this rate, by 2030, training an AGI-level model could require the same energy output as a small country.
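A doubling every six months compounds faster than intuition suggests. A back-of-the-envelope sketch (the doubling cadence is Morgan Stanley's projection; the five-year horizon is an assumption for illustration):

```python
# Back-of-the-envelope: compute demand doubling every 6 months.
# The doubling cadence comes from the Morgan Stanley projection cited
# above; the 5-year horizon (roughly 2025 -> 2030) is an assumption.

def demand_multiplier(years: float, doubling_period_years: float = 0.5) -> float:
    """Growth factor after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

print(demand_multiplier(5))  # 10 doublings over 5 years -> 1024.0
```

Ten doublings in five years is a roughly 1000X increase, which is where the 1000X figure in this article comes from.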
So where will all this compute power come from?
Right now, if you need high-performance GPUs for AI, you have three options:
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Azure
But here’s the problem:
1. They Only Serve Themselves and Big Clients
AWS, GCP, and Azure hoard the best GPUs (Nvidia H100s, H200s, and other high-performance chips) for their own AI projects, or sell access to elite customers like OpenAI.
A friend at Google Cloud admitted that H100s are essentially out of stock; they don’t even have enough for internal use, let alone small businesses or independent developers.
2. High Prices Lock Out Smaller Companies
If you can get access, you’ll pay through the nose:
✔ AWS charges ~$14 per hour per H100 GPU
✔ GCP charges ~$6-7 per hour per H100 GPU
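At those rates, even a single multi-GPU node becomes a six-figure annual expense. A rough cost sketch, using the per-GPU-hour rates quoted above (the 8-GPU node size and 24/7 utilization are assumptions for illustration):

```python
# Rough monthly cost of an 8x H100 node at quoted on-demand rates.
# The ~$14/hr (AWS) and ~$6.50/hr (GCP midpoint) per-GPU figures are
# the rates cited above; 8 GPUs running 24/7 is an assumption.

HOURS_PER_MONTH = 24 * 30  # 30-day month

def monthly_cost(rate_per_gpu_hour: float, gpus: int = 8) -> float:
    """Total monthly cost for a node of `gpus` GPUs at the given hourly rate."""
    return rate_per_gpu_hour * gpus * HOURS_PER_MONTH

print(f"AWS: ${monthly_cost(14.0):,.0f}/month")  # $80,640
print(f"GCP: ${monthly_cost(6.5):,.0f}/month")   # $37,440
```

For a startup, that is payroll-sized money for a single node, before a single model ships.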
We don’t see this as free-market pricing; it’s a controlled-scarcity model.
Big Tech’s compute monopoly artificially inflates costs, making it impossible for SMEs, researchers, and AI startups to scale.
3. AI Compute Risks
Even if you can afford to use their GPUs, you still aren’t truly in control.
✔ Microsoft’s deep ties to OpenAI mean they decide who gets access to the best models.
✔ The big three will control who gets access to AGI compute and who doesn’t.
Imagine spending millions on AI training, only to have your access revoked because Big Tech doesn’t like your business model.
That’s why independent AI compute providers like Exabits are the only viable alternative.
As AI compute demand skyrockets, the biggest winners will be those who control the infrastructure.
Right now, Big Tech owns most of the AI data centers, but Exabits is positioning itself as the alternative.
✔ High-Performance Compute, Not “Decentralized Hype” – Unlike other “decentralized compute” projects that lack technical credibility, Exabits is enterprise-grade and battle-tested. We offer real-world GPU clusters that meet the needs of high-end AI workloads.
✔ Scalability for the Coming AI Boom – As AI demand increases, Exabits is expanding compute capacity across multiple regions, ensuring consistent access for startups and enterprises alike.
✔ Affordable Pricing – While AWS and GCP jack up costs, Exabits provides GPU power at competitive rates, accessible to SMEs, researchers, and AI developers.
✔ Access to High-End GPUs – While Big Tech hoards Nvidia H100s for their own AI models, Exabits ensures startups, enterprises, and researchers get access to world-class compute resources.
Current GPUs Available Through Exabits:
✔ 4090, A100, Ade6000, MI50, etc. – 60,000+
✔ H100 – 3,000+
✔ H200 – 4,000+
AI is still in its infancy, but it’s evolving fast.
Today, AI generates text, analyzes data, and automates tasks.
Tomorrow, AI Agents will negotiate deals, run companies, and make autonomous decisions.
The endgame? AGI (Artificial General Intelligence).
AGI will require 1000X more compute than today.
So who controls the next generation of AI?
If it’s Google, Amazon, or Microsoft, they will decide who gets AGI access, and who doesn’t.
If it’s Exabits and independent compute providers, then AI remains open and available to everyone.
That’s why Exabits is building the AI backbone of the future.
If you believe AI’s future should be affordable, accessible, and free from Big Tech control, then Exabits is the answer.