The $200 Billion Question
The Financial Future Of AI

BEYOND VIRTUAL
You know that friend who keeps ordering bottle service when their credit card's maxed? That's OpenAI right now. But instead of expensive champagne, they're buying server farms that use enough electricity to power Florida.
HSBC just dropped a financial analysis that should make every business leader pause. The numbers are... let's call them "wildly unsustainable."
Here’s the logic: They need to achieve perfect AGI (Artificial General Intelligence) before the bank account hits zero.
If they achieve a model that can out-think humans, the debt won't matter because they’ll own the keys to the future economy. But if they hit a plateau, say if GPT-5 or 6 only offers "incremental" gains instead of "beyond human" intelligence, the investors currently footing the bill might finally ask for their change back.
Feature Story
The AI Security Blind Spot We Might Be Missing

Remember the good old days when everyone turned to Wikipedia for “reliable” information? Then came the era when Wikipedia was the site our teachers told us not to cite. Now it helps teach AI how to think.
While Wikipedia represents less than 5% of training data, it accounts for nearly half of the real-time citations AI systems provide. That trust is exactly the vulnerability.
The "250-Document" Poisoning
Here is where it gets messy. We used to think you had to corrupt a huge percentage of the internet to "trick" an AI. We were wrong.
A 2025 study by Anthropic and the Alan Turing Institute found that it takes just 250 malicious documents to poison an LLM, regardless of whether it was trained on billions or trillions of tokens. Let me say that differently: 250 bad pages in a sea of trillions can corrupt an entire AI model.
Think about how easy it would be to host 250 obscure blogs or PDF files on the open web.
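For a sense of scale, here is a quick back-of-envelope calculation in Python. The token counts are my own illustrative assumptions, not figures from the study:

```python
# Back-of-envelope: what share of a training corpus do 250 poisoned documents make up?
# These token counts are illustrative assumptions, not numbers from the study.

poisoned_docs = 250
tokens_per_doc = 1_000                  # assume a short web page is ~1,000 tokens
corpus_tokens = 10_000_000_000_000      # assume a 10-trillion-token training corpus

poisoned_tokens = poisoned_docs * tokens_per_doc
fraction = poisoned_tokens / corpus_tokens

print(f"Poisoned share of the corpus: {fraction:.1e}")          # ~2.5e-08
print(f"Roughly 1 poisoned token per {corpus_tokens // poisoned_tokens:,} clean ones")
```

One poisoned token in every forty million, and under these assumptions that is still enough to matter.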
While Wikipedia’s "anyone-can-edit" model is a target, it’s actually the hardest place to attack because human editors and bots delete vandalism in seconds. The real vulnerability is the rest of the internet. AI models scrape everything, and if an attacker hides 250 "poisoned" pages on the unpoliced corners of the web, your AI might be trained on them right now without anyone even noticing.
The Loop That Breaks Everything
MIT Technology Review documented what they call a "linguistic doom loop": AI translators trained on mediocre Wikipedia articles produce worse translations, those translations get added back to Wikipedia, and new models end up training on their own flawed output, a.k.a. Model Collapse.
Garbage in, garbage out, amplified at machine scale.
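If you want to see the loop in miniature, here is a toy Python simulation, entirely my own illustration rather than anything from the MIT Technology Review piece. Each "generation" of a model learns only from a finite sample of the previous generation's output, and the long tail of rare words steadily disappears:

```python
# Toy illustration of the doom loop: each generation trains only on the
# previous generation's output, and rare patterns quietly vanish.
# Purely illustrative; real model collapse in LLMs is far more complex.
import numpy as np

rng = np.random.default_rng(42)
vocab_size = 1_000

# "Real" language: a long-tailed, Zipf-like distribution over 1,000 words
true_probs = 1.0 / np.arange(1, vocab_size + 1)
true_probs /= true_probs.sum()

probs = true_probs
for generation in range(1, 6):
    # The new "model" only ever sees a finite sample of the old model's output...
    sample = rng.choice(vocab_size, size=5_000, p=probs)
    # ...and its entire "knowledge" is the empirical frequencies of that sample.
    counts = np.bincount(sample, minlength=vocab_size)
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"Generation {generation}: {surviving}/{vocab_size} words survive")
```

Run it and the surviving-vocabulary count only ever goes down: once a word drops out of one generation's data, no later generation can ever learn it back.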
In April 2025, NewsGuard found that leading chatbots repeated Russian disinformation about France 33% of the time. Through "AI Grooming," attackers flood the web with thousands of fake articles to ensure AI scrapers ingest their narrative as the "majority" view.
And their strategy is simple, too. Pollute the data well, and the AI poisons itself. Repeat the cycle.
Read the detailed research here: https://www.anthropic.com/research/small-samples-poison
Visionary Voices
HSBC Says OpenAI Can't Actually Afford To Sustain Themselves

When investment banks start running the numbers on your business model and come back with "yeah, this doesn't work," it's worth paying attention.
HSBC Global Investment Research analyst Nicolas Cote-Colisson just said what everyone's been thinking but nobody wanted to say: OpenAI faces a $207 billion funding gap by 2030, and their compute costs are projected to hit $1.4 trillion by 2033. That's the scale of what's being called the AI megacycle.
$1.4 trillion is not a business plan anyone would want, is it? It's what we'd call a bet-the-farm gamble that infinite scale will somehow fix infinite losses.
And here's the more worrying part: even if OpenAI reaches 44% of the world's adult population by 2030, they'll still be bleeding cash. Half the planet could be using ChatGPT, and they still couldn't pay their bills.
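To get a feel for why scale alone doesn't fix the math, here's a rough sketch. Only the 44% adoption figure and the $207 billion funding gap come from the HSBC coverage above; every other number is an assumption I've picked just to show the shape of the problem:

```python
# Back-of-envelope: why huge user numbers don't automatically pay the bills.
# Only the 44% adoption rate and the $207B funding gap come from the HSBC figures;
# everything else here is an illustrative assumption.

world_adults = 5.8e9                 # rough global adult population (assumption)
adoption = 0.44                      # HSBC scenario: 44% of adults by 2030
paying_share = 0.05                  # assume only 5% of users ever pay
subscription_per_year = 20 * 12      # assume a $20/month plan

users = world_adults * adoption
paying_users = users * paying_share
annual_subscription_revenue = paying_users * subscription_per_year

funding_gap = 207e9                  # HSBC's projected gap by 2030

print(f"Users: {users / 1e9:.2f}B, of which paying: {paying_users / 1e6:.0f}M")
print(f"Illustrative subscription revenue: ${annual_subscription_revenue / 1e9:.0f}B per year")
print(f"Projected funding gap to cover:   ${funding_gap / 1e9:.0f}B")
# Under these assumptions, roughly $31B a year in subscriptions is set against
# a $207B hole -- and that's before the trillion-dollar compute bill.
```

Change the assumptions however you like; the gap between "billions of users" and "bills actually paid" doesn't close easily.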
Meanwhile, In A Different Universe
Anthropic is projected to break even by 2028. OpenAI's losses that same year? $74 billion. Same industry and same technology, but completely different outcomes.
What's the difference? Anthropic went boring and focused on enterprise customers who actually pay for things. They built tools that save companies measurable money. No viral TikToks, no consumer hype cycles. Just a relentless focus on "does this thing make our customers more profitable?"
The results speak for themselves. Claude holds 42% market share in coding versus OpenAI's 21%. For overall corporate AI use, it's 32% versus OpenAI's 25%.
And my favorite part: Microsoft, OpenAI's biggest investor and partner, recently added Claude to Copilot. Even OpenAI's own backers admitted someone else built something better for actual work.
Why This Should Change Your Strategy
This tech gossip is a great roadmap for what works and what doesn't in the AI world.
If the company that invented ChatGPT can't figure out how to make money from it, what makes you think your AI implementation will be different?
The lesson isn't "don't use AI." It's "don't use AI like OpenAI uses AI."
Being first doesn't matter. Having the most users doesn't matter. What matters is solving specific problems that generate returns faster than they burn cash.
The Trend
2026: The Year "Trust The AI" Becomes Career Suicide

Blindly trusting AI output has shifted from "productivity hack" to significant corporate risk. As we move through 2026, the industry is turning away from general-purpose automation and toward rigorous Human-in-the-Loop (HITL) verification.
The companies leading this year aren't those with the largest compute budgets, but the ones that have implemented AI Governance to manage three specific lessons from 2025:
The Death of Implicit Trust - We now know that Wikipedia and Reddit, while useful for search, are primary targets for "AI Grooming." Relying on AI-generated research without independent verification now exposes companies to laundered disinformation and legal liability.
Monitoring the “Linguistic Doom Loop” - Businesses are already seeing the first real effects of the Linguistic Doom Loop. Without expert oversight to identify Model Drift and Data Rot, companies risk publishing “AI Slop” that degrades their brand authority and alienates users who are increasingly sensitive to machine-generated patterns.
Choosing Stability As The Strategy - The 2026 market has split. On one side are companies chasing “AGI” hype and burning billions in venture capital. On the other are companies adopting Agentic AI for specific, well-defined tasks. Surviving the AI Bubble requires the ability to distinguish between a model that sounds smart and a system that is actually accurate.
The bottom line: AI is no longer a pilot project; it's a core utility. And just like any utility, it requires a Safety Engineer. Whether through specialized staff or dedicated oversight, the goal is the same: ensuring your AI creates value rather than compounding risk.
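As a concrete (and entirely hypothetical) example of what that oversight can look like in practice, here is a minimal sketch of a human-in-the-loop gate. The names and thresholds are invented for illustration; the pattern is what matters: nothing AI-generated ships on model confidence alone.

```python
# Minimal human-in-the-loop (HITL) gate for AI-generated content.
# All names and thresholds are hypothetical; the pattern is the point.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    model_confidence: float      # the model's own (unreliable) self-score
    citations_verified: bool     # did an independent check confirm the sources?

def review_gate(draft: Draft, auto_threshold: float = 0.95) -> str:
    """Decide whether an AI draft can ship or must go to a human reviewer."""
    # Rule 1: unverified citations always trigger human review (the "AI Grooming" risk).
    if not draft.citations_verified:
        return "HUMAN_REVIEW: citations not independently verified"
    # Rule 2: model confidence is necessary but never sufficient on its own.
    if draft.model_confidence < auto_threshold:
        return "HUMAN_REVIEW: confidence below threshold"
    # Rule 3: even approved output gets logged for periodic human audit.
    return "APPROVED: logged for spot audit"

if __name__ == "__main__":
    print(review_gate(Draft("Q3 market summary...", 0.97, citations_verified=False)))
    print(review_gate(Draft("Internal release notes...", 0.99, citations_verified=True)))
```

The specifics will differ company to company; the non-negotiable part is that a human, not the model, owns the final yes.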
A FINAL NOTE
We are watching two versions of the future collide. One bets $1.4 trillion on the hope that infinite scale eventually equals profit. The other, led by efficiency, focuses on solving specific problems today for people willing to pay.
Blind trust has become a corporate liability. In a world where 250 documents can poison a model and AI "slop" is eroding the very data models train on, the quality crisis isn't plateauing; it's compounding.
The most valuable person in your company this year won't be the one who prompts the AI, but the one who audits it. In a sea of machine-generated noise, human-verified insight is the only currency that still clears the bank.
Until next time,
