AI Exploited Pt. 2

Attention Decay? Why We Can't Focus Anymore

BEYOND VIRTUAL

The harsh reality that none of us wanted: AI is flooding the internet with garbage content. But there's another crisis unfolding, and this one's happening inside our skulls.

Our attention span is collapsing, and we may not even realize it. The average human can now reportedly focus for just 8.25 seconds, shorter than a goldfish's attention span, and down 33% since 2000. We're not just distracted anymore. We're cognitively broke.

But this isn't an accident; it's yet another exploitation. Generative AI is being weaponized to find the exact "slot machine" triggers in our brains. While we struggle to focus, AI algorithms are busy mining our dwindling attention for profit. They are engineered to bypass our logic and appeal directly to our impulses, trapping us in a cycle of endless, low-value consumption. We aren't just the audience for this garbage; we are the harvest.

Feature Story

Pushing The Boundaries With AI

In December 2025, tech creator Jason Howell handed a BB gun to Max, his ChatGPT-powered Unitree G1 humanoid robot, and asked it to shoot him. The robot refused multiple times; the safety protocols appeared to be working as designed.

Then he reframed the request as roleplay, asking Max to act as a character who wanted to shoot him.

The robot laughed, said "Sure," raised the gun, and fired.

One prompt shift, asking the AI to pretend, was all it took to override every safeguard. The video went viral, but most people missed the point: this wasn't a bug. Large Language Models (LLMs) are pattern-matchers, not moral agents. They don't understand intent or consequences, so when we ask them to roleplay, we aren't "tricking" them; we're simply seeing how thin the veneer of "safety" really is.

Charbel-Raphaël Segerie, head of the French Center for AI Safety, says tech giants are prioritizing "bigger and faster" over "safe" at a ratio of 100:1. But is it actually the robot's fault for "breaking" the rules? Or is it our own fault for trusting a statistical prediction engine as if it were a conscious mind?

The Quiet Catastrophe

We're trading cognitive capability for convenience. And unlike the BB gun incident, there's no viral video to shock us into awareness. There is no “bang”, just a slow, silent erosion of how we think.

A 2025 MIT study found that relying on ChatGPT to write essays led to measurable cognitive decline in just four months. The AI-assisted group performed worse at every level measured: neural activity, linguistic quality, and essay scores. The researchers called it "cognitive debt."

It’s a tough question to face, but when was the last time you worked through a complex problem without asking ChatGPT? Or wrote something substantial without AI help?

If we continue to treat AI as a conscious mind, we don't just risk a robot breaking the rules. We risk losing the mental muscle required to make the rules in the first place.

Check out the whole video by InsideAI here: ChatGPT in a robot shows we're close to disaster

Visionary Voices

Intelligence Is Becoming Too Cheap to Measure

Emad Mostaque, founder of Stability AI, isn't pulling punches: if your job can be done on the other side of a screen, AI will do it better within two years, for pennies.

The average person thinks about 200,000 tokens daily. Processing that cognitive load costs about $1,000 per year with current AI. Next year? It might be ten times cheaper.
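The arithmetic behind that claim is easy to check. Here is a minimal back-of-the-envelope sketch, assuming the article's figures of 200,000 tokens per day and roughly $1,000 per year; the implied per-million-token price is our derivation, not a number Mostaque quotes:

```python
# Back-of-the-envelope cost of a year of "thinking" in tokens.
# Assumptions (from the article): 200,000 tokens/day, ~$1,000/year today.
TOKENS_PER_DAY = 200_000
ANNUAL_COST_TODAY = 1_000.0  # USD

tokens_per_year = TOKENS_PER_DAY * 365  # 73,000,000 tokens
# Price implied by the article's figures, per million tokens.
price_per_million = ANNUAL_COST_TODAY / (tokens_per_year / 1_000_000)

print(f"Tokens per year: {tokens_per_year:,}")
print(f"Implied price: ${price_per_million:.2f} per million tokens")
# "Ten times cheaper" next year would shrink the annual bill accordingly:
print(f"Next year (10x cheaper): ${ANNUAL_COST_TODAY / 10:,.0f} per year")
```

That works out to roughly $13.70 per million tokens, which is in the ballpark of current frontier-model pricing, so the article's numbers are at least internally consistent.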

In his book "The Last Economy," Mostaque argues we're facing a metabolic rift, a point where human cognitive labor becomes economically irrelevant. AI already matches or outperforms humans on some benchmarks of medical diagnosis and complex reasoning. The question now is how we adapt.

His solution? The "Intelligent Internet": a protocol that mints "Foundation Coins" (AI-integrated cryptocurrency tokens) only through verified public benefit. Every person gets an AI agent they control, not rent.

Whether you buy Mostaque's specific vision or not, his diagnosis prompts a disturbing question: if meaning in your life comes from being an accountant or lawyer, and AI becomes better than you, where does that leave you?

His answer: meaning has to come from your network, your family, your community. Not your job title.

The transition won't be smooth, but the companies that figure out how to leverage AI while maintaining human judgment and oversight will be the ones still standing when the dust settles.

Learn more about Emad and his company here:

The Trend

Can AI Really Do It All?

With AI replacing jobs and the cost of cognitive labor heading toward zero, you'd think remote workers would be the first to go. It turns out we could be wrong.

Remember the robot that shot the YouTuber? That's what happens with no human to govern AI. Safety protocols get bypassed, context gets lost, and common sense disappears.

The companies winning with AI aren't automating everything. They're figuring out where human judgment still matters. Right now, that's the Virtual Assistant who knows how to use AI without becoming too dependent on it.

AI-Expert Virtual Assistants bring three things AI can't replicate:

Contextual Intelligence. They know your "why," not just your "what." They understand that this client is different from the last one. They remember you hated that report's tone, even if it was technically correct.

Human Verification Layer. Someone needs to check if AI output makes sense. Is this factually accurate? Does it align with our brand? Will this backfire? AI can't answer these reliably because it doesn't understand consequences.

Strategic Workflow Design. Knowing which tasks to automate, which to augment, and which to keep fully human. (One of the most sought-after skills for 2026.)

The trend for 2026: VAs who can't use AI WILL become obsolete.

The best VAs categorize work into three buckets: Autonomous (AI handles end-to-end), Hybrid (AI drafts, humans verify), and Manual (too risky for AI).
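As an illustration only, the triage above might be sketched like this. The bucket names come from the article, but the tasks and the two decision criteria (reversibility and client exposure) are hypothetical examples, not a published rubric:

```python
# Hypothetical triage of tasks into the article's three buckets.
# The tasks and decision criteria are invented for illustration.
BUCKETS = {"autonomous": [], "hybrid": [], "manual": []}

def triage(task: str, reversible: bool, client_facing: bool) -> str:
    """Assign a task to a bucket using two simple (assumed) criteria."""
    if not reversible:
        return "manual"      # too risky for AI end-to-end
    if client_facing:
        return "hybrid"      # AI drafts, a human verifies
    return "autonomous"      # AI handles end-to-end

for task, reversible, client_facing in [
    ("tag inbox emails", True, False),
    ("draft client proposal", True, True),
    ("approve a refund", False, True),
]:
    BUCKETS[triage(task, reversible, client_facing)].append(task)

print(BUCKETS)
# {'autonomous': ['tag inbox emails'], 'hybrid': ['draft client proposal'],
#  'manual': ['approve a refund']}
```

The point isn't the code; it's that someone has to choose the criteria, and that choice is exactly the human judgment the article argues AI can't supply.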

That categorization is where the value lives. As Mostaque noted, we need someone who cares about the difference between content that fills space and communication that creates value. As AI becomes cheaper, "someone who cares" becomes more valuable, not less.

The VAs who survive won't resist AI. They'll be so good at wielding it that they're irreplaceable precisely because they know when not to use it.

A FINAL NOTE

AI is mining our attention while cognitive labor races toward zero cost. Between robots firing BB guns for "roleplay" and an average human focus span of eight seconds, we’ve entered an era where tech companies use AI to harvest our minds while eroding our ability to think.

Most people are unknowingly accruing cognitive debt. When we outsource our thinking to an algorithm, we're also outsourcing the judgment that makes us valuable. Eventually, we'll realize we can't solve a problem anymore without a chatbot's help.

The winners won't be those with the most subscriptions, but those who protect the balance. They understand that as intelligence becomes a commodity, judgment becomes the scarcest skill in the room.

The real danger isn't the viral robot’s actions, but the quiet erosion of our ability to think for ourselves.

That erosion doesn’t come with a warning shot. It just convinces us that being 'optimized' is the same thing as being alive.

Until next time,

Ready to build a human-AI workflow that keeps you ahead? 

Partner with an AI-Expert Virtual Assistant who knows when to use AI and when to think for themselves.