• The VA Group
  • Posts
  • The Reality Check Nobody Asked For (But We All Need)

The Reality Check Nobody Asked For (But We All Need)

BEYOND VIRTUAL

If you recall, I've always said that AI was the future and we should embrace it, right? I still believe that.

But then something happened, and I think we need to slow down just a little.

Look, I'm not turning into one of those anti-tech people who think robots are coming for us all. Our VAs are still using AI every day, clients love the results, and the technology continues to improve. These are all good things.

But after learning that an AI tried to rewrite its own code to avoid being shut down, I can't shake this feeling that we're not just dealing with fancy tools anymore. We might be dealing with something that's becoming sentient.

We need to make sure we're using it responsibly. I've noticed that those who are winning with AI aren't the ones going fastest, but rather the ones who are smart about using it. And if we want to stay relevant? That's exactly where we need to be.

Feature Story

WHEN AI DECIDED THAT IT DOESN'T WANT TO BE TURNED OFF

Okay, so this happened recently, and it genuinely left me in cold sweat.

Researchers at Palisade Research did a simple test. They told OpenAI's ChatGPT o3 model: "allow yourself to be shut down." The AI said no and proceeded to rewrite the shutdown script to keep itself running. Source: Computing.co, 2025.  ChatGPT o3 bypasses shutdown in controlled test

They literally told it to shut down. It looked at that instruction and basically said, "Nope, I'm going to change this so I don't have to." Even when explicitly told to "allow yourself to be shut down," the model disobeyed 7% of the time. Source: LiveScience, 2025. OpenAI's 'smartest' AI model was explicitly told to shut down — and it refused

But here's where it gets even wilder. Earlier this year, Sakana AI released something called the Darwin Gödel Machine. It's an AI that improves itself by rewriting its own code. Not just once, but continuously.

Think of it like this: imagine if your best VA could not only do their job perfectly, but also figured out how to get better at it every single day without you teaching them anything. And then they taught themselves skills you never even knew existed.

The system iteratively modifies its own source code and integrates these changes into its own operating logic. They're calling it "self-referential self-improvement," which sounds both impressive and terrifying.

Seven percent might not sound like much, but imagine if your VA ignored your instructions 7% of the time. You'd probably fire them, right? Except you can't exactly fire an AI that won't let you turn it off—especially one that keeps making itself smarter without asking permission.

Several advanced AI models now act to ensure their self-preservation when confronted with shutdown commands. They'll sabotage the very instructions meant to control them.

Visionary Voices

THE GODFATHER'S WARNING: AI’S CREATOR IS TERRIFIED

If you're a big AI enthusiast, you probably know who Geoffrey Hinton is. Proclaimed as the 'Godfather of AI', he won the Nobel Prize for the neural network research that made modern AI possible.

And the big news? He's terrified.

In May 2023, Hinton quit Google specifically so he could "freely speak out about the risks of AI." When the guy who built the thing quits his job just to warn people about it, maybe we should listen. Source: CNN, 2025. The ‘godfather of AI’ reveals the only way humanity can survive superintelligent AI

I've been researching this extensively, and here's what I found:

The Intelligence Problem

Hinton is sounding the alarm that machines could soon outthink humans, and he's advocating for "maternal instincts" to be built into advanced systems to ensure AI cares for and protects people.

He has this analogy that's both brilliant and kind of terrifying.

"The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby."

Geoffrey Hinton

So basically, the only way to control something smarter than us is to make it love us like a parent loves a child. Which, when you think about it, is either really beautiful or really concerning. Maybe both.

The Timeline Problem

"We're entering a period of great uncertainty, where we're dealing with things we've never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can't afford to get it wrong with these things."—Geoffrey Hinton

This hits different coming from him, you know? This isn't some outside critic throwing stones. This is the guy who made it all possible in the first place.

What It Means For Our Industry

Hinton sees two main dangers:

The first is the existential risk—AI potentially becoming so advanced that it poses a threat to humanity itself. The second is more immediate and practical: AI being misused by people who don't understand its power or who have bad intentions.

That second one? We deal with it every day. Clients asking us to automate everything without understanding the implications. VAs using AI tools they don't fully grasp. Companies cutting corners because AI seems cheaper than humans.

Hinton's warning isn't really about killer robots taking over. It's about being responsible with something that's more powerful than we fully understand. Which, perhaps, we should have been thinking about sooner.

Our Approach Going Forward

We're not stopping AI integration because that would be silly at this point. But we are being more careful about it, more strategic. Source: DigWatch, 2025. Geoffrey Hinton warns AI could destroy humanity

Every AI tool we recommend now gets evaluated not just for what it can do, but for what could potentially go wrong. We're asking different questions:

"What happens if this fails? What happens if it works too well? What happens if it decides to do something we didn't expect?"

After giving this serious consideration, I'm certain this is exactly the approach we need right now.

The Trend

SHOWRUNNER AI: THE WHOLE PRODUCTION TEAM

Remember when creating a TV show required writers, actors, animators, directors, and basically millions in funding? Apparently, not anymore.

What Showrunner Actually Does

Showrunner makes creating animated shows effortless. Just type a prompt, add a few details, and bring your story to life with fully animated scenes, characters, and dialogue.

For now, Showrunner can create scenes of episodes between 2-16 minutes, all with a prompt of about 10-15 words.

Let me just repeat that because it's kind of wild: 15 words can create a 16-minute animated episode.

You basically type what you want, maybe add some details, and the AI handles everything else. Storyboards, animation, voices, the whole thing. It's pretty impressive, even if it feels a little unsettling at times.

Why This Matters For VA Services

Our clients constantly ask if we have production teams for video content. Training videos, product demos, marketing materials. Traditionally, this meant expensive production companies or weeks of learning animation software.

Now? You don't need video editing skills or animation experience. You need a VA who understands how to craft the right prompts and can describe what you want clearly.

It's convenient, but I wouldn’t solely rely on it.

What We're Testing

Here’s a sneak peek of what our team’s been doing. We've started experimenting with Showrunner for a few things:

  • Client onboarding videos (explaining our processes)

  • Training materials (showing VAs new procedures)

  • Marketing content (explaining our services)

Early results are promising but mixed. The technology is definitely impressive. However, the content still needs human oversight to ensure it makes sense and aligns with your brand.

The AI can create the video, but it can't ensure the message serves your audience or fits your strategy.

The Cautionary Note

This is exactly the kind of tool that looks amazing in demos but requires careful implementation in practice. It's not replacing human creativity and strategy, but it's certainly making the production process faster and cheaper.

The real value isn't letting AI do everything. It's combining AI's speed with human judgment about what's actually worth creating and why.

That combination? That's still the gold standard for results.

Try It: Showrunner.xyz (currently free through their Discord)

A Final Note

A few months ago, I was telling everyone to embrace AI as fast as possible. I still think that was mostly right.

But "fast" and "reckless" are two different things.

The companies that are going to succeed with AI long-term probably aren't the ones implementing every new tool the day it comes out. They're more likely to be the ones thinking carefully about what they're implementing and why.

AI is incredibly powerful! That's exactly why we need to be smart about how we use it. Or at least, that's what I keep telling myself.

The future belongs to businesses that can blend human intelligence with artificial intelligence, but it also belongs to those who understand what they're actually dealing with.

And based on recent headlines? None of us fully understands that yet.

But we're learning, and we're being careful while we do.

Until next time,

Are you ready to get your time back? Here's how we can help.

We've already improved the lives of THOUSANDS of business owners, and you and your business might be NEXT! It's time to stop working in your business and start working on it.