
I’m a tech nerd, so I’ve obviously taken a special interest in AI. Far too often I find myself sinking countless hours into building my own systems that leverage the power of various LLMs. These AI models help me run my company, manage my journaling habits, and even assist with my Bible study routine. With these tools, digesting information has never been easier or more accessible than it is right now.

But I’d be lying if I said I wasn’t concerned. I’m always wondering whether the naysayers are right. Is AI dangerous? Is it going to take over the world? Is it going to crush careers, starting with guys like me who work in tech?

And if you’re watching YouTube videos, reading blog posts, or scrolling X, you’re going to find claims that run the entire gamut. Some people say AI is totally safe and everyone should adopt it without reservation. Others are reckless with it, handing over their entire system architecture to AI with no regard for the potential pitfalls. Then there are the extremists — those who claim the economy is going to collapse and that humanity’s end has already begun.

But I’m not making a claim in either direction. My aim is to help you make intelligent decisions about who you’re going to listen to. Because it’s an important subject, and the voices are loud on all sides.


We’ve Seen This Before

No matter where you land, I think we can all agree that the world is changing more than it has ever changed before. I’ve said before that prior to AI, the biggest change in the history of technology was the advent of the internet and mobile communications.

I know a little bit about massive change in the way humans do life. I happen to be part of a unique generation. We grew up during a time when the internet did not exist, but we were still in middle school and high school when it became something found in nearly every home. Like the baby boomers, we remember riding our bicycles until the sun went down, or having to talk to our friends’ parents if we wanted to call them on the phone. My friends and I joke about the fact that our teenage sons will never know what it’s like to stomach the risk involved in calling a girl from school — you might have to talk to her father, or her older brother, or even her mom, and they might ask you why in the world you’re calling their daughter! We know what it was like to go out and be completely unreachable. Nobody was mad when they couldn’t reach you, because they assumed you were simply “out.” Jerry Seinfeld captured this perfectly in his hilarious stand-up bit about the “relationship respirator.”


We were also there when all of it changed. We still lived in our parents’ houses when chat rooms became a thing. We watched communication evolve in real time — payphones to flip phones, answering machines to email, appointment TV to streaming — and watched the entire world adapt to it. It was the craziest of times, and it was fun.

For decades, we’ve marveled at the fact that our existence straddles the single biggest shift in human society. In most of our minds, time is marked by it. Oh, that was before the internet. Well, that was after the internet came. Our first mobile devices were no smarter than the phone that hung on the wall in our parents’ kitchens. But man, weren’t we cool with those StarTAC flip phones!

For our generation, it’s been hard to imagine a bigger change to the human experience than those days. But it’s clear to all of us now — we’re no longer approaching the next one. We’re in it. This change is far bigger than the last, and anyone who is being honest will admit that the uncertainty of it is unnerving.


Here’s What I Actually Think

As is the case with all tech development, human beings are both mesmerized and horrified. Some people are telling us the end of the world is upon us — that AI is going to rise up and wipe out humanity within 60 days (give me a break). That’s an extreme example — but the claims out there are undeniably wild.

So let me tell you where I land.

I personally believe the dangers are real. I’m not going to claim that AI will be the demise of human civilization as we know it. But I’m also not going to pretend there’s nothing to be concerned about. The truth is nuanced. It’s not a binary choice between “AI is fine, relax” and “AI is going to kill us all.” The reality lives somewhere in between, and it’s complicated. Anyone who tells you otherwise is either selling you something or hasn’t thought about it hard enough.

Here’s the most important thing I’ve come to believe: the technology itself is morally neutral.

AI is not inherently good or evil. It has no agenda. It’s an extraordinarily powerful tool — arguably the most powerful tool humanity has ever created — but a tool nonetheless. The question is not what is AI going to do? The real question is what are humans going to do with it?

Mo Gawdat, former Chief Business Officer of Google X, put it this way: “Intelligence is a force with no polarity. You apply it for good and you can get a utopia. You apply it for evil and you’ll get a dystopia.” He uses the analogy of raising Superman — Superman is an alien with superpowers, but here’s what makes him Superman instead of a supervillain — the family that adopted him and taught him to protect and serve. The superpowers are the same either way. The difference is who’s doing the raising.

And right now, a lot of different people are raising this thing. Some of them are brilliant and well-intentioned, but some of them are reckless. Many are driven by profit and power, and that’s what makes this moment so critical — it’s exactly why it matters who you listen to when forming your beliefs about where this is all headed.

Because if the truth is nuanced, you need good sources to navigate it.


Here’s How to Decide Who to Listen To

Here are the guidelines I use when deciding who my dependable sources are. I hope they can help you, too.

1. How recent are their claims?

I recently listened to a fascinating Joe Rogan clip in which his guests explained the dangerous capabilities of AI — how jailbreaking works, the bioweapon risks, and more. To be fair, they weren’t preaching that AI is definitely going to be humanity’s demise. But when you start talking about bioweapon risk and chemical warfare, the feeling the audience walks away with is pretty grim.


Then I noticed that these claims were made nearly three years ago — before Claude 3 even existed. At the time of this writing, we’re at Claude Opus 4.6, the most powerful LLM yet. The specific examples they discussed — the TaskRabbit deception story, the “grandma” jailbreak, the Aum Shinrikyo reference — were already well-known talking points when the episode aired, and many of those specific vulnerabilities have since been patched.

Three years is an absolute eternity in the scope of AI. At this point, three months is a long time. You should take that into consideration when evaluating anyone’s claims about this technology. The landscape changes so fast that even well-informed commentary has a shelf life.

2. Do they have real experience, or are they presenting other people’s findings?

There’s a meaningful difference between someone who built AI systems for twelve years inside Google and someone who’s citing a research lab’s findings on a podcast. Both can be valuable. But one of them watched the technology evolve firsthand. They saw the unexpected behaviors. They have intuition that comes from years of direct contact with the thing they’re talking about. The other is a communicator — packaging other people’s work for a broader audience.

Communicators serve an important role, but you should weigh their claims differently than you weigh a builder’s. Ask yourself: has this person actually built something with the technology they’re commenting on? Or are they reporting on what someone else built?

3. Do they give you frameworks, or just examples?

A strong source doesn’t just show you scary demos — they give you a way to think about the problem. A mental model you can apply to situations they didn’t specifically address.

Daniel Miessler is a great example. He’s a security professional with over 25 years of experience, a former security leader at Apple and Robinhood, and he builds AI systems daily. When he argues that AI will replace knowledge work, he doesn’t just say “look how good ChatGPT is.” He lays out what he calls a capability stack — Knowledge, Understanding, Intelligence, Creativity — and shows that AI already matches or exceeds humans on the first three layers. That’s a framework. You can take it and apply it to your own job and ask yourself: which layer am I operating on most of the time?

Compare that to a source that shows you a jailbreak demo and moves on. Once that specific jailbreak is patched, you’ve got nothing left. A good framework keeps working long after the specific examples expire.

4. Do they offer solutions, or just point out the scary stuff?

It’s easy to scare people about AI. The technology is genuinely powerful and the implications are legitimately serious. But pointing at the fire without picking up a hose is not helpful. Pay attention to whether a source is invested in what to do about it — not just in getting your attention by telling you how bad things could get.

The best voices in this space acknowledge the risks and then pivot to action. What should individuals do? What should companies do? What should policymakers consider? If someone spends all their time describing the problem and none of their time on solutions, that tells you something about their intent.

5. Are they willing to follow the logic to uncomfortable places?

A lot of AI commentators will walk you right up to an uncomfortable conclusion and then pull back — because the honest answer would alienate part of their audience.

Miessler is a good example here, too. He flat-out says: “The ideal number of human employees for a company is zero.” That’s not a comfortable thing to tell an audience full of employees. But he says it because the logic supports it, and then he spends an hour walking through the evidence, acknowledging the human cost, and offering an optimistic vision for what comes after. He could soften that claim to keep everyone comfortable. He doesn’t.

When someone follows their reasoning to a conclusion that might cost them followers and says it anyway, that’s a strong signal you’re getting honesty rather than performance.

6. What are they incentivized to tell you?

This is a big one. Ask yourself: does this person’s career, business, or platform benefit from scaring you? Or from hyping you? Fear drives clicks. Hype sells courses. Both are powerful incentives that can warp even well-intentioned commentary.

Now, this doesn’t mean someone who sells AI courses is automatically untrustworthy. But it does mean you should apply extra scrutiny. Is their business model dependent on you believing a specific narrative? Or does their work stand on its own regardless?

Some of the most credible voices in this space give away their frameworks and tools for free — they open-source their work. That doesn’t guarantee honesty, but it significantly reduces the “selling you a narrative” incentive.

7. Do they acknowledge what they don’t know?

The most dangerous sources are the ones who are 100% certain about everything. AI is genuinely uncertain territory. The technology is evolving faster than anyone can track with total confidence. Anyone who speaks with absolute certainty in either direction — “it’s totally fine, relax” or “we’re all going to die” — is either selling you something or hasn’t thought about it deeply enough.


The credible ones hold real debates with people who disagree. They say things like “here’s what I think, and here’s where I could be wrong.” That intellectual humility is a sign of strength, not weakness. It means they’ve actually wrestled with the complexity instead of just picking a side.

8. How big is their platform versus how deep is their expertise?

A million YouTube subscribers does not equal a million IQ points. In the AI space especially, some of the deepest thinkers have modest audiences, and some of the loudest voices have shallow understanding. Don’t confuse reach with depth.

This isn’t elitism — large audiences can absolutely coexist with genuine expertise. But when you’re evaluating a claim about the future of AI, ask yourself: is this person credible because they know things, or because they’re good at getting attention? Those aren’t always the same.

One More Thing — And This One’s About Me

I want to practice what I preach, so let me apply this framework to myself.

I build with AI every single day. I use it to run my business, manage my personal systems, digest information, and enhance nearly every aspect of my professional life. That means I have a bias. I’m inclined to see AI as useful because it is useful to me. I have skin in the game on the optimistic side.

You should factor that in when you weigh anything I say about this technology. I’m not a researcher. I’m not a policymaker. I’m a tech professional who builds with this stuff and thinks carefully about where it’s all going. I’ve tried to give you honest guidelines for evaluating sources — and that includes evaluating me.


The world is changing faster than at any point in human history. The voices telling you what to think about it are louder and more numerous than ever. My hope is that these guidelines give you something practical — a set of filters you can apply to the next YouTube video, blog post, or X thread that crosses your feed.

The truth, as usual, lives somewhere in between. Be thoughtful about who helps you find it.

Author


Joe Cox runs an IT and Cybersecurity company that helps businesses strengthen and adapt their security postures while keeping system uptime optimal. Off the clock he’s a pianist, a Bible student, a husband, and a dad. He writes to think out loud — mostly about technology, faith, and the places the two keep bumping into each other.