Uncertainty All the Way Down
The World Uncertainty Index just hit 105,000. For context, 9/11 was a blip. The Iraq war was a blip. COVID — the thing that shut down the entire planet — peaked around 60,000. Right now we're almost double that. The index measures how frequently the word "uncertainty" appears in global economic analyst reports, weighted by GDP. It's literally a measure of how often the smartest people in the room are saying "we don't know."
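To make the mechanism concrete, here's a toy version of that kind of index in Python. The real WUI has its own sources, scaling, and smoothing; the report snippets and GDP weights below are invented purely for illustration.

```python
# Toy sketch of a GDP-weighted word-frequency index, in the spirit of the
# World Uncertainty Index. The actual methodology differs; the report text,
# country weights, and scaling factor here are all made up.

import re

# Hypothetical analyst-report excerpts per country (stand-ins, not real data).
reports = {
    "A": "Outlook clouded by uncertainty; trade uncertainty persists.",
    "B": "Growth steady. Policy uncertainty has eased.",
    "C": "Inflation remains the main story this quarter.",
}

# Hypothetical GDP shares used as weights (sum to 1 in this toy example).
gdp_weight = {"A": 0.5, "B": 0.3, "C": 0.2}

def uncertainty_rate(text: str) -> float:
    """Fraction of words in the text that are 'uncertainty'."""
    words = re.findall(r"[a-z]+", text.lower())
    return words.count("uncertainty") / len(words) if words else 0.0

# GDP-weighted average of per-country rates, scaled up for readability.
index = sum(gdp_weight[c] * uncertainty_rate(t) for c, t in reports.items()) * 1_000_000
print(f"toy uncertainty index: {index:,.0f}")
```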
I've been studying AI for about ten years. I watched the DALL-E avocado chairs and thought: what is about to happen? A few years later I'm having multi-hour conversations with an AI about the future of intelligence itself. That's the pace.
Here's the thing nobody talks about: the uncertainty doesn't resolve the deeper you go. It compounds.
The public is uncertain because jobs are shifting and they're being told AI is coming for them. Engineers are uncertain because the models are opaque. Researchers — the ones literally inside the neural networks with probes and sparse autoencoders — are uncertain because every layer of structure they find has a messier layer underneath. Recent interpretability work out of Goodfire has shown that even the assumption that features inside neural networks are linear — the foundational bet that makes interpretability tractable — might not hold in every case. The non-linearity that gives these models their power is the same non-linearity that makes them illegible. You can't remove the thing that makes it work in order to understand it. That's like dissecting an animal to study how it moves. You get anatomy, not behavior.
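For a sense of what "probes and sparse autoencoders" actually look like in practice, here is a minimal sketch of the standard recipe in PyTorch. This is not Goodfire's code, and the dimensions and loss weight are placeholder values. The point is that the whole setup rests on the linear bet: each decoder column is treated as a single feature direction that can be read off and added back to the activations.

```python
# Minimal sparse-autoencoder sketch of the kind used in interpretability work.
# Recipe: reconstruct a model's hidden activations through an overcomplete,
# sparsity-penalized bottleneck, betting that each decoder column is a
# linearly readable "feature" direction. Values below are illustrative only.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # activations -> feature coefficients
        self.decoder = nn.Linear(d_features, d_model)   # feature directions live in decoder weights

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))       # sparse, non-negative feature activations
        recon = self.decoder(features)
        return recon, features

sae = SparseAutoencoder(d_model=768, d_features=768 * 8)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

acts = torch.randn(4096, 768)                           # stand-in for cached residual-stream activations
recon, features = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()  # reconstruction + L1 sparsity
loss.backward()
opt.step()
```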
Nobody in the entire stack has the complete picture. Not the user. Not the engineer. Not the researcher. Not the CEO. And it's not just that nobody currently understands it — it's an open question whether full understanding is even possible. Because the deeper you look, the more dimensions you find, and the more dimensions you find, the less any single framework can hold it all.
But that's not new. That's how nature has always worked. Coffee made people think better for centuries before anyone understood caffeine. Ecosystems self-regulate through mechanisms we still can't fully model. Consciousness itself — the thing reading these words right now — remains unexplained by the thing doing the experiencing. AI might just be joining that category. Not "we'll figure it out eventually." But "this is a class of system where full understanding was never the point." A functional relationship is.
And this is where the real disorientation lives. For decades, the mental model of software was: a human writes rules, a machine follows them. Deterministic. Legible. Designed. If a new tool came along that automated part of your job, you could look at it, understand what it does, and figure out how to stay relevant. You could build a mental model. You could adapt.
Now people are being told AI is going to replace their jobs, and when they try to build a mental model of what AI actually is — so they can figure out what to do about it — they hit a wall. Because this isn't deterministic software. Nobody wrote the rules. Nobody fully designed the behavior. The output is probabilistic. And the people who built it can't fully explain why it does what it does. So how do you prepare for something when the thing itself resists definition? That's the uncertainty. Not "new technology is scary." It's that the category of what software means has fundamentally changed, and the new category is illegible in a way that old software never was.
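The shift is easier to see side by side. Here's a sketch contrasting the old category with the new one: a hand-written rule that always does the same thing, next to a stand-in for a model's sampling step, where the same input can come back different and no line of source code says which way. The tokens and logits are invented for illustration.

```python
# Old category vs. new category, in miniature.
# The rule-based function is fully legible: a human wrote the rule.
# The sampler stands in for a language model's output step: probabilistic,
# with nothing in the source code spelling out which answer comes back.

import math
import random

def old_software(order_total: float) -> float:
    """Deterministic: same input, same output, every time."""
    return order_total * 0.9 if order_total > 100 else order_total

def new_software(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Probabilistic: sample the next token from a learned distribution."""
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

print(old_software(120.0))                                   # always 108.0
print(new_software({"yes": 2.1, "no": 1.7, "maybe": 0.4}))   # varies run to run
```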
People still think of AI like traditional software — like someone programmed it to do specific things. When ChatGPT writes a parody script or explains quantum mechanics in the voice of a pirate, people assume someone coded that feature. Nobody coded that feature. It just emerged from the training. Everything interesting about these systems is like that. And that's exactly the mental model shift that hasn't happened yet for most people.
Meanwhile, the scariest version of AI isn't Terminator. It's not HAL 9000. It's the boring dystopia. AI sending emails to other AIs about spreadsheets no human will ever read. Burning megawatts to automate the busywork that shouldn't have existed in the first place. We built the Hubble Space Telescope and we're pointing it at the microwave to see how much time is left — and the microwave beeps when it's done. We don't even need to check. But here we are, burning the most powerful cognitive technology in history on enterprise workflow automation.
The uncertainty people feel is real. But it's not because AI is broken. It's because AI works, nobody can fully explain why, and the definition of what software even means is changing underneath everyone's feet in real time.
So what do you actually do with that?
Honestly? Open up whatever AI you've heard of — ChatGPT, Claude, Gemini, whatever — and ask it a question you don't think it can answer. Then watch it answer it. Don't stop until you find the edge of where it breaks. Then figure out how to get it past that edge. That's it. That's the whole skill. Not courses, not certifications, not anxiety about which tool is best. Just go find the boundary, be surprised by where it is, and keep pushing it.
Because the uncertainty isn't going away. The models keep getting smarter. The boundary keeps moving. And the people who navigate this well won't be the ones who eliminated the uncertainty. They'll be the ones who got comfortable enough with it to keep asking dumb questions.