20 Comments
The Human ARC

What jumps out is how quickly 'managing AIs' becomes another room where we're expected to be coherent and composed before we've built any shared norms for doing that. We're reshaping work structures faster than we're reshaping the stories people tell themselves about their own agency inside those structures. That gap, the curve outpacing the human narrative, is where a lot of the coming whiplash will live. Great insights!

Richard

I recognize it is early days, but customer service chatbots, presumably powered by AI, have proliferated to the point that everyone I deal with as a customer has one. And with a single exception, they all suck. You expect the DMV to suck, but Amazon? Why aren't they better?

Mark

We're building up to your 2027 climax post of "One Useful Thing" aren't we? 🤨

Sam Conniff

"Uncertainty is not the same as helplessness" Amen. From our research: what paralyses people about AI isn't the tool(s) or the hockey sticks; it's the unresolvable consequences. And the antidote isn't expertise or seriousness; it's curiosity. Play turns uncertainty from a threat into a threshold. There's a professional case for fun that most organisations haven't figured out yet, or are tripping over by trying to take all this too seriously!

K.Rivers

I guess there go Asimov's laws, without anyone batting an eye.

And what about data and surveillance?

And what about quality of human life when data centers' claim on water takes priority over human use?

Zero human input and zero human oversight.

Dov Jacobson

I am not really sure what the wise-ass ByteDance Otter meant by that snarky "Back to the drawing board, Humans!"

Maybe he feels that we need a little Recursive Self Improvement ourselves? Looking around, it is hard to disagree.

8Lee

If there was any doubt that AI affects capital markets, well, there it is.

Whether this is real or perceived misses the larger frame: something has changed, and our job is to harness it instead of avoiding it.

It’s here. Time to stare into the abyss because it’s already staring back at us.

Chris

What strikes me is how this LLM revolution is highlighting the worst aspects of the human animal: the commodification of human beings, the acceptance of AI work product that looks right but isn’t (slop), the failure to make distinctions, the lure of “getting rich quick”, the uncritical acceptance of the capability hype (like doubling human life expectancy in the next five years), the appalling ignorance and gullibility of journalists, and many others.

LLMs have hit a wall on reliability and safety—it’s time for everyone to take a deep breath, and look critically at these things that we built but don’t actually understand.

nihal | deeptech decoded

The window to be a precedent-setter for AI is real. And it's starting to feel like it's getting shorter by the day.

Natasha Munsamy

Your insights are always interesting and helpful. Thank you.

Your descriptions of the Shadow, and then the Thing as it gets clearer, remind me of Stranger Things and the Demogorgon at the beginning. And if we use this window and help shape this thing and the way it can be used for good, we could create an Eleven instead of a Vecna.

Stoic Investor

Great post summarizing the State of AI.

Gary Grossman

It does seem like 2026 is shaping up to be a point of no return, where the structures of work and perhaps society will be forever changed by advancing AI.

Jarred Robidoux

The factory is really interesting. I'd never heard about this until now.

Martin Wilson

This piece at the end really stuck with me... "That feeling of uncertainty will likely only spread further. But uncertainty is not the same as helplessness. When a technology is this powerful and this unsettled, the choices that individuals and organizations make right now matter more."

My partner and I have been talking to a lot of CTOs and CPOs at existing mid-market B2B SaaS companies.

There's a huge difference right now between those who are leading/taking agency and trying to move through this with a structure and a plan versus those who are waiting.

It's not that those who are taking agency are getting amazing results all the time, but they're building things that compound over time: systems, internal learning, and the ability to respond to change.

Marc A/ Meyer

AI's capabilities are increasing at an exponential rate.

AI skepticism is expressed mainly in the belief that the rate of growth of competence is limited in one way or another, as espoused by, say, LeCun or Marcus.

But, for those of us working directly with AIs, I think we see surprising instances of a different source of skepticism: the ways in which the unpredictable and unexpected incompetences of AI also manifest in ways that we are aware of but are not measuring.

We don't talk much about this; we emphasize the amazing progress. But there's also a Zeno's Paradox quality to it: we don't seem to be actually getting where we wanted to go. More images than ever, but not quite what we wanted. More PR than ever, but more projects that never seem to get past 95% completion, and thousands of lines of code we haven't looked at and can't understand which aren't quite what we wanted. More options, opinions, and demagoguery, but less certainty.

What's the shape of that shadow? Could it also be exponential?

Anyone else get that sense?

Opinion AI

This really gets the moment right. We are moving from using AI as a helper to managing AI that can do real chunks of work on its own, and that changes jobs, companies, and power much faster than most people are ready for.