19 Comments
Dov Jacobson

As you point out, the shape of the jagged frontier results from decisions made at the labs (e.g., attack a reverse salient). So it might be nice if they aimed at co-intelligence rather than uber-intelligence and delivered a shape that complements our human boundary, rather than attempts to circumscribe it.

Kenny Easwaran

Exactly! Don’t find the things the AI is bad at and make it better at them - find the things that *humans* are bad at and make it better at *them*.

steveylang

Is there a scenario where we don’t reach AGI anytime soon, generative AI never ‘replaces’ humans and continues to ‘just’ be a useful improving tool, and the US financial system doesn’t implode? Google seems like the clear winner here, models are also getting more efficient over time and they can play the long game until non-AGI ROI reaches breakeven. Meanwhile OpenAI’s IPO outlook is starting to seem a bit vague. Boomers and doomers will all be a bit disappointed, the rest of us will continue living.

Sharyn Outtrim

Ethan, Interesting! Here are my 2 takeaways: The "jaggedness" means you can't fully replace human workers, but you can dramatically accelerate certain parts of their work. The "jagged frontier" also means your job isn't going away, it's transforming into managing AI across those edges where humans remain essential.

I'm a non-tech co-founder of an AI company that focuses on the human side and enjoy reading your articles. It keeps me in the know, but I had to create an AI assistant that turns high tech articles into human speak for muggles like me - https://pria.praxislxp.com/views/history/6946e9da0e1af8fb14030dca

Terry Cook

My takeaway: AI is a genius that is unable to make a grilled cheese sandwich.

Drew

Ethan, fellow college professor here, currently teaching intro CS courses and now vibe coding.

Question: have you found AI software that can simulate step-by-step math or technical instruction? Basically, something that can create instructional videos similar to Sal Khan's videos?

Much appreciated

Jenny Boavista

I don’t know anything, I’m just a girl, but maybe you can combine something like Photomath with HeyGen. Seems like there are many possibilities for that these days.

Shulagna Dasgupta

Loved reading this Substack. Thanks, Ethan. What advice do you have for organizations on helping their employees calibrate when to trust AI and when not to? I'm seeing people oscillate between over-reliance and blind trust on one side and extreme aversion on the other after an LLM has failed them. If jaggedness is real and here for a while, just wondering if there is a systemic way for us to bring human trust and adoption along in a more nuanced way. BTW, Nano Banana Pro is definitely my new go-to for images.

Neural Foundry

The reverse salient framing is useful here. I've noticed the same pattern with how quickly math went from being this obvious AI weak point to basically solved once labs focused on it. The otter test evolution is wild, though: seeing that 2021 image compared to Nano Banana Pro output shows the improvement isn't just incremental. That Cochrane review example is interesting because it highlights how even bottlenecks that look tiny (like 1% edge cases) can stop full automation, which probably saves a lot of jobs, at least for now.

Ezra Brand

"how quickly math went from being this obvious AI weak point to basically solved once labs focused on it."

The OP gives this example as well. But I'm not sure that it's a great example. Math is particularly amenable to automation, due to clear constraints. Even more so than coding, which AI has revolutionized.

As an aside, I would love to see more discussion from the OP on coding and coding tools; coding has seen the biggest and most direct impact of the AI revolution on a single industry so far.

Mark Russinovich

Great post as always, Ethan. I expect clever techniques used to train LLMs and agents on top of them to try to fill out the jaggedness, but I believe that, by definition, transformer-based LLMs can only achieve jagged intelligence. Memory, planning and symbolic reasoning require a different approach, whether augmenting transformers or a switch to a new architecture.

I'm surprised by your statement about hallucinations. All LLMs have significant hallucination rates (>1% and often much higher) on even simple tasks like summarization. And hallucination in search-grounded chatbots (where hallucination is defined as producing incorrect information) is endemic. A simple query, representative of many prompts seeking information, demonstrates a 100% rate of error, whether in the generation or the verification, and often with multiple egregious mistakes (event in the past, incorrect date, wrong venue):

Prompt the chatbot with:

list 10 concerts and other musical events in <city> area in <month> 2026. List artist, date, time and venue. Do not format as a table.

Then take the results and include them in this prompt (you can give it right back to the chatbot, create a new conversation, or try it in a different chatbot; the results will almost always be different):

Review for accuracy. List only mistakes: <content from above>
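
For anyone who wants to automate that two-step check, here is a minimal sketch using the OpenAI Python SDK. The model name is a placeholder, the city and month still need substituting, whether the responses are search-grounded depends on the model you point it at, and the review step is itself an LLM call, so it can miss or invent mistakes:

```python
# Minimal sketch of the generate-then-verify hallucination check described above.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"   # placeholder model name
CITY = "<city>"    # substitute a real city
MONTH = "<month>"  # substitute a real month

def ask(prompt: str) -> str:
    """Send a single-turn prompt in a fresh conversation and return the reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: generate the event list (the step most likely to hallucinate).
events = ask(
    f"list 10 concerts and other musical events in {CITY} area in {MONTH} 2026. "
    "List artist, date, time and venue. Do not format as a table."
)

# Step 2: ask a fresh conversation to review the list and report only the mistakes.
review = ask(f"Review for accuracy. List only mistakes: {events}")

print("=== Generated events ===\n" + events)
print("=== Reported mistakes ===\n" + review)
```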

Scott C. Rowe

If we accept the jagged frontier model, then as we move to the right, we will enter territory where the AI is functioning at a level incomprehensible to humans, and will make errors that humans will never know about until some disaster results.

Scott C. Rowe

AI has not claimed too many jobs outside of AI…

Michael Dufresne

I'm still beating my liberal arts professor drum, but I'm increasingly worried that the sonic booms from each of the AI developers' reverse-salient resolutions will drown out my efforts.

Students know:

> AI can now research with accurate and effective source interpretation, citation, etc.

> It is getting better at sounding human and looking human (Nano Banana Pro sounds great, but the monthly cost perpetuates the digital divide for both students and educators).

> It can leap students from not having a topic idea to having a detailed outline, a full draft, peer review comments to share with classmates who respond with their own AI-generated feedback, revision notes, final projects, and even reflections, all with a few well-written prompts.

Reading Ethan's post, it appears critical thinking remains the only aspect of human ability not (yet) behind the frontier. I asked Gemini 3 (free version) to prepare slides guiding college profs who assign writing tasks on strategies for promoting ethical writing and critical thinking. Its recommendation was to let the AI do the writing and have the students annotate as they remove bias, add nuance, and improve scholarly citations, then reflect. Isn't AI already getting better at all of this?

I have spent hours in synchronous and asynchronous efforts coaching students on AI literacy and ethical behavior as well as the essentiality of struggling with hard things like empathizing with audiences, evaluating perspectives including their own, writing clearly, and caring enough to manage time and invest effort. A rare few students step up to the plate ready and able to knock it out of the park. [Apologies for switching metaphors.] Many swing and catch air, then quietly switch to the corked bat. Many others just call in the pinch hitter and touch a virtual home base, submitting "perfect" work without working up a sweat or glimmer of brain activity. When called out, they rant about having spent hours researching and writing without AI, even though their team and the entire stadium sees their bluff. Sitting individual students on the bench for post-inning analysis yields a few teachable moments at tremendous cost to the rest of the team's coaching time.

Eventually, each begins to wonder: If AI is so good and snapping forward daily, why should a stressed, overworked, pragmatic student struggle with merely human skills to build those same skills? And even where AI enhancement is encouraged, why not just let the machine take over?

My university is piloting several AI-powered essay grading tools. Yep, AI grading AI. I guess to make modern "teaching" sustainable, we'll all become part of the machine, since, as Dov Jacobson writes, it appears AI execs aren't looking at co-intelligence as much as the long-term break-even and win (steveylang).

Gotta grade those last sets of (allegedly) student work. Then I'll grab a few hours with my family over the holidays. Come January 5, I'll get back into the game, beating my drum for another session, loving and hating AI, and leaning into my optimistic conviction that humans will ultimately do the right thing.

Joe Essid

Your post gets at something I've been considering since 2024 or so: the reverse salients that will slow down progress with AI. FWIW, I wrote about electrification and 1890s-1930s technological enthusiasm in my doctoral dissertation, where I encountered Hughes' concept.

You don't mention at all (and that's a big gap) the biggest reverse salients: energy and water. The cluster of 30 data centers being built in Indiana will use twice the electricity of the entire Atlanta metro area. One center planned (and being fought locally in court) would need 5 million gallons of water every single day to operate.

We might escape these problems by orbiting data centers. We might develop fusion plants but they face their own reverse salients.

On the other hand, I'm mindful of what a roboticist now with Google told me when he was on our faculty: AGI cannot happen without a new form of computing. Silicon-based chips, using a brute-force method, might approximate human intelligence but never ever give us something as powerful as a human mind for many tasks.

I'm thinking these bottlenecks/salients are enough to make the industry go bust. They are chasing AGI instead of small useful AI tools such as the ones I use with my own students.

Some firms will survive the bubble's bursting, but it will rival the crash of 2008.

Hilary

I disagree about Nano Banana Pro. Unlike most AI models, it actually has a memorable name. And I think that that is a lot better than Google Gemini Image Generator 3 Pro Flash or whatever Google would otherwise be inclined to name it.

David Armano

I love the simplicity of that first visual and think it’s spot on! Makes so much sense. Ironically, I paid homage to Alanis Morissette’s "Jagged Little Pill" when it comes to AI Adoption in the Enterprise. A complementary take on AI Jaggedness!

https://davidarmano.substack.com/p/trust-leadership-and-ai-adoptions

Hassan

human ability is fixed? WRONG

Thanks

Elisabeth Andrews

I wonder if the “reverse salient” can sometimes be decision-maker trust/comprehension. I’m thinking of how regulatory toxicology keeps resisting the far more sophisticated research methods employed by pharmaceutical developers. Both groups are asking the same questions about chemical effects on biological systems, but the regulator still asks “does it kill a rat?” while drug discovery utilizes AI to help examine molecular, cellular, and systemic responses across evolutionarily diverse test species. Possibly, getting to a better population-level comprehension of how AI analysis works would help design and deployment. I appreciate how you’re moving that forward, @Ethan Mollick!
