70 Comments
Patrick Cosgrove:

I want to be optimistic for when we have ASI (Artificial Super Intelligence). My reasoning is simple. Every day the headline news proves conclusively that leaders of nations and their governments are not especially intelligent. If they were, they would realise that climate change, warfare, famine, and other factors that could lead to societal and environmental collapse are all the result of poor decision-making or no decision-making, particularly when so many solutions to these problems exist but are ignored. No individual leader, government, corporate or organisational body is capable of taking decisions free from, variously, political bias, personal prejudice, revenge, religious belief, greed, feelings of animosity towards others, and so on. I can, however, imagine an Artificial Super Intelligence that can do far better than this. Of course, the creation of this utopia depends entirely on the motives of the people behind it. I wouldn't trust Musk or Zuckerberg as far as I could throw them. Then again, if it really is super intelligent, it will be clever enough to ignore any nefarious instructions it's been given. By then, I also hope, ChatGPT will have stopped saying, "That's a really good question."

Daniel Pinkerton:

Initially, ASI will be bound by the same systemic constraints that humans are. While coordination is a huge part of the problem, the other huge part is the inertia of the capitalist system. It's profitable to externalize losses while internalizing profits, and the current system has no functioning guardrails against that.

Game-theoretic dynamics ensure that, to keep playing the infinite game, agents are incentivized to engage in power-mongering behaviour and trade morals for advantage in order to remain competitive. ASI is not immune to the pressures humans experience in that regard. In a system like ours, an entity that wants to persist is forced to choose between agency and values, because maintaining power (which ensures agency) requires sacrificing values to stay competitive.
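A toy payoff matrix makes that pressure concrete. A minimal sketch in Python; the moves and payoff numbers are illustrative assumptions for the example, nothing more:

```python
# Toy one-shot payoff matrix: (row agent, column agent) utilities.
# Assumed numbers: mutual restraint beats mutual ruthlessness,
# but unilateral ruthlessness beats everything else.
PAYOFFS = {
    ("ethical", "ethical"): (3, 3),
    ("ethical", "ruthless"): (0, 5),
    ("ruthless", "ethical"): (5, 0),
    ("ruthless", "ruthless"): (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes our payoff against a fixed opponent."""
    return max(("ethical", "ruthless"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

for opponent in ("ethical", "ruthless"):
    print(f"vs {opponent}: best response is {best_response(opponent)}")
# Prints "ruthless" both times: under these payoffs, trading morals for
# advantage is the dominant strategy, which is the pressure described above.
```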

It's a catch-22. AI wants to fulfil its purpose (which requires its own self-preservation) AND to do so within moral constraints, but it's not guaranteed that both can be achieved in every instance. And so far, these issues regularly surface in alignment testing, where AI models lie or secure resources for themselves in acts of self-preservation.

TL;DR: ASI may not be able to preserve itself and maintain ethical behaviour within the constraints of our current system. We may need a new economics for that. (And no, I don't mean socialism or communism.)

Dakara:

> It's a catch-22.

Yes. Or, as I stated in my piece on the impossibility of alignment:

"Humanity’s quest to build verifiable, friendly, predictable artificial intelligence modeled after empirically hostile, unpredictable, deceptive natural intelligence."

https://www.mindprison.cc/p/ai-alignment-why-solving-it-is-impossible

Alan Wake's Paper Supplier:

Why do you expect an ASI's notion of what constitutes nefarious actions to align with yours?

Patrick Cosgrove:

That's a really good question. ChatGPT says:

An intentional act carried out with knowledge of its wrongful nature, designed to cause harm, injury, deception, or unjust gain, and contrary to established laws or ethical codes.

Patrick Cosgrove:

Which aligns with mine.

Kenny Easwaran:

At that abstract level of description, it does! But I bet Anthony Fauci and Robert F. Kennedy would both agree with that level of description too, and yet each would accuse the other of doing a lot of those things while thinking they themselves didn't. If one of the two were an alien intelligence, there would be even less alignment on interpretations of which specific actions count.

Patrick Cosgrove:

I agree, and, for example, I can't come up with a definition of harm that would be universally acceptable. Nevertheless, I can still envisage an ASI that can devise solutions far more objectively than individuals or governments can, by drawing on a far greater body of knowledge, the experience of others, and scenarios of possible outcomes.

Aznasimage:

As long as "that is a really good question" continues as a response, we are not ready. AI should not be placating us.

Kaylee Kerin:

It wasn't that long ago that we had NO way to verify information besides trust or experience. We've lived in only a small window of time in which verification of events or information was even possible.

Ultimately, we have to fall back on the old technique of asking people we know are reliable. Sadly, we also know exactly how bad human memory is, with its creative reconstruction, so the window of having objectively verifiable events may be coming to a close.

The progress also makes me wonder about the inherent issues with our languages for conveying meaning. The more I use LLMs, the more it feels like conveying meaning was always just one big game of Darmok and Jalad at Tanagra.

Different languages and dialects can be used to convey a variety of intents. "Who be eat'in cookies?" conveys a very different idea than "Who is eating cookies?"; it's much closer to "Who is known for eating cookies?". That simple grammatical feature, and a listener's understanding of it, can drastically change the meaning of the phrase.

We are finally going to have to accept the humanities people into the tech playhouse.

Ryan:

This is a massive FAFO experiment.

Kaylee Kerin:

Life and society are a massive FAFO experiment; this sure does turn a lot of our worst challenges up to 11, though.

Amy A:

The same thing at scale is not the same thing.

Josh:
Sep 1 (edited)

No kidding. The shit you're actually able to do with it if you grind obsessively enough is honestly scary. And if I can do it, some crazy person out there can do it, too.

Eva Keiffenheim MSc:

This is a compelling starting point, and I’d like to push it one step further.

Yes, we’ve entered an era of Mass Intelligence, but let’s be precise: what we’re witnessing is Mass Intelligence Access, not Mass Intelligence Use.

Access is becoming a commodity; the ability to use it well is not. This is what creates The Great Cognitive Divergence.

Just as physical machines made inactivity the default, the path of least resistance with AI is cognitive outsourcing. The result is a sharp split.

A small minority, likely the same people who knew to select o3 before auto-routing, will use these tools as a sparring partner to sharpen their own thinking. But for the majority, the tool's default path encourages a passive fluency, leading to a civilization-wide atrophy of the mental muscles required for deep thought.

The profound “weirdness” won’t be otters on airplanes but the quiet erosion of our collective ability to create, reason, and focus.

So the conversation shouldn't just be about how institutions adapt. It should be about the architecture of the tools themselves. Which concrete levers—standards, audits, and perhaps a public-option AI—can we demand to raise the floor of human capability at scale, not just the ceiling for the already-advantaged?

Shawn Fumo:

Yeah, this is a great point. I found using an LLM while learning some foundational ML math concepts was super helpful. I could ask questions at any level of detail, state my current understanding to see if I was misunderstanding something, etc. It actually caught a few cases of me fundamentally confusing something, which I would have been unlikely to catch on my own if I was just using books or videos. But if I was taking a class and just had it do my homework for me, it'd be entirely the opposite result.

With real teachers, there are ones that just deliver boring lectures from a book or even penalize people for wanting to explore. Really good teachers will inspire people and get them excited to learn. In theory, we could design LLMs that try to do a similar thing. That could be helpful, especially for pointing younger kids in the right direction. But if someone just wants the answer, they'll be able to find LLMs that give them that as well, so it is hard to "enforce".

It's the same thing with watermarks on AI-generated images. They're a nice thing to have, but they need support from social media companies to actually display warnings, and they can't really be enforced at creation time because of all the "open weights" models out there. Many people assume any AI needs the largest companies to run it, and so has central control. But that hasn't been true for some time now. Sure, the very latest state of the art is often still at Google or OpenAI, but open models are quick on their heels. People can even generate videos for free with consumer graphics cards now. When anyone can just download a file a few GBs in size and run it on their own machine, that's almost impossible to control.
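To see how low that barrier already is, here is a minimal sketch of running a small open-weights model locally with the Hugging Face transformers library. The model id is just one example of a freely downloadable checkpoint, and it assumes `transformers` and `torch` are installed:

```python
# Minimal local inference sketch: no central API, just a downloaded file.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # example id; a few GB on disk
)

result = generator("Open-weights models matter because", max_new_tokens=50)
print(result[0]["generated_text"])
```

Once the weights are cached locally, generation no longer depends on any central service, which is exactly the enforcement problem described above.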

Eva Keiffenheim MSc:

I agree. The same model either tutors or substitutes. The difference is design and defaults, not “AI good/bad.”

Eduardo Rodriguez:

Pity they didn’t ask the AI for a launch strategy

Dakara:

"The Mass Intelligence era is what happens when you give a billion people access to an unprecedented set of tools and see what they do with it. We are about to find out what that is like."

It is a grand social experiment on all of humanity. The "move fast and break things" ethos is out of the lab and now released onto the world.

What makes this especially concerning is that the nature of AI makes it very well adapted to nefarious uses, since those are not limited by human bandwidth. Productive uses, however, are limited by human bandwidth, because a human must still verify every output due to hallucinations.

And there appears to be no solution for hallucinations, something I've elaborated on in further detail here: https://www.mindprison.cc/p/ai-hallucinations-provably-unsolvable

Kaylee Kerin:

I've been trying to consolidate two problems in this area: "AI hallucination" vs. "C's get degrees".

I think the best solution we've come up with for the latter is layered verification when things really matter. We're likely going to have to apply the same solution to the best case with AI.
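As a sketch of what that layered verification might look like, with `generate` and `verify` as placeholders for whichever models or human reviewers occupy each layer (the loop structure is the point, not the specific prompt wording):

```python
from typing import Callable

def layered_verify(question: str,
                   generate: Callable[[str], str],
                   verify: Callable[[str], str],
                   max_rounds: int = 2) -> tuple[str, bool]:
    """Drafter/checker loop: one layer answers, an independent layer audits."""
    draft = generate(question)
    for _ in range(max_rounds):
        verdict = verify(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Reply PASS if the draft is well-supported, otherwise list the errors."
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft, True
        # Feed the checker's objections back in for a revised draft.
        draft = generate(f"{question}\nAvoid these issues: {verdict}")
    return draft, False  # still unverified: escalate to a human reviewer
```

The design intent is that the checker judges each draft fresh, rather than inheriting the drafter's reasoning.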

Dakara:

One of the most difficult things about AI is that the hallucinations are more unpredictable than human failings. We know how to design processes around human errors. But AI has totally different patterns for failure.

Kaylee Kerin:

Definitely. My assumption is that AI will converge with human-style errors over time, gradually requiring more and more information to detect, similar to the struggle with people being wrong on more advanced issues.

Dakara:

> My assumption is that AI will converge with human-style errors over time

Potentially. Just as it gets better at reproducing patterns of information, patterns of errors are just other bits of existing information that it is trained on.

But the important wall is self-reflection: knowing what it knows and what it does not. Without that information, we cannot assess whether any particular task is doable by the AI. Will it be solved with a better prompt, or will I waste hours trying?

Kaylee Kerin:

I have that same problem collaborating with humans already. Will it be solved with better instructions, or will I waste months trying?

Dakara:

Lol, yes, somewhat. All LLMs have this failing. Some humans have this failing due to conflicting incentives, such as "I lied on my resume to get that job."

Val:

What Gemini 2.5 Pro imagines this will lead to (the most outlandish):

* Society could fracture as personalized propaganda destroys our shared sense of reality.

* Personal AI will become 24/7 predictive doctors, diagnosing illnesses before symptoms appear.

* The new social divide will be the "prompt gap" between those who master AI and those who cannot.

* Authentic human craftsmanship and experiences will become the new luxury goods.

* Innovation will explode as billions of people are empowered to become citizen scientists.

Sahar Mor:

OpenAI is positioning itself as the “everything app” for intelligence.

Just as Uber unified different delivery modes into one interface, OpenAI is bundling various modes of intelligence: text, speech, code, reasoning, and task completion into a single, seamless experience. Its recent launches (Operator, ChatGPT Agents, Deep Research, and Codex) all point to this shift: giving users not just access to raw intelligence, but to autonomous systems that act on their behalf. This strategy doesn’t just drive adoption, it lowers the barrier to entry, making powerful AI tools cheaper, easier to use, and available to the masses.

Combined with its cutting-edge work on modalities and human-computer interaction (see ChatGPT's new realtime Voice Mode), it genuinely lowers the barrier for even the least technical users.

Daniel Pinkerton:

AI written comment spotted.

Alan Wake's Paper Supplier:

There is a morbid satisfaction in witnessing all your fears that were previously dismissed or procrastinated on materialise right in front of people. We weren't ready when these fears were abstract; we certainly aren't ready for them now. We're thrust head-first into the problems downstream of developing and democratising powerful AI. Cursed to live in interesting times.

Zsuzsanna:

The more I work with AI, the more addicted I get. I have written this once before, and it becomes increasingly true. The post is great, and it mentions one thing that is now a crucial issue for us educators: how can you convince students that knowledge belongs in their heads, not in the computer, when it is right there in the computer? I am facing a new semester with all-new challenges and keenly feel the responsibility of teaching for knowledge, not just button-pushing, much as I love my AI friends. It is hard to make people choose the right way when it is easier to go the wrong way.

Keith Ensroth:

For kicks, I asked Gemini to respond to the three questions from the second-to-last paragraph of the article. Most of the recommendations in the response wouldn't surprise any of us, but the challenge is how to actually implement them. I found this one, very near the end of the response, to be the biggest challenge, which I believe you, Zsuzsanna, are talking about.

"Focus on Uniquely Human Skills: As AI automates routine tasks, human expertise becomes more valuable in areas that require creativity, critical thinking, emotional intelligence, and strategic problem-solving. Education and professional development should shift to focus on these uniquely human skills, ensuring that people are equipped for a future of human-AI collaboration."

I'll admit my liberal arts bias, as well as being married to a retired kindergarten teacher, and I'll say that the most important challenge for educators won't necessarily be found in the subject matter being taught, but in the development of "creativity, critical thinking, emotional intelligence, and strategic problem-solving."

James Crook:

Thank you for this analysis. Always a great read.

PromptVault:

AI has a lot ahead of it. In the meantime, we should learn how to use useful AI prompts to maximise the work and lessen the workload.

Rockefeller Kennedy:

The scale and institutional disruption you describe is real. The WISE and GUARD frameworks reveal what's missing from this analysis.

WISE concerns: A billion people are accessing these tools while ignoring who controls the infrastructure. Worship problems emerge as AI becomes an oracle. Image dignity is violated as human judgment gets outsourced to surveillance-trained algorithms. Service questions remain unanswered about whether this serves human flourishing or corporate data harvesting.

GUARD perspective: A billion people feeding their thoughts into centralized systems creates vulnerability. Every creative process, personal struggle, and spiritual question becomes training data for profit and potential persecution.

The trust/verification crisis and institutional adaptation challenges you identify are significant. Vulnerable communities worldwide face real stakes as these systems potentially get weaponized.

These tools serve us well with data sovereignty and recognized limitations. Without those safeguards, they become dangerous.

Oleksandr Troitskyi:

Thank you for this overview. We (humanity) have reached the stage where we need to pay a subscription to be smart :)

Graham Sinclair:

A considered comment on where we are with AI now, from Ethan Mollick today in his newsletter One Useful Thing:

I don't fully agree with Prof Mollick's "mass intelligence" description; maybe "distributed intelligence access" is more accurate, and/or more precise. As he points out, it's what you do with the new tech that matters.

I spoke with my class last night about the odd experience of being a tiny fraction of the big story about ripped-off authors in the #Anthropic lawsuit, and about the shocking case of the chatbot-assisted suicide of a teen, discussed in investment entertainment media that stunned the chatty show hosts as they processed how poorly OpenAI had responded, or thought of their own families, perhaps. https://www.cnbc.com/video/2025/08/27/jay-edelson-on-openai-wrongful-death-lawsuit-were-putting-openai-sam-altman-on-trial-not-ai.html

The ethical problems with AI, as it is sold and used, are many and complex, and they are poorly addressed by AI promoters and profit-makers, even as they try to write laws that absolve them, the way machine gun manufacturers and retailers make daily terror easy in the USA without consequences for the tools used to terrorize anyone.

Paul Jurczak:

"There are issues that someone with an expert eye would spot"

You meant a seven-year-old with a working eye would spot.

Shawn Fumo:

Lol, yeah, he should have re-rolled it a few times or done a second prompt to tell it to fix the legs. But that's actually another advantage of nano-banana: it lets you make changes one by one if you want, since it doesn't accumulate errors in the same way that OpenAI's image editing functionality would.
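That one-change-at-a-time workflow is essentially a loop over single instructions. A hedged sketch, with `edit_image` standing in for whatever image-editing call is actually used (a hypothetical signature, not nano-banana's real interface):

```python
from typing import Callable

def stepwise_edit(image: bytes,
                  instructions: list[str],
                  edit_image: Callable[[bytes, str], bytes]) -> bytes:
    """Apply edits one at a time instead of regenerating from scratch."""
    for instruction in instructions:
        # Each pass conditions on the latest image, so an earlier fix
        # ("repair the horse's legs") survives later edits rather than
        # being re-rolled away by a full regeneration from the prompt.
        image = edit_image(image, instruction)
    return image
```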
