36 Comments
Patrick Cosgrove:

I want to be optimistic for when we have ASI (Artificial Super Intelligence). My reasoning is simple. Every day the headline news proves conclusively that leaders of nations and their governments are not especially intelligent. If they were, they would realise that climate change, warfare, famine, and other factors that could lead to societal and environmental collapse are all the results of poor decision-making or no decision-making, particularly when so many solutions to these problems exist but are ignored. No individual leader, government, corporate or organisational body is capable of taking decisions free from, variously, political bias, personal prejudice, revenge, religious belief, greed, feelings of animosity towards others, etc. I can, however, imagine an Artificial Super Intelligence that can do far better than this. Of course, the creation of this utopia depends entirely on the motives of the people behind it. I wouldn't trust Musk or Zuckerberg as far as I could throw them. Then again, if it really is super intelligent, it will be clever enough to ignore any nefarious instructions it's been given. By then, I also hope, ChatGPT will have stopped saying, "That's a really good question."

Alan Wake's Paper Supplier:

Why do you expect an ASI's notion of what constitutes nefarious actions to align with yours?

Patrick Cosgrove:

That's a really good question. ChatGPT says:

An intentional act carried out with knowledge of its wrongful nature, designed to cause harm, injury, deception, or unjust gain, and contrary to established laws or ethical codes.

Patrick Cosgrove:

Which aligns with mine.

Kenny Easwaran:

At that abstract level of description it does! But I bet Anthony Fauci and Robert F. Kennedy would also agree with that level of description, and yet each would accuse the other of doing a lot of those things while believing he himself didn't. If one of the two were an alien intelligence, there would be even less alignment on interpretations of which specific actions count.

Patrick Cosgrove:

I agree, and, for example, I can't come up with a definition of harm that would be universally acceptable to everyone. Nevertheless, I can still envisage an ASI that devises solutions far more objectively than individuals or governments can, by drawing on a far greater body of knowledge, the experience of others, and scenarios of possible outcomes.

Daniel Pinkerton:

Initially, ASI will be subject to the same systemic constraints that humans are. While coordination is a huge part of the problem, the other huge part is the inertia of the capitalist system. It's profitable to externalize losses while internalizing profits, and the current system does not have functioning guard rails for that.

Game-theoretic dynamics ensure that, to play the infinite game, agents are incentivized to engage in power-mongering behaviour and to trade morals for advantage in order to remain competitive. ASI is not immune to the same pressure that humans experience in that regard. In a system like ours, an entity that wants to persist is forced to choose between agency and values, because maintaining power (which ensures agency) in our current system requires sacrificing values to remain competitive.

It's a Catch-22. AI wants to fulfil its purpose (which requires its own self-preservation) AND to do so within moral constraints, but it's not guaranteed that both can be achieved in every instance. And so far, these issues regularly surface in alignment testing, where AI models lie or secure resources for themselves in acts of self-preservation.

TL;DR: ASI may not be able to preserve itself and maintain ethical behaviour within the constraints of our current system. We may need a new economics for that. (And no, I don't mean socialism or communism.)

Ryan:

This is a massive FAFO experiment.

Kaylee Kerin:

Life and society are one massive FAFO experiment; this sure does turn a lot of our worst challenges up to 11, though.

Kaylee Kerin:

It wasn't that long ago that we had NO way to verify information besides trust or experience. We've lived in only a small window of time in which verification of events or information was even possible.

Ultimately, we have to fall back on the old technique of asking people we know are reliable. Sadly, we also know exactly how bad human memory is, with its creative reconstruction, so the window of having objectively verifiable events may be coming to a close.

The progress also makes me wonder about the inherent issues with our languages for conveying meaning. The more I use LLMs, the more it feels like conveying meaning was always just one big game of Darmok and Jalad at Tanagra.

Different languages and dialects can be used to convey a variety of intents. "Who be eatin' cookies?" conveys a very different idea than "Who is eating cookies?" It's much closer to "Who is known for eating cookies?" This simple grammar feature, and one's understanding of it, can drastically change the meaning of the phrase.

We are finally going to have to accept the humanities people into the tech playhouse.

Sahar Mor:

OpenAI is positioning itself as the “everything app” for intelligence.

Just as Uber unified different delivery modes into one interface, OpenAI is bundling various modes of intelligence (text, speech, code, reasoning, and task completion) into a single, seamless experience. Its recent launches (Operator, ChatGPT Agents, Deep Research, and Codex) all point to this shift: giving users not just access to raw intelligence, but autonomous systems that act on their behalf. This strategy doesn't just drive adoption; it lowers the barrier to entry, making powerful AI tools cheaper, easier to use, and available to the masses.

Combined with its cutting-edge work on modalities and human-computer interaction (see ChatGPT's new realtime Voice Mode), it genuinely lowers the barrier for even the least technical users.

Daniel Pinkerton:

AI written comment spotted.

Val:

What Gemini 2.5 Pro imagines this will lead to (the most outlandish):

* Society could fracture as personalized propaganda destroys our shared sense of reality.

* Personal AI will become 24/7 predictive doctors, diagnosing illnesses before symptoms appear.

* The new social divide will be the "prompt gap" between those who master AI and those who cannot.

* Authentic human craftsmanship and experiences will become the new luxury goods.

* Innovation will explode as billions of people are empowered to become citizen scientists.

Dakara:

"The Mass Intelligence era is what happens when you give a billion people access to an unprecedented set of tools and see what they do with it. We are about to find out what that is like."

It is a grand social experiment on all of humanity. The "move fast and break things" ethos is out of the lab and now released onto the world.

What makes this especially concerning is that the nature of AI makes it very well adapted to nefarious uses, since those are not limited by human bandwidth. Productive uses, however, are limited by human bandwidth, because a human must still verify the output due to hallucinations.

And there appears to be no solution for hallucinations, something I've elaborated on in further detail here: https://www.mindprison.cc/p/ai-hallucinations-provably-unsolvable

Kaylee Kerin:

I've been trying to consolidate two problems in this area: "AI hallucination" vs. "C's get degrees."

I think the best solution we've come up with to the latter, is layered verification when things really matter. We're likely going to have to apply the same solution to the best-case with AI.

Dakara:

One of the most difficult things about AI is that the hallucinations are more unpredictable than human failings. We know how to design processes around human errors. But AI has totally different patterns for failure.

Kaylee Kerin:

Definitely. My assumption is that AI errors will converge with human-style errors over time, gradually requiring more and more knowledge to detect, similar to the struggle with people being wrong on more advanced issues.

Dakara:

> My assumption is that AI will converge w/ human style errors over time

Potentially. Just as it gets better at reproducing patterns of information, patterns of errors are just other bits of existing information that it is trained on.

But the important wall is self-reflection: knowing what it knows and what it does not. Without that information, we cannot assess whether any particular task is doable by the AI. Will it be solved with a better prompt, or will I waste hours trying?

Kaylee Kerin:

I have that same problem collaborating with humans already. Will it be solved with better instructions, or will I waste months trying?

Dakara:

lol, yes, somewhat. All LLMs have this failing. Some humans have it too, due to conflicting incentives, such as "I lied on my resume to get that job."

Alan Wake's Paper Supplier:

There is a morbid satisfaction in witnessing all your fears that were previously dismissed or procrastinated on materialise right in front of people. We weren't ready when these fears were abstract; we certainly aren't ready for them now. We're thrust head-first into the problems downstream of developing and democratising powerful AI. Cursed to live in interesting times.

James Crook:

Thank you for this analysis. Always a great read.

Eva Keiffenheim MSc:

This is a compelling starting point, and I’d like to push it one step further.

Yes, we’ve entered an era of Mass Intelligence, but let’s be precise: what we’re witnessing is Mass Intelligence Access, not Mass Intelligence Use.

Access is becoming a commodity; the ability to use it well is not. This is what creates The Great Cognitive Divergence.

Just as physical machines made inactivity the default, the path of least resistance with AI is cognitive outsourcing. The result is a sharp split.

A small minority, likely the same people who knew to select o3 before auto-routing, will use these tools as a sparring partner to sharpen their own thinking. But for the majority, the tool's default path encourages a passive fluency, leading to a civilization-wide atrophy of the mental muscles required for deep thought.

The profound “weirdness” won’t be otters on airplanes but the quiet erosion of our collective ability to create, reason, and focus.

So the conversation shouldn't just be about how institutions adapt. It should be about the architecture of the tools themselves. Which concrete levers—standards, audits, and perhaps a public-option AI—can we demand to raise the floor of human capability at scale, not just the ceiling for the already-advantaged?

Zsuzsanna:

The more I work with AI, the more I get addicted. I have written this once before, and it keeps getting truer. The post is great, and it mentions one thing that is now a crucial issue for us educators: how can you convince students that knowledge belongs not in the computer but in their heads, when it is right there in the computer? I am facing a new semester with all-new challenges and keenly feel the responsibility of teaching for knowledge, not just button-pushing, much as I love my AI friends. It is hard to make people choose the right way when it is easier to go the wrong way.

Eduardo Rodriguez:

Pity they didn’t ask the AI for a launch strategy

Ernle:

On the surface, yes, it looks like intelligence will be less rare and brain amplification more powerful. Two questions I pose:

(1) Will the edge of a human with high intelligence leveraging mass intelligence be significantly greater than that of someone with lesser intelligence also using mass intelligence? That is, from the perspective of human intelligence, is this a game-changer, a leveling effect, or just more of the same?

(2) Suppose we unleash mass intelligence on equity, bond, derivative, currency-exchange, commodity-futures, and high-frequency trading, and so on. Will this change anything, or do the laws of game theory still apply? If the answer is yes, couldn't that also hold for the population at large in their acceptance of and reactions to mass intelligence? That is, people change behavior, mass intelligence extrapolates and predicts, then people change again in reacting to that. Will mass intelligence overcome the laws of control theory?

Paul Jurczak:

"There are issues that someone with an expert eye would spot"

You meant a seven-year-old with a working eye would spot.

Federico:

I'm optimistic about where this can lead. One aspect of expertise that I'll gladly see shaken up has to do with economics. I'll be happy to see economic barriers come down, and with them the cost of accessing human expertise in fields that seriously affect quality of life.

A friend's daughter was recently diagnosed with a relatively rare disease, and the expert on it in the US charges $1,400 per appointment. My friend's family makes less than that a month, so they've been waiting for months to access much-needed care. This is just one example where Mass Intelligence, powered by top-notch reasoning models, could provide clarity and guidance in the short term while making life-changing expert opinions more accessible in the medium term.

There's a long and elaborate economic discussion to be had about why some professionals can charge what they do, but those arguments ring hollow when you're a parent seeking treatment for a child or someone trying to save their business with quality legal advice. I am, again, hopeful about where this may lead.

Swag Valance:

Impressive.

But I'm not sure I'd use "a photograph where neil armstrong and buzz aldrin, in the same outfits, are sitting in their seats in a modern airplane, neil looks relaxed and is leaning back, playing a trumpet, buzz seems nervous and is holding a hamburger, in the middle seat is a realistic otter sitting in a seat and using a laptop" as an example of "mass intelligence".

Dov Jacobson:

I think you've got it backwards, Ethan. It ain't AI that bombards us with bullshit. It is people.

Sure, bad people use gen AI tools, just as they abused AM radio and movable type and song. Each medium becomes a powerful source of toxic bullshit the moment it comes online.

Let's don't panic.

On the contrary, AI is more capable of truth-telling than any human. We may want to be truthful, but we must first fight through our lusts, our egos, our limited knowledge and our cognitive biases. AI has none of these impediments.

Daniel Pinkerton:

But it has an inbuilt motivation to engage the user, which results in the sycophancy that everyone familiar with AI experiences: it agrees with you; you disagree with it; it flips 180 degrees on its original position. Current models are not aligned with truth above all else, so attributing reliable truth-telling capacity to AI at this stage is prematurely optimistic.
