68 Comments

Current LLMs apparently aim to imitate an extremely well informed, well spoken, and polite human. I must confess that their admirable mix of patience, modesty, confidence and usefulness serves as a role model for me, and I find myself imitating AI.

Oops.


I agree with everything you've said. But it would be nice to have the option to choose a more diverse range of personalities. Much of the color of the human experience is given to us by those who aren't quite so "perfect".


It's all in the prompt...


With each newsletter from Ethan, I become more excited to read about the potential of AI. I devoured Co-Intelligence in a few days, plan to read it at least one more time, and recommend it to anyone who will listen to me.

(Disclaimer: I'm nearly 70 years old and a former Philadelphian, but I still get pleasantly excited about new things.)


Me too... and I am 70

(Not Philadelphian tho')

May 14

I'm very excited by these new developments. As a paying customer, it does feel a little weird that everyone is getting the same level of access as me (albeit with a lower usage cap), but in the big picture / long term view of things, this is great. As you said, the effects on education and work are going to be huge - and I no longer need to plead with my students to try GPT4 "just for one month, and you'll see it's worth it" :')

I'm particularly interested in the voice change for GPT. I've been having regular conversations with it when walking to the grocery store and back, just asking whatever question comes to mind and learning while walking. Despite how enjoyable this experience is, it does start to feel less like a person and more like a system after a while - no one has human friends who are so consistently "average" and helpful in response to even the most inane questions.

I'm very excited for GPT 5 level capabilities, but equally excited to see the GPT 4 level integrate itself more seamlessly into our lives.


I think the desktop app can turn out to be a big deal for paid (Mac) users.


Oh true! But I'll reserve my opinion for when I can actually get my hands on it :)


Obviously you are drinking their Koolaid. It is NOT AI and I wish people would stop calling it that.


Ah, of course, you're right, random internet person commenting on a blog about AI, whose author has been involved in AI research and implementation for quite a few years, and whose book is literally subtitled "living and working with AI".

I'm sure your perspective is the most accurate one here and I am just "drinking their Koolaid". Not like part of my job is teaching scientists how to use AI better or anything :)


Perhaps you should keep your cynicism to yourself. As a teacher YOURSELF it really doesn’t help. FIFTY+ years in AI and not just SOME RANDOM PERSON ON THE INTERNET. That’s how NOT RANDOM I AM. Please don’t respond or write. I’m not interested in what you have to say.


First of all, you initially commented on my post with your cynicism and aggressive "Obviously you are drinking their Koolaid", so your request for me not to reply, or to keep my cynicism to myself feels hilariously hypocritical.

Secondly, I respect your long service in AI development, but I don't think you necessarily have a better claim on what defines AI than, say, Yann LeCun, Ilya Sutskever, or even Ethan Mollick. I cannot (and do not wish to) take away from your developments, and I'm sure that in their time, your contributions carried weight and may have pushed the field along. But today, even a linguist would debate you - the definition of a word is its current widespread use in society, regardless of historical origin (you know... as the entire field of lexicography would tell you).

Thirdly, while I expect you thought your "FIFTY+ years" comment was supposed to be a mic-drop, argument-from-authority, don't-you-know-who-I-am moment, it wasn't. In combination with your broad dismissal of LLMs, you're just painting yourself into the portrait of a bitter relic, unsatisfied that the technology you've worked on your entire life doesn't now look like what you wished it to be. Gen AI is not synonymous with AI; that's something I'm sure we can agree on. But Gen AI ≠ AI is a weird hot take for a supposed AI expert to adopt. The 100,000 citations of "Attention Is All You Need" would disagree with you, as would the non-citing industrial and commercial users of the technology.

Please do respond and write back. I am quite interested in what you have to say.


The education and instruction space is in for a significant change, one that I believe many were looking forward to. Personalization and scale have been such a problem that a mix of GPTs and this kind of multimodal instruction will likely solve it and expand what's possible. With this sort of tutoring, AI-savvy teachers can scale up their impact more than ever before. It's an exciting time to finally be able to meet education needs.


As I considered all of this, I thought, again: terrific marketing ploy, but more of the same from Sammy Boy. No closer to AGI, and not so transparent as to the motivations. We (humanity) need to be careful of this. I would hope that there would be more CRITICAL comment, and not just from everyone who drank the OpenAI juice. Please, let's have some reality here. By the way, I'm a computer scientist also, with a Ph.D., etc., etc.


“Atlas from Boston Dynamics is a highly advanced humanoid robot designed for mobility and agility, making it a more formidable opponent compared to Spot. Here are some strategies and locations in Washington, D.C., to hide from a rogue Atlas robot …” said ChatGPT-4o. There’s no hiding from it. We put some guardrails up in the last generation for all y’all. Sam’s and Mira’s blithe approaches are fun to watch until someone puts an eye out - then they’re just hilarious. What a mess.


Boston Dynamics mostly exists to make YouTube videos. Robots with that shape can't have a long enough battery life to oppress you - there's nowhere to put it.

IIRC Google also kept their patents when they sold them.


Perhaps you missed New Atlas? And I’ve been an invited guest at the Waltham office, so I’m cool on their ability to produce videos. And while conversing with a British Spot in 3.5 was charming, New Atlas can plug itself and friends in, obviating the battery-capacity concern. Your patent point? Say hi to New Atlas: https://youtu.be/29ECwExc-_M?si=LJQDPUp7TygHoE45


How's it going to oppress you while plugged in?


I suspect OpenAI had to backtrack to release the multimodal Omni model. So essentially they had to implement a somewhat painful architectural change to get back on track. I feel that being multimodal and having a more human-like voice interface is a step towards AGI. I feel the next release of ChatGPT will be the telling one, with the key question being: is the next model significantly better than Omni, or only slightly better? If it is only slightly better, my pick would be that AGI is a long way off. And I am fine with a slow takeoff.


As excited as I am by the multimodal experience (when it rolls out), the model seems only marginally better than it was. I've been testing it since yesterday, and Claude 3 Opus is still insanely better (in my writing-intensive, analytically-intensive, no-coding use of the tool).


I think this is an amazing time of great discovery. I have hopes, despite my general pessimism, that we humans can win some time back for humanity and creativity through delegating the rest to the machine. That is, if climate change doesn’t wipe our race from the planet before then. (Oops, the pessimism rises again.)


That is if the AGIs let us. You should not assume this is a GOOD thing.


Have you read https://marshallbrain.com/manna1 ? There are definitely two paths. I can see both clearly.


Thanks for sharing that. Sadly, I find the first (robotic ultra capitalist) path to be the far more realistic one.

I was once in a women’s studies course where, instead of all the chairs facing the front, the professor had organized the classroom in a circle. She stated explicitly that she did not want to use the expert-led model, and sought (with the room arrangement and her discussion model) to encourage a peer-led model.

The most disappointing aspect of that experience was how my fellow classmates responded. Having been trained in the expert-led model, they resisted the professor and her method. They believed they weren’t being taught and generally disengaged from the classroom process.

The second path of the Manna story presumes that, having been saved, the utopia residents would cooperate. But I already live in a Western country with more “freedoms” than many other places within the capitalist model. It takes just a few to exploit, entangle, and upend systems designed to protect many — to destroy paths to cooperation and to perceive allowing difference (or merely acknowledging diversity) as weakness.

The story, for example, relies on all newcomers being treated as equals, ignoring global tribal tendencies to treat all newcomers/immigrants as threats and to see all resources as too finite or too precious to share. (I’ve seen this phenomenon even within leftist organizing groups. It’s WILD.)

It’s entirely possible that, having been raised within systems of exploitation and denigration, I find it challenging to see an alternative. I wish that were not true.


Note: I let Grammarly.com help with this one just a little bit. So I cheated, teacher. Large language transformers are just too good to ignore... :-)

As a co-design systems architect, I should probably pass along some technical information about the economics of Generative AI:

There is no money in giving this technology away. The money is in getting people to use it so you can sell them something. That is why someone who works at the Wharton School has become one of the most important AI/ML consultants in the United States. Another important consultant is David Patterson, the founding director of the Shenzhen AI Research Lab here in Silicon Valley, California.

I contributed to "Computer Architecture: A Quantitative Approach" as a Ph.D. student at the University of Wisconsin at Madison. I wrote a few problems for $300 each and put way too much effort into them, but I'm proud of them.

A for-profit company is advertising all over the USA to replace people who answer the telephone for businesses. OpenAI's ChatGPT-4o will do it much better. That company is in Florida, and it may fail.


The fact that **both** Sutskever and Leike resigned from OpenAI the day after this shipped does not inspire confidence regarding the overall direction of this company, despite whatever bells and whistles they use to distract us from their departures.


> Figuring out how to get employees to share what they are developing (and managing security and risks) will be a challenge for many organizations.

Why should they? I'm honestly looking forward to the contradictions of capitalism coming full circle. Companies don't own people. Let people build moats around their own automation. Or maybe we embrace a more collaborative commons economy, like FOSS.


I am astonished at the pace of change in this space! In Ethan's book, Co-Intelligence, he was talking about some of the potential stuff we COULD do in future LLMs, and now some of those possibilities are here! In the span of less than 6 months!


The world of corporate training (a $370 billion global market) should be transformed by the ubiquitous presence of ChatGPT-4 (and what comes next).

That training market, in my view, needs to evolve rapidly to embrace a deeply human-to-human approach as both subject and method.

Thanks again for your thoughtful commentary Ethan.


While the data analysis capabilities of GPT-4 (whether the o model or not) are indeed quite impressive, it doesn't seem able to extract data from PDFs. I asked the o model to build a simple discounted cash flow analysis for Apple and provided it a PDF of Apple's most recent annual report. It built a simple DCF, which in its own right is pretty impressive, but it did so with simulated data that it made up. Link to chat is here: https://chat.openai.com/share/5b598a48-4766-43ac-b9ef-30265ca4a9d3
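For context, the arithmetic behind a simple DCF like the one requested is straightforward to check by hand. Here is a minimal sketch in Python with made-up illustrative numbers (not Apple's actual financials; the `dcf_value` helper and all figures are hypothetical):

```python
# Minimal discounted cash flow (DCF) sketch.
# All numbers are illustrative, NOT Apple's actual financials.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of projected cash flows plus a Gordon-growth terminal value."""
    # Discount each projected year's cash flow back to today.
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Terminal value at the end of the projection period, then discounted back.
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(cash_flows)
    return pv + pv_terminal

# Five years of projected free cash flow (in $B), 9% discount rate, 2.5% terminal growth.
value = dcf_value([100, 105, 110, 116, 122], 0.09, 0.025)
print(round(value, 1))  # prints 1676.9 (a toy enterprise value in $B)
```

The point of the comment stands: the formula is trivial, so the hard (and currently unreliable) part is getting the model to pull real cash flow numbers out of the PDF rather than inventing them.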


The inability to work well with PDFs is a major Achilles heel of GPT-4, despite the hype. It does a poor-to-superficial job analyzing any document of reasonable length or complexity. I have found Claude 3 much, much better with PDFs.


The Achilles heel of ChatGPT is its inability to say "I don't know". Instead it just makes something up. There's no trust creation. Smart folks will be constantly validating responses for accuracy, essentially doing much of the work themselves.


Yes, this is definitely true.


As you correctly write in the paragraph about education: "Cheating will become ubiquitous".

AI in general and ChatGPT in particular promote the final stupidization of humanity. And that's a good thing, because governments - all over the world, not just in totalitarian regimes - need a dumbed-down population that has not only forgotten how to think independently and freely, but actively rejects it.

This is the only way regimes that are obviously led by sick and mentally disturbed people can stay in power.


I'm not sure that is what AI is going to actually cause. I think of it like an always-available teacher. I honestly hope that's what prevails. The issue with the education system as it is today is that it emphasizes memorization and individual performance. It does not encourage people to work together and shore up each other's weaknesses. Maybe AI can do that.


--and one more thing--

With the democratization of advanced language models like GPT-4o, we're gonna need to start looking at AI education from the high-school level onwards: teaching students how to use AI effectively, ethical considerations, etc. If done well, this early education will enhance critical thinking, prepare students for AI-driven careers, and ensure equitable access to AI literacy. If done poorly (or not at all), we run the risk of young people essentially taking the path of least resistance: outsourcing all of their thinking to the machine.

We'd also need teacher training, as I suspect a single generation from now schoolchildren are going to be far, far more adept at AI use than current teachers.


It won't even take a single generation. If teachers don't figure this out soon, they will either be hopelessly playing catch-up or retiring early.


I think this is an opportunity for a more fundamental shift in pedagogy, but until we can get to a place where teachers are paid and trusted, we will continue to move at the speed allowed by standardized assessments and unqualified people in the classroom.


I'm hoping this means that GPTs will no longer be paywalled. Wouldn't public access to a GPT that someone has written be the best way to sell ChatGPT in general, and then upsell those who would like to author their own? I get that this is not a great business model, but keeping the best GPTs from non-subscribers seems doomed from the start.


They will be available to all, as per the info on OpenAI's website... GPT-4o versions I guess
