I think we’re quite far from autonomous intelligent robots, but quite close to disembodied white-collar AIs that are better than 95% of people at basically all white-collar jobs (including soft skills like marketing, customer interaction, empathy, and pleasantness). If I think about that for just a little, it’s both very scary and melancholy-inducing. A lot of things we currently assume (large population = big GDP, the importance of human capital, of education and intelligence generally) may well be voided within, say, 5 years. And I’m not sure we as a society are ready for that. It will be interesting, anyway!
I'm a qualitative researcher with a lot of experience who works on identifying lead customers to create product-market fit (PMF). The result of Ethan's prompt was the first piece of LLM content that impressed me. It was all stuff I know, but it's not basic, and it sparked a number of content ideas while I was reading it.
Ethan also knew what to ask, and knew how to know what to ask.
I'm not worried AI is coming for my job. My whole point is that my value comes from my weirdness (which is pretty high even in comparison to weird people). My skills are unusual but not weird. Where and how I deploy them, is. This is a common pattern among other indie consultants.
Nobody will build a weird AI because it's not worth it. They think in terms of scale (which is the definition of not weird). Look, humans: the human future is small. There's just a lot more of it.
I think one issue that will come up in the medium term is that as certain segments of the elite and government say “just learn a trade, bro” (like people used to say “just learn to code, bro”), people will take them seriously and flood the trades, or anything else that is temporarily protected (government jobs, jobs behind licensing requirements), and thereby drastically lower market-rate salaries in those areas. That angers everyone: the current workers in those areas, the new workers who expected higher salaries, and so on. There is no satisfactory equilibrium if 50% of workers, especially the better-paid, better-educated, easier-lifestyle 50%, lose their jobs and are left scrounging for scraps.
These folks are already not my customers :(
We are seeing tremendous interest in trades, mostly because of the inadequately publicized Inflation Reduction Act. At least some of the falloff in college enrollment is based on aggressive recruiting by onshoring trades...and they remain desperate for workers. A saturation point definitely exists, but the way the Act is set up, it is a long way off IMO. I'm currently working with a guy who's focused on this problem. It is a much bigger deal than most of my circle realizes.
I wish this meant that I would have to completely overhaul my thesis research course for my graduate students. They still need a lot of help figuring out how to get the most out of basic chatbots. I was surprised! I thought they would be all over it... A custom GPT Thesis Research Tutor helped. If my students suddenly figure all this stuff out and leap ahead of me, I'll be happy to rework the assignments so they continue to learn critical thinking and interpretation skills. It will be a good challenge for me to do so!
Is the tutor available to all, or is it your proprietary creation? Thx.
Thanks for asking, Rob! It's very specific to the Interaction Design thesis research course I teach at the School of Visual Arts. So yes, my proprietary creation. I started it with Ethan's AI Tutor Blueprint GPT, which you can take for a spin yourself: https://chatgpt.com/g/g-UnO5np1uO-ai-tutor-blueprint
(Here's asking forgiveness from Ethan for sharing some of your book pre-order content publicly here... 😉)
Ethan, I just can't thank you enough for your thoughtfully written, insightful posts. I was just mulling over my impressions of an interview in which Karina Nguyen from OpenAI (and previously Anthropic) discusses her work on Claude, ChatGPT Canvas and Tasks, and the new AI interaction paradigms for human-computer collaboration: https://www.latent.space/p/karina
It's so helpful to listen to a software developer who is deeply enmeshed in the design tradeoffs of AI and their impact on user interface design. She makes a number of noteworthy comments toward the end about the evolution of reasoners toward entirely new ways of interfacing with the Internet, which has become a minefield of wasted time and torturous navigation tricks.
She describes a shift from the current app-centric model to a task-centric model, where the interface and functionality generate themselves based on what you're trying to accomplish. The key difference is that instead of you adapting to the computer's interface, the computer would adapt its interface to you.
Let me break this down another way. This shift would mean you don't need to remember which apps do what, you don't switch between multiple interfaces, and you use natural language commands instead of menu navigation.
And yes, I know this isn't here now. But it's a vision I can get behind for greater privacy and less exposure of 'me' on the Internet.
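To make that task-centric idea concrete, here is a minimal sketch of what such a layer might look like. Everything in it is hypothetical and invented for illustration (classify_intent stands in for an LLM call, and the widget names are made up); it is not anything described in the interview.

```python
from dataclasses import dataclass, field

@dataclass
class TaskIntent:
    goal: str                                   # e.g. "book_flight"
    slots: dict = field(default_factory=dict)   # extracted parameters

def classify_intent(utterance: str) -> TaskIntent:
    """Stand-in for an LLM call that maps free text to a structured task."""
    if "flight" in utterance.lower():
        return TaskIntent("book_flight", {"destination": "Lisbon"})
    return TaskIntent("unknown")

def generate_interface(intent: TaskIntent) -> list[str]:
    """Compose only the controls the task needs, instead of launching an app."""
    widgets = {
        "book_flight": ["date_picker", "destination_field", "price_slider"],
        "unknown": ["free_text_prompt"],
    }
    return widgets[intent.goal]

# The user states a goal; the interface assembles itself around it.
print(generate_interface(classify_intent("I need a flight to Lisbon next week")))
# -> ['date_picker', 'destination_field', 'price_slider']
```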
I'm still confused about the disappearing incentives for publishing anything on the open web.
If bots do all the searching, there's no ad revenue. This means that all the good content becomes paywalled. This means the bots don't have quality sources to mine.
Surely the fact that this can't search all the latest academic studies means it's inherently limited.
The other day I started wondering whether there is a future in “ads” of the sort that lure bots to click through, rather than luring humans.
There's a whole future of nefarious tricks and exploits up ahead...
This is absolutely going to be a reality in the future.
The implications of OpenAI's Deep Research extend far beyond its name's academic connotations. This capability represents a fundamental shift in how we interact with digital services, with immediate implications for e-commerce.
Whether you're looking for a laptop with specific technical requirements, finding the perfect gift for your gadget-loving friend, or booking a customized vacation package, Deep Research enables AI agents to conduct comprehensive market analysis aligned with nuanced user preferences. When combined with OpenAI's Operator, this creates a powerful paradigm shift in consumer behavior - moving from manual browsing to delegated, AI-driven decision-making. This transformation presents a strategic dilemma for e-commerce platforms: their business models rely on human traffic driving advertising revenue and promotional engagement, yet resisting AI agents risks losing market share in an increasingly automated commercial landscape.
This evolution raises critical questions about platform adaptation, revenue models, and the future of digital commerce, and no one is waiting for the answers. This is the new internet: https://www.aitidbits.ai/p/agent-responsive-design
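As one hedged illustration of what "agent-responsive design" could mean in practice, here is a toy Python server that hands AI agents a machine-readable catalog while humans get the usual ad-supported page. The User-Agent check is a naive placeholder for whatever agent-detection signal platforms eventually settle on, not a real technique.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRODUCTS = [{"name": "Laptop X", "ram_gb": 32, "price_usd": 1299}]

class StoreHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "").lower()
        if "bot" in ua or "agent" in ua:        # naive, hypothetical agent signal
            body = json.dumps({"products": PRODUCTS}).encode()
            ctype = "application/json"          # structured catalog for agents
        else:
            body = b"<html><body>Ad-supported storefront for humans</body></html>"
            ctype = "text/html"                 # the usual human-facing page
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StoreHandler).serve_forever()
```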
If machines get better at thinking, analyzing, and problem-solving, what happens to human expertise? Do we start relying on AI so much that we lose the ability to question and think critically ourselves?
Of course! History offers a continual example of cultural production becoming ever more widely distributed and being diluted in the process--dilution is often how it gets distributed in the first place. You generally don't notice the shift until middle age or later.
To cite one example, an audiobook is no substitute whatsoever for a printed text, especially an important or difficult one. (LPT: if you can drive and pay sufficient attention to the text being read, you're doing it wrong.) The number of people listening to audiobooks has exploded, but general understanding has decreased, and the loss is being normalized as "I can multitask" or "I don't have time." That's what the dilution looks like.
In simple words, evolution baby? :D
The thing about evolution we tend to ignore is how many species get bulldozed in the process...
What we're going to see evolve is a growing divide between the fixed mindset human and the growth mindset human.
Fixed mindset human is going to keep losing market share to variations of these advances in generative intelligence and people using it.
This is going to be much more pronounced in the knowledge worker fields, and not as pronounced in the artisan and services fields.
At some point soon-ish, this will get commoditized, and it will be the top 2-5% who really separate themselves from the herds.
New business models will emerge to capture the full potential of this super class of creatives across the entire spectrum of the economic landscape.
So, the world gets flooded with high-class research that not many can understand? Or do we outsource understanding to the same machines?
My feeling is that the demand for high-level human reasoning is increasing, and this individual process cannot be outsourced.
Roles switch to moderation, oversight and yes, making sense of it in a way that the ultimate consumer cares.
For us to moderate and oversee competently don't we need to understand the material at some level?
Absolutely! Thrilled that you get that. That's exactly why, in our chosen field, anti-financial crime, we are investing in building a community of skilled professionals with verifiable skills. More here: https://www.clientfabric.com/cddp
Thank you for your reflections on Deep Research. After spending a few days with OpenAI’s latest model, I was struck by how truly transformational it feels—without requiring the attainment of AGI. For organizations that are constantly pushed to innovate, refine, and act on ambitious ideas—yet often lack the resources to fully execute them—this seems like a missing piece. It’s a powerful enabler for turning intellectual capital into impact. Now, if only there were a seamless way to export reports into Word documents!
Thanks, Ethan, for another excellent article and for sharing your insights.
I wonder what would happen if we added something like this to the prompt: "Identify some of the most important research papers, and if they are paywalled, let me know what they are and I will gain access and upload them here. Do not write the article until those have been uploaded."
Anyone doing serious research or associated with a university would have access to many of those sites. I think sooner or later people will upload paywalled articles into deep research agents, and that may result in even higher-quality papers being generated. In the long term I can see partnerships being formed between AI companies and the paywall companies... for a combined, higher subscription fee... This would also eliminate the manual upload of the articles.
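Scripted out, that gated workflow might look something like the sketch below; ask_agent and upload are hypothetical placeholders for a deep research client, not any real OpenAI API.

```python
# Sketch of the gated two-phase workflow: the agent first lists the paywalled
# papers it needs, pauses, and only writes the article after a human with
# library access uploads them. `ask_agent` and `upload` are hypothetical.

GATE_PROMPT = (
    "Identify some of the most important research papers on this topic. "
    "If they are paywalled, list them and stop; do not write the article "
    "until those have been uploaded."
)

def run_gated_research(ask_agent, upload, topic: str) -> str:
    missing = ask_agent(f"{GATE_PROMPT}\n\nTopic: {topic}")  # returns paper titles
    for title in missing:
        path = input(f"Path to a PDF of '{title}': ")        # human fetches access
        upload(path)                                         # add to agent context
    return ask_agent("All papers are uploaded. Now write the full article.")
```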
I'm curious to see how well this holds up under more widespread evaluation. Don't get me wrong, this is no doubt impressive and surely useful, but we're already seeing reports of hallucinations and subtle (but important) inaccuracies.
Do you (Ethan Mollick, or others) think that if, say, we merged OpenAI's Deep Research with what you consider the top research archives within your field, we would get answers that are actually as good as researchers trying to apply research to real-world problems? Or would it at least give novel ideas based on the forefront of research, which humans could then consider for either further research or application to real-world problems?
Ethan: In the middle column of the thought process screenshot, this line appears:
"I am exploring a new neighborhood, discovering unique places and flavors."
Is it being metaphorical? Waxing poetic? A (hopefully harmless) hallucination? ....
It regularly expresses “joy” at new ideas
Hi Ethan. Where or how do you view the ‘thought process’?
The first? Didn't Google launch Deep Research last year? I use it every day to prepare expert reports.
They did, and Ethan highlights that at the end of the article. He just focuses on OpenAI's version because he finds it a bit more useful than the Google version.
It's interesting and a little eerie that they both converge on some of the same sources, and many of the same conclusions (if my admittedly tired and fast reading is right). I've seen this convergence between LLMs before.
The trouble is we won't know what they're missing, and however large the models, mechanisms that can produce such similar results seem exploitable. But maybe "personal brand building" always was.
I also wonder how the particular engagement-based feedback of its "process" colors the perception of the results. There are things going on here on a software persuasion level that are also confounds.
What strikes me is the common sense displayed in the reasoning screenshots above. There are many other kinds of "reasoning": critical thinking, analysis, reflection, etc. If I come at this from a Habermasian perspective, "reasons" are explications of validity claims made by one person to another. For Habermas, these "claims to truth" included facticity, sincerity, authority (normative rightness), and intelligibility.
I'll be interested to see how varied these reasoning methods and styles can become. Could models reason "as if" they were cognitive behavioral therapists (a field known for its linguistic interventions, captured in session transcripts, even scored by therapist and patient for efficacy, and formalized in texts and self-help books)? Could models reason from a position of popular appeal, arguing why this and not that musical artist should have won a Grammy?
Impressive as this deep research reasoning is, it will be deemed dull and rote by those in the profession, because the reasoning explication is "nerdy" and obvious to an expert practitioner. AIs would be limited to filling in background and covering bases, and possibly leaned on by new hires and the like as a kind of knowledge-and-insight padding. Really impressive would be models, perhaps even in MoE architectures, capable of furnishing competing reasons, and reasons from both accepted and novel perspectives - models even capable of reflecting on the "bias" intrinsic to the perspective they're using. There's research into "argumentation" and the use of knowledge graphs that might come into play here.
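A crude approximation of that "competing reasons" idea is already possible at the prompt level. A minimal sketch follows, where complete() is a stand-in for any chat-model call and the personas are invented examples, not a claim about how such models would actually be built.

```python
# Minimal sketch: elicit competing reasons by conditioning one model on several
# explicit perspectives, then asking it to name the bias intrinsic to each.
# `complete(prompt)` is a placeholder for any chat-model API call.

PERSPECTIVES = [
    "a cognitive behavioral therapist focused on linguistic reframing",
    "a music critic arguing from popular appeal",
    "a skeptic who attacks the strongest claim in each argument",
]

def competing_reasons(complete, question: str) -> dict[str, str]:
    answers = {}
    for persona in PERSPECTIVES:
        answers[persona] = complete(
            f"Reason about the following as if you were {persona}. "
            f"Then state one bias intrinsic to that perspective.\n\n{question}"
        )
    return answers
```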
Anyways, very interesting signs. Things moving apace.