I think we’re quite far from autonomous intelligent robots, but quite close to disembodied white-collar AIs that are better than 95% of people at basically all white-collar jobs (including soft skills like marketing, customer interaction, empathy, and pleasantness). If I think about that for just a little, it’s both very scary and also melancholy-inducing. A lot of things we currently assume (large population = big GDP, the importance of human capital, of education and intelligence generally) might well be voided within, say, 5 years. And I’m not sure we as a society are ready for that. It will be interesting anyway!
I'm a qualitative researcher with a lot of experience identifying lead customers to create product-market fit (PMF). The result of Ethan's prompt was the first piece of LLM content that impressed me. It was all stuff I know, but it's not basic, and it sparked a number of content ideas while I was reading it.
Ethan also knew what to ask, and knew how to know what to ask.
I'm not worried AI is coming for my job. My whole point is that my value comes from my weirdness (which is pretty high even in comparison to weird people). My skills are unusual but not weird. Where and how I deploy them, is. This is a common pattern among other indie consultants.
Nobody will build a weird AI because it's not worth it. They think in terms of scale (which is the definition of not weird). Look, humans: the human future is small. There's just a lot more of it.
I think one issue that will come up in the medium term is that as certain segments of the elite and government say “just learn a trade, bro” (like people used to say “just learn to code, bro”), people will take them seriously and flood the trades, or anything else that is temporarily protected (government jobs, jobs behind licensing requirements), and thereby drastically lower market-rate salaries in those areas. This will anger everyone: the current workers in those areas, the new workers who expected higher salaries, and so on. There is no satisfactory equilibrium if 50% of workers, especially the better-paid, better-educated, easier-lifestyle 50%, lose their jobs and are left scrounging for scraps.
These folks are already not my customers :(
We are seeing tremendous interest in trades, mostly because of the inadequately publicized Inflation Reduction Act. At least some of the falloff in college enrollment is based on aggressive recruiting by onshoring trades...and they remain desperate for workers. A saturation point definitely exists, but the way the Act is set up, it is a long way off IMO. I'm currently working with someone on this problem. It is a much bigger deal than most of my circle realizes.
I wish this meant that I would have to completely overhaul my thesis research course for my graduate students. They still need a lot of help figuring out how to get the most out of basic chatbots. I was surprised! I thought they would be all over it... A custom GPT Thesis Research Tutor helped. If my students suddenly figure all this stuff out and leap ahead of me, I'll be happy to rework the assignments so they continue to learn critical thinking and interpretation skills. It will be a good challenge for me to do so!
Is the tutor available to all, or is it your proprietary creation? Thx.
Thanks for asking, Rob! It's very specific to the Interaction Design thesis research course I teach at the School of Visual Arts. So yes, my proprietary creation. I started it with Ethan's AI Tutor Blueprint GPT, which you can take for a spin yourself: https://chatgpt.com/g/g-UnO5np1uO-ai-tutor-blueprint
(Here's asking forgiveness from Ethan for sharing some of your book pre-order content publicly here... 😉)
Ethan, I just can't thank you enough for your thoughtfully written, insightful posts. I was just mulling over my impressions of an interview with Karina Nguyen from OpenAI (and previously Anthropic), in which she discusses her work on Claude, ChatGPT Canvas and Tasks, and new AI interaction paradigms for human-computer collaboration: https://www.latent.space/p/karina
It's so helpful to listen to a software developer who is deeply enmeshed in the design tradeoffs of AI and their impact on user interface design. She makes a number of noteworthy comments toward the end about the evolution of reasoners toward entirely new ways of interfacing with the Internet, which has become a minefield of wasted time and torturous navigation tricks.
She describes a shift from the current app-centric model to a task-centric model, where the interface and functionality generate themselves based on what you're trying to accomplish. The key difference is that instead of you adapting to the computer's interface, the computer would adapt its interface to you.
To put it another way: you would no longer need to remember which apps do what, there would be no switching between multiple interfaces, and you would use natural-language commands instead of menu navigation.
And yes, I know this isn't here now. But it's a vision I can get behind for greater privacy and less exposure of 'me' on the Internet.
I'm still confused about the disappearing incentives for publishing anything on the open web.
If bots do all the searching, there's no ad revenue. This means that all the good content becomes paywalled. This means the bots don't have quality sources to mine.
Surely the fact that this can't search all the latest academic studies means it's inherently limited.
The other day I started wondering whether there is a future in “ads” of the sort that lure bots to click through, rather than luring humans.
There's a whole future of nefarious tricks and exploits up ahead...
This is absolutely going to be a reality in the future.
The implications of OpenAI's Deep Research extend far beyond its name's academic connotations. This capability represents a fundamental shift in how we interact with digital services, with immediate implications for e-commerce.
Whether you're looking for a laptop with specific technical requirements, finding the perfect gift for a gadget-loving friend, or booking a customized vacation package, Deep Research enables AI agents to conduct comprehensive market analysis aligned with nuanced user preferences. When combined with OpenAI's Operator, this creates a powerful paradigm shift in consumer behavior - moving from manual browsing to delegated, AI-driven decision-making. This transformation presents a strategic dilemma for e-commerce platforms: their business models rely on human traffic driving advertising revenue and promotional engagement, yet resisting AI agents risks losing market share in an increasingly automated commercial landscape.
This evolution raises critical questions about platform adaptation, revenue models, and the future of digital commerce, and no one is waiting for the answers. This is the new internet: https://www.aitidbits.ai/p/agent-responsive-design
If machines get better at thinking, analyzing, and problem-solving, what happens to human expertise? Do we start relying on AI so much that we lose the ability to question and think critically ourselves?
Of course! History is a continual example of cultural production being increasingly widely distributed and being diluted in the process--this is often how it's distributed in the first place. You generally don't notice the shift until middle age or later.
To cite one example, an audiobook is no substitute whatsoever for a printed text, especially an important or difficult one. (LPT: if you can drive and pay sufficient attention to the text being read, you're doing it wrong.) The number of people listening to audiobooks has exploded, but general understanding has decreased and is starting to be normalized as "I can multitask" or "I don't have time." That's what it looks like.
In simple words, evolution baby? :D
The thing about evolution we tend to ignore is how many species get bulldozed in the process...
What we're going to see evolve is a growing divide between the fixed-mindset human and the growth-mindset human.
The fixed-mindset human is going to keep losing market share to variations of these advances in generative intelligence and the people using them.
This is going to be much more pronounced in the knowledge worker fields, and not as pronounced in the artisan and services fields.
At some point soon-ish, this will get commoditized, and it will be the top 2-5% who really separate themselves from the herds.
New business models will emerge to capture the full potential of this super class of creatives across the entire spectrum of the economic landscape.
So, the world gets flooded with high-class research that not many can understand? Or do we outsource understanding to the same machines?
My feeling is that the demand for high-level human reasoning is increasing, and this individual process cannot be outsourced.
Roles shift to moderation, oversight, and, yes, making sense of it in a way the ultimate consumer cares about.
For us to moderate and oversee competently don't we need to understand the material at some level?
Absolutely! Thrilled that you get that. That's why, in our chosen field (anti-financial crime), we are investing in building a community of professionals with verifiable skills. More here: https://www.clientfabric.com/cddp
Ethan: In the middle column of the thought process screenshot, this line appears:
"I am exploring a new neighborhood, discovering unique places and flavors."
Is it being metaphorical? Waxing poetic? A (hopefully harmless) hallucination? ....
It regularly expresses “joy” at new ideas
Hi Ethan. Where or how do you view the ‘thought process’?
The first? Didn't Google launch Deep Research last year? I use it every day to prepare expert reports.
They did, and Ethan highlights that at the end of the article. He just talks about OpenAI's version because he finds it a bit more useful than the Google version.
It's interesting and a little eerie that they both converge on some of the same sources, and many of the same conclusions (if my admittedly tired and fast reading is right). I've seen this convergence between LLMs before.
The trouble is we won't know what they're missing, and however large the models, mechanisms that can produce such similar results seem exploitable. But maybe "personal brand building" always was.
I also wonder how the particular engagement-based feedback of its "process" colors the perception of the results. There are things going on here on a software persuasion level that are also confounds.
What strikes me is the common sense displayed in the reasoning screenshots above. There are many other kinds of "reasoning," critical thinking, analysis, reflection, etc... If I come at this from a Habermasian perspective, "reasons" are explications of validity claims made by one person on another. For Habermas, these "claims to truth" included facticity, sincerity, authority (normative rightness), and intelligibility.
I'll be interested to see how varied these reasoning methods and styles can become. Can models reason "as if" they were cognitive behavioral therapists (a field known for its linguistic interventions, captured in session transcripts, even scored by therapist and patient for efficacy, and formalized in texts and self-help books)? Can models reason from a position of popular appeal, arguing why this and not that musical artist should have won a Grammy?
Impressive as this deep-research reasoning is, it will be deemed dull and rote by those in the profession, because the reasoning explication is "nerdy" and obvious to an expert practitioner. AIs would be limited to filling in background and covering bases, and possibly leaned on by new hires as a kind of knowledge and insight padding. Really impressive would be models, perhaps even in MoE architectures, capable of furnishing competing reasons, and reasons from both accepted and novel perspectives - models even capable of reflecting on the "bias" intrinsic to the perspective they're using. There's research into "argumentation" and the use of knowledge graphs that might come into play here.
Anyways, very interesting signs. Things moving apace.
If you think "Deep Research" is an overused term now, wait until you hear about Genspark's recent feature, which does largely the same thing, although likely at a much more superficial level than the new OpenAI agent.
It's called...wait for it...Deep Research.
Google "Genspark agents."
It's baffling to me when such pieces of news are presented in an upbeat tone of fascination. This instills nothing but awe and melancholy.
I think this is a common trope with new technology: we see the positive benefits of what we gain, but we don't seem to understand what we lose in adopting it.
Can't it be both exciting and scary at the same time? Though I am very scared, and really trying to find words for my feelings as a 25-year-old starting my career in a new era.
To me, excitement is positive stimulation. This is definitely stimulating, but in a negative way. So, in my mind, something is either exciting or scary. This is more of the latter.
There is a concept that drove a lot of 18th-century art and philosophy, “the sublime”: the feeling of awe-inspiring grandeur that is both exciting and frightening. Like reaching the top of a mountain and seeing the cliff and waterfall on the other side. Like getting out on the open ocean and contemplating its depth. Like Hubble realizing that those “nebulas” are actually other galaxies as big as the Milky Way, far away from our own.
Fair enough - thanks for clarifying.
We're watching innovation in an entirely new field as it rapidly unfolds, the likes of which we haven't witnessed since the dawn of the internet itself. Most human beings who have ever lived saw no significant technological changes in their entire lifetimes. Be excited!
Change in and of itself is neither good nor exciting, unless it leads to a more prosperous and favourable circumstance. I fail to see this particular change leading to a prosperous or favourable circumstance. Can you?
Far more people are beneficiaries of others doing research than earn their own benefit by doing research. If this change leads to more medium quality research, then that could be a bigger benefit to prosperity for these beneficiaries than the harm it causes to researchers. So I definitely can see this change leading to more prosperous and favorable circumstances. Though there are definitely some important “if”s in there.
That's what Og said about the spear
This tool is undoubtedly one of the most amazing I've come across. And yet, to paraphrase your book, this is the worst version of generative AI I will ever use.
A thought-provoking take on the evolution of search! As AI reshapes how we find and process information, it’s exciting (and a little unsettling) to consider what comes next. Great insights!